Limit theorems for a strongly irreducible product of independent random matrices under optimal moment assumptions
Axel Péneau
arXiv:2408.11474v1 [math.PR], 21 August 2024 (http://arxiv.org/abs/2408.11474v1)
§ ABSTRACT

Let ν be a probability distribution over the semi-group of square matrices of size d ≥ 2. We assume that ν is proximal, strongly irreducible and that ν^*n{0} = 0 for all integers n ∈ ℕ. We consider the sequence γ_n := γ_0⋯γ_n-1 for (γ_k)_k∈ℕ an independent sequence with common distribution ν. We denote by sqz(γ_n) the logarithm of the ratio of the two top singular values of γ_n. We show that (sqz(γ_n))_n∈ℕ escapes to infinity linearly and satisfies exponential large deviations inequalities below its escape rate. We also show that the image of a generic line by γ_n, as well as the eigenspace of its maximal eigenvalue, both converge to the same random line l^∞ at an exponential speed. This is an extension of results by Guivarc'h and Raugi. If we moreover assume that the push-forward distribution N_*ν is L^p for N: g↦log(‖g‖‖g^-1‖) and for some p ≥ 1, then we show that the negative logarithm of the distance from l^∞ to H is uniformly L^p over all proper subspaces H ⊂ ℝ^d. Moreover, the logarithm of each coefficient of γ_n is almost surely equivalent to the logarithm of the norm. This is an extension of results by Benoist and Quint, which were themselves quantitative versions of results by Furstenberg and Kesten. To prove these results, we rely neither on the existence of the stationary measure nor on the existence of the Lyapunov exponent. Instead, we describe an effective way to group the i.i.d. factors into i.i.d. random words that are aligned in the Cartan decomposition, with an explicit control over the moments.

§ INTRODUCTION

§.§ Preliminaries

Let 𝕂 = ℝ be the field of real numbers endowed with the usual absolute value |·|. Let d ≥ 2 be an integer. Let End(ℝ^d) ≃ Mat_d×d(ℝ) be the set of square matrices, which we identify with the semi-group of linear maps from ℝ^d to itself. Let ν be a probability distribution on End(ℝ^d) and let (γ_n)_n∈ℕ ∼ ν^⊗ℕ be a random i.i.d[The abbreviation "i.i.d" is short for "independent and identically distributed" but we will prefer to write ∼ν^⊗ℕ as it allows us to specify the distribution in question as well as the set of indices of the sequence.] sequence of matrices. We will denote by (γ_n)_n∈ℕ := (γ_0 ⋯γ_n-1)_n∈ℕ the random walk with step distribution ν. Given a square d × d matrix g, let ρ_1(g) ≥ ρ_2(g) ≥ … ≥ ρ_d(g) be the moduli of its eigenvalues in non-increasing order. If g is not nilpotent, we define the spectral gap or quantitative proximality of g as: prox(g) := log(ρ_1(g)/ρ_2(g)). If g is nilpotent, we define prox(g) := 0. We will always consider measures that are strongly irreducible and proximal in the following sense.

[Proximality] Let E be a vector space and let ν be a probability distribution on End(E). We say that ν is proximal if both of the following conditions are satisfied: ∃ n∈ℕ, ν^*n{γ∈End(E) | prox(γ) > 0} > 0, and ∀ n∈ℕ, ν^*n{0} = 0.

[Strong irreducibility] Let E be a vector space and let ν be a probability distribution on End(E). We say that ν is irreducible if: ∀ f ∈ E^*∖{0}, ∀ v ∈ E∖{0}, ∃ n ∈ℕ, ν^*n{γ∈End(E) | f γ v ≠ 0} > 0. We say that ν is strongly irreducible if: ∀ N ∈ℕ, ∀ (f_1, …, f_N) ∈ (E^*∖{0})^N, ∀ v ∈ E∖{0}, ∃ n ∈ℕ, ν^*n{γ∈End(E) | ∏_i = 1^N f_i γ v ≠ 0 } > 0, and ∀ N ∈ℕ, ∀ f ∈ E^*∖{0}, ∀ (v_1, …, v_N) ∈ (E∖{0})^N, ∃ n ∈ℕ, ν^*n{γ∈End(E) | ∏_j = 1^N f γ v_j ≠ 0 } > 0.
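Both notions are easy to probe numerically on examples. The following minimal sketch (assuming Python with numpy; the example matrices are purely illustrative and not taken from the paper) computes the quantitative proximality, written prox(g) above, from the eigenvalue moduli.

import numpy as np

def prox(g):
    # Quantitative proximality: log of the ratio of the two largest eigenvalue moduli,
    # with the convention prox(g) = 0 when g is nilpotent.
    moduli = np.sort(np.abs(np.linalg.eigvals(g)))[::-1]
    if moduli[0] == 0.0:
        return 0.0            # nilpotent: all eigenvalues vanish
    if moduli[1] == 0.0:
        return np.inf         # a single non-zero eigenvalue: infinite gap
    return float(np.log(moduli[0] / moduli[1]))

# A diagonal matrix with distinct moduli is proximal; a rotation by pi/2 has two
# eigenvalues of equal modulus, hence gap 0.
print(prox(np.diag([3.0, 1.0])))                    # log 3 ~ 1.0986
print(prox(np.array([[0.0, -1.0], [1.0, 0.0]])))    # 0.0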
We will work with real valued matrices but all the results still hold for complex valued matrices or for matrices with coefficients in a ultra-metric locally compact field with the same proofs. We simply need to replace the Euclidean norm with a Hermitian norm or a ultra-metric norm. Without making any moments assumptions, we will study the behaviour of the projective class [γ_n] for all n ∈ and not only asymptotically. All the following results are corollaries of Theorem <ref>, which is the main theorem of this article. In fact Theorem <ref> follows from Lemma <ref> and Theorem <ref>. §.§ Regularity results with optimal moment assumptions Let =. Given two Euclidean spaces (E, ·) and (F,·), we write Hom(E,F) for the vector space of linear maps from E to F, we endow it with the operator norm h ↦h := max_x∈ E∖{0}hx/x. Given E a Euclidean space, we define E^* := Hom(E, ) to be the dual space of E. Given a matrix h ∈Hom(E,F), we write h^* ∈Hom(F^*, E^*) for the composition by h on the right. Let h ∈End(E) := Hom(E, E). Denote by ρ_1(h) the limit of h^n^1/n, we call ρ_1(h) the spectral radius of h. Note that it is the modulus of the maximal eigenvalue of h. We denote by GL(E) the group of isomorphisms of E. Given g ∈GL(E), we write N(g) := logg + logg^-1. Let ν be a probability measure on GL(E) and let (γ_n)∼ν^⊗. A long standing question is whether the sequence of rescaled entries (|(γ_n)_i,j|^1/n)_n∈ converges almost surely for all 1 ≤ i, j ≤ d. We know from <cit.> that if (logγ_0) < + ∞, and without any other assumptions, then the sequence of norms (γ_n^1/n)_n∈ converges almost surely to a finite non-random limit that we denote by ρ_1(ν). Furstenberg and Kesten also show that ρ_1(ν) > 1 when ν is strongly irreducible and supported on the group SL(E). With the above assumptions and when moreover (N(γ_0)^2) < +∞ Xiao, Grama and Liu prove in <cit.> that the random sequence of rescaled entries (|(γ_n)_i,j|^1/n)_n∈ converges almost surely to ρ_1(ν) for all i,j. The following theorem allows ut to get rid of the assumption (N(γ_0)^2) < +∞. [Strong law of large numbers for the coefficients and for the spectral radius] Let E be a Euclidean vector space. Let ν be a strongly irreducible and proximal probability measure on GL(E). There exist constants C, β > 0 such that for all f ∈ E^* ∖{0}, all v ∈ E ∖{0}, for all n∈ and for γ_n ∼ν^*n, we have for all t ∈_≥ 0: (logfγ_nv/|f γ_n v|≥ t)≤ Cexp(-β n) + ∑_k=1^+∞ C exp(-β k) N_*ν(t/k, +∞). Moreover: ∀ t ≥ 0, (logγ_n/ρ_1(γ_n)≥ t)≤∑_k=1^+∞ C exp(-β k) N_*ν(t/k, +∞). We prove this result in Section <ref> Note that (<ref>) implies that for all non-random sequence α_n → 0 and for all 1 ≤ i,j ≤(E), the sequence (|(γ_n)_i,j|^α_n/γ_n^α_n)_n∈ converges weakly in distribution to the Dirac measure at 1, without any moment assumption. Given C, β > 0 and ν a probability measure on GL(E), we denote by ζ^C,β_ν the probability distribution on _≥ 0 characterized by: ∀ t ≥ 0, ζ^C,β_ν(t,+∞) := min{1,∑_k=1^+∞ C exp(-β k)N_*ν(t/k, +∞)}. [Almost sure convergence of the coefficients] Let E be a Euclidean space and let ν be a strongly irreducible and proximal probability measure on GL(E). Let (γ_n)∼ν^⊗. Assume that (logγ_0) and (logγ_0^-1) are both finite. Then for all f ∈ E^*∖{0} and all v ∈ E ∖{0}, we have almost surely: lim_n→∞log|f γ_n v|/n = lim_n→∞logγ_n/n = log(ρ_1(ν)). By Lemma <ref>, for all p ∈ (0,+∞), there exist a constant D_p such that M_p(ζ^C,β_ν) ≤ D_p M_p(N_*ν), where M_p is the p-th moment of a measure. 
Therefore, if we assume that M_1(N_*ν)= < +∞, then M_1(ζ_ν^C,β) < +∞. Then by Theorem <ref>, for all > 0, we have: ∑_n∈(log(fγ_nv) - log(|f γ_n v|) ≥ n ) ≤∑_n∈ C exp(-β n) + ∑_n∈ζ_ν^(C, β)(n , + ∞) ≤C/β + ^-1M_1(ζ_ν^C,β) < +∞. Then by Borel-Cantelli's Lemma, we have n^-1logfγ_nv/|f γ_n v|→ 0 almost surely. Then we can apply <cit.> which tells us that n^-1logγ_n→log(ρ_1(ν)). The following Corollary is about the regularity of the stationary measure. The formulation (<ref>) is analogous to the regularity result for the stationary measure on hyperbolic groups <cit.>. This is also an improvement of <cit.>. Let E be a Euclidean space and V ⊂ E be a proper subspace and let 0 < r ≤ 1. We define 𝒩_r(V) := {l ∈𝐏(E) | ∃ v∈ V∖{0}, ([v],l) < r}. The weak and strong polynomial moments are defined in Definition <ref>. [Regularity of the measure] Let E be a Euclidean vector space. Let ν be a strongly irreducible and proximal probability measure on GL(E). Let C, β be as in Theorem <ref> and let ξ_ν^∞ be the ν-stationary measure as in Theorem <ref>. Then we have: ∀ V ∈Gr(E) ∖{E}, ∀ 0 < r ≤ 1, ξ_ν^∞ (𝒩_r(V)) ≤ζ_ν^C,β (|log(r)|, +∞) Let p >0. If we assume that N_*ν has finite strong L^p moment, then there exists a constant C' such that: ∀ V ∈Gr(E) ∖{E}, ∫_l ∈𝐏(E)|log(𝐏(V), l)|^p dξ_ν^∞(l) ≤ C'. If we assume that N_*ν has finite weak L^p moment, then there exists a constant C' such that: ∀ V ∈Gr(E) ∖{E}, ∀ 0 < r < 1, ξ_ν^∞ (𝒩_r(V)) ≤ C'|log(r)|^-p. Note that by Lemma <ref>, the probability distribution ζ_ν^C,β is in the same integrability class as N_*ν. Inequalities (<ref>) and (<ref>) follow directly from that observation and from (<ref>). We prove Theorem <ref> and (<ref>) from Corollary <ref> in Section <ref>. §.§ Contraction results without moment assumptions Let E be a Euclidean vector space, and let 1 ≤ k ≤(E) be an integer. We denote by ⋀^k E the k-th exterior product of E, the minimal-up-to-isomorphism space that factorises all alternate k-linear maps. It naturally comes with a k-linear alternate map E^k →⋀^k E; (v_1, …, v_k) ↦ v_1 ∧⋯∧ v_k. We endow ⋀^k E with the canonical Euclidean metric, which is characterized by the fact that for all family (v_1, …, v_k) ∈ E^k, one has v_1 ∧⋯∧ v_k≤v_1⋯v_k, with equality when the family (v_1, …, v_k) is orthogonal. Let E and F be Euclidean spaces and let h ∈Hom(E,F). We define the squeeze coefficient or logarithmic singular gap of h as follows: sqz(h) := log(hh/h∧ h). It is the logarithm of the ratio between the first and second largest singular values (counted with multiplicity). Note that then by the spectral theorem, for all square matrix h which is not nilpotent, we have (h^n)/n→(h). [Quantitative estimate of the escape speed] Let E be a Euclidean space and let ν be a proximal and strongly irreducible probability distribution on End(E). Let (γ_n)∼ν^⊗. Write γ_n := γ_0⋯γ_n-1 for all n. Then there exists a positive constant σ(ν)∈ (0,+∞] such that almost surely (γ_n)/n→σ(ν). Moreover, we have the following large deviations inequalities: ∀α<σ(ν), ∃ C, β>0 , ∀ n∈, ((γ_n)≤α n ∪(γ_n)≤α n) ≤ C exp(-β n). Let ν be a probability measure on GL(E). Let (γ_n)∼ν^⊗. Assume that (logγ_0) < +∞. Then we know from sub-additivity <cit.> that logγ_n/n→log(ρ_1(ν)). Let log(ρ_2(ν)) be the second Lyapunov exponent of ν. Again by sub-additivity, logγ_n ∧γ_n/n→log(ρ_1(ν)) + log(ρ_2(ν)). Hence (γ_n)/n→log(ρ_1(ν)) - log(ρ_2(ν)), which is therefore equal to σ(ν) from Theorem <ref>. 
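The quantities appearing above are easy to estimate by simulation. Here is a rough sketch (assuming Python with numpy; the step distribution, i.i.d. Gaussian matrices, is only an illustrative stand-in for ν): it tracks the renormalised product and compares sqz(γ_n)/n, log‖γ_n‖/n and log|f γ_n v|/n for a fixed form f and vector v.

import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 500
f, v = rng.normal(size=d), rng.normal(size=d)    # fixed non-zero form and vector

P, log_norm = np.eye(d), 0.0                     # P tracks gamma_n up to scalar rescaling
for _ in range(n):
    P = P @ rng.normal(size=(d, d))              # illustrative step: Gaussian entries
    s = np.linalg.norm(P, 2)
    log_norm += np.log(s)                        # accumulates log ||gamma_n||
    P /= s                                       # rescaling leaves singular-value ratios unchanged

sv = np.linalg.svd(P, compute_uv=False)
print("sqz(gamma_n)/n      :", np.log(sv[0] / sv[1]) / n)                # ~ sigma(nu) > 0
print("log||gamma_n||/n    :", log_norm / n)                             # ~ log rho_1(nu)
print("log|f gamma_n v|/n  :", (log_norm + np.log(abs(f @ P @ v))) / n)  # same limit

The first printed line approximates the difference log(ρ_1(ν)) - log(ρ_2(ν)) = σ(ν) discussed above.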
A celebrated result by Guivarc'h and Raugi <cit.> asserts that this difference is positive when ν is strongly irreducible and proximal. Without the first moment assumption, the Lyapunov coefficients ρ_1(ν) and ρ_2(ν) do not make sense and in general, the sequence γ_n^1/n does not converge almost surely. Still Theorem <ref> above shows that the limit σ(ν) = lim(γ_n)/n still exists and is a positive constant. In that sense, the first part of the above theorem is an extension of Guivarc'h and Raugi's theorem to all strongly irreducible and proximal probability measures. Moreover, the quantitative estimates (<ref>) are new even in the setting of <cit.>. In fact they are key to our approach. We deduce the qualitative convergence from the strong quantitative estimates. We denote by 𝐏(E) the projective space associated to E the set of vector lines in E. Write [·] : E∖{0}→𝐏(E) for the projection map. We endow 𝐏(E) with the metric: :([x],[y])↦x∧ y/xy. Let h be a square matrix such that (h) > 0. Then the top eigenvalue of h is simple and real. We write E^+(h) for the associated eigenspace which is a real line. [Quantitative convergence of the image] Let ν be a strongly irreducible and proximal probability distribution on End(E). Let (γ_n) ∼ν^⊗. There exists a random line l^∞∈𝐏(E) such that for all α < σ(ν), there exist constants C,β > 0, such that: ∀ v ∈ E ∖{0}, ∀ n∈, (([γ_n v],l^∞) ≥exp(-α n) | γ_n v ≠ 0) ≤ Cexp(-β n), ∀ n∈, ((γ_n) = 0 ∪(E^+(γ_n),l^∞) ≥exp(-α n)) ≤ Cexp(-β n) We moreover show in Proposition <ref> that the set of vectors v ∈ E such that sup_n∈(γ_n v = 0) > 0 is a countable union of proper subspaces of E. We denote this set by (ν). In this proposition, we also show that (γ_n v = 0) is bounded away from 1, uniformly in n ∈ and v ∈ E ∖{0}. Note that if two random lines l^∞ and l'^∞ satisfy (<ref>) then we have l^∞ = l'^∞ almost surely. We define ξ_ν^∞ to be the distribution of l^∞. Then ξ_ν^∞ is the only ν-stationary measure on 𝐏(E) in the sense that ν * ξ^∞_ν = ξ^∞_ν. Moreover, we have the following exponential mixing property. [Proximality implies exponential mixing] Let ν be any strongly irreducible and proximal distribution on End(E). There is a unique ν-stationary probability distribution ξ_ν^∞ on 𝐏(E). Moreover, there exist constants C, β such that for all probability distribution ξ on 𝐏(E)∖(ν) and for all Lipschitz function f: 𝐏(E) → with Lipschitz constant λ(f), we have: ∀ n∈, |∫_𝐏(E)fdξ_ν^∞-∫_𝐏(E)fdν^*n*ξ|≤λ(f) Cexp(-β n). Note that saying that ξ is supported on 𝐏(E)∖(ν) is not very restrictive because any measure that gives measure 0 to all hyperplanes would satisfy that condition. However, ξ_ν^∞ itself may give positive measure to some hyperplanes. For example if ν is the barycentre of the Haar measure on the group of isometries and a Dirac mass δ_π at a projection endomorphism π, then ξ^∞_ν is the average of the isometry-invariant measure and of the Dirac mass on the image of π. In particular ξ^∞_ν gives positive measure to any hyperplane that contains the image of π. Note that if ν is supported on GL(E), then (ν) = {0}. The existence and uniqueness of the stationary measure are well known in this case. This was in fact the first step towards the formalization of boundary theory by Furstenberg <cit.>. Even in this case, with the pivoting technique, we get regularity results for the stationary measure which are better that the ones obtained using ergodic theory. Let p ∈ (0, +∞) and let η be a probability measure on _≥ 0. 
We say that η is strongly L^p if M_p(η) := ∫_t=0^+∞ t^p-1η(t, +∞) dt < + ∞ and we say that η is weakly L^p if W_p(η) := sup_t ≥ 0t^pη(t, +∞) < +∞. Given E a vector space and k ≤(E), we denote by Gr_k(E) the set of k-dimensional subspaces of E. §.§ Alignment and pivotal extraction An important tool that we will use is the notion of alignment of matrices that we define as follows: [Coarse alignment of matrices] Let g,h be two matrices whose product is well defined. Let 0 < ≤ 1, we say that g is -coarsely aligned to h and we write gÅ^ h if we have: g h≥gh. An important observation is that (<ref>) (together with the sub-multiplicativity of the norm on ⋀^2 E) implies that (g h) ≥(g) + (h) - 2 |log()| (see Lemma <ref>). Using the pivoting technique, we will prove theorem <ref> below. To give a precise statement we need to introduce some notations. Let Γ = GL(E) or Γ = End(E). We will write Γ for the semi-group of words with letters in Γ the set of all tuples _l∈Γ^l, (where Γ^l is identified with Γ^{0, …, l-1} and endowed with the product σ-algebra for all l ∈) that we endow with the concatenation product [ ⊙ : Γ×Γ ⟶ Γ; ((γ_0, …,γ_k-1),(γ'_0,…,γ'_l-1)) ∈Γ^k ×Γ^l ⟼ (γ_0, …,γ_k-1,γ'_0,…,γ'_l-1)∈Γ^k+l. ] We also define the length functor: L : Γ⟶ ; (γ_0, …,γ_k-1) ⟼ k, and the product functor: Π : Γ⟶Γ ; (γ_0, …,γ_k-1) ⟼γ_0⋯γ_k-1. Moreover, for all 0 ≤ k < l, we define χ_k^l : Γ^l →Γ to be the k-th coordinate projection. Let I be a countable set, let (ζ_i)_i∈ I be a family of probability distributions on _≥ 0. Let η be a probability distribution on _≥ 0. We say that η dominates the family (ζ_i)_i∈ I if there exists a constant C such that ζ_i(t,+∞)≤ Cη(t/C-C,+∞) for all t ∈_≥ 0 and all i∈ I. Let (η_i)_i∈ I be a family of probability distribution on _≥ 0. We say that (η_i) has a bounded exponential moment if there exist constants C, β> 0 such that η_i(t, +∞) ≤ Cexp(-β t) for all t ∈ and all i ∈ I. Note that saying that a family (η_i)_i∈ I has a bounded exponential moment is not the same as saying that each η_i has a finite exponential moment because the exponent β and the constant C may depend on the index i ∈ I. We say that a family of random variables has a bounded exponential moment if the family of their distributions have. Given A a measurable event a measurable subset of a measurable space X, we write 1_A for the indicator function of A, it is the measurable function that takes value 1 on A and value 0 on X ∖ A. [Pivotal extraction] Let E be a Euclidean vector space. Let Γ∈{End(E),GL(E)} and let N be a continuous map defined on Γ. Let ν be a strongly irreducible and proximal probability distribution over Γ. Let ρ < 1 and let K ∈. There exist 0< ≤ 1 and three probability distributions (κ̃_0,κ̃_1,κ̃_2) supported on Γ that satisfy conditions (<ref>) to (<ref>). For all i∈{0,1,2}, we write κ_i := Π_*κ̃_i. * We have κ̃_0⊙(κ̃_1⊙κ̃_2)^⊙ = ν^⊗, we say that κ̃_0⊗(κ̃_1⊗κ̃_2)^⊗ is an extraction of ν^⊗. * The push-forward measures L_*κ̃_0 and L_*κ̃_2 have a bounded exponential moment and L_*κ̃_1 = δ_m is the Dirac mass at a positive integer denoted by m. * The measure κ̃_1 has compact support in Γ̃ and κ_1{γ∈Γ | (γ)≥ K |log()| + Klog(2)} = 1. * Given (g_n)_n∈∼κ_0⊗(κ_1⊗κ_2)^⊗, and 0≤ i < j < k ∈, we have g_i⋯ g_j-1Å^/4 g_j⋯ g_k-1 almost surely. * For all g ∈Γ, we have κ_1{γ∈Γ | g Å^γ}≥ 1-ρ and κ_1 {γ∈Γ | γÅ^ g}≥ 1-ρ. * Let i∈{0,2} and let k < l be integers such that L_*κ̃_i{l} > 0. 
Let: ζ_i,k,l := N_*(χ_k^l)_*(1_L=l)κ̃_i/L_*κ̃_i{l} be the push-forward by N of the conditional distribution of the k-th marginal of κ̃_i relatively to the event L(g̃) = l. Then the family (ζ_i,k,l) is dominated by the push-forward measure N_*ν. Only points (<ref>) to (<ref>) are used in the proofs of Theorems <ref> and <ref> and point (<ref>) is more technical and is only used in the proof of Theorem <ref>. Note that if we moreover assume that N is sub-additive, then points (<ref>) and (<ref>) imply that for i ∈{0,2} the distribution N_*κ_i is virtually dominated by N_*ν, in the sense that there exist constants C, β > 0 such that N_* κ_i(t,+∞) ≤∑_k = 1^+∞Cexp(-β k)N_*ν(t/(C k)-C,+∞) for all t ∈_≥ 0. This is a consequence of Lemmas <ref> and Lemma <ref>. Then by Lemma <ref>, it means that if N_*ν has finite p-th moment, then N_*κ_i also has. Note also that if N_*ν has a finite exponential moment, then N_*κ_i also has for all i ∈{0,1,2}. However, this is not a consequence of (<ref>) but a consequence of (<ref>) and of Lemma <ref>. §.§ Background The study or products of random matrices bloomed with the eponym article <cit.> where Furstenberg and Kesten construct an escape speed for the logarithm of the norm using the sub-additivity. This proof was generalized by Kingman's sub-additive ergodic Theorem <cit.>. This article followed the works of Bellman <cit.> who showed the almost sure convergence of the rescaled logarithms of coefficients as well as a central limit theorem for one specific example. In <cit.> Furstenberg and Kesten show that we have a law of large numbers for the norm under a strong L^1 moment condition for log·. For matrices that have positive entries and under an L^∞ moment condition, they show that moreover, we have a law of large numbers for the coefficients (entries) and under an additional L^2+δ moment assumption, they show that we have a central limit Theorem. These works on matrices inspired the theory of measurable boundary theory for random walks on groups <cit.>. In <cit.>, Bougerol and Lacroix give an overview of the field of study with applications to quantum physics. In <cit.>, Guivarc'h and Raugi show a qualitative version of Theorem <ref>: in the case when ν is proximal and strongly irreducible, the two top Lyapunov exponents are distinct. In <cit.> the same authors show that we have almost sure convergence of the limit flag for totally strongly irreducible distributions. In <cit.> Goldsheid and Margulis show that the distribution ν is proximal and totally strongly irreducible when the support of ν generates a Zariski-dense sub-group of SL(E). In <cit.> Yves Benoist and Jean-François Quint give an extensive state of the art overview of the field of study with an emphasis on the algebraic properties of semi-groups. Later, in <cit.> Xiao, Grama and Liu use <cit.> to show that coefficients satisfy a law of large numbers under some technical L^2 moment assumption. We can also mention <cit.> and <cit.> that give other probabilistic estimates for the distribution of the coefficients. The strong law of large numbers and central limit-theorem for the spectral radius were proven by Aoun and Sert in <cit.> and in <cit.> under an L^2 moment assumption. The importance of alignment of matrices was first noted in <cit.> along with the importance of Schottky sets. 
Those notions were then used by Aoun in <cit.> where he uses it to show that independent draws of an irreducible random walk that has finite exponential moment generate a free group outside of an exponentially rare event (note that the pivoting technique allows us to drop the finite exponential moment assumption). In <cit.> and <cit.>, Cuny, Dedecker, Jan and Merlevède give KMT estimates for the behaviour of (logγ_n)_n∈ under L^p moment assumptions for p > 2. The main difference between these previous works and this paper is that the measure ν has to be supported on the General Linear group GL(E) for the above methods to work. Indeed, they rely of the existence of the stationary measure ξ_ν^∞ on 𝐏(E), which is a consequence of the fact that GL(E) acts continuously on 𝐏(E), which is compact. Some work has been done to study non-invertible matrices in the specific case of matrices that have real positive coefficients. In <cit.>, Furstenberg and Kesten show limit laws for the coefficients under an L^∞ moment assumption, in <cit.> and <cit.> Mukherjea, Kesten and Spitzer show some limit theorems for matrices with non-negative entries that are later improved by Hennion in <cit.> and more recently improved by Cuny, Dedecker and Merlevède in <cit.>. In <cit.>, Le Page shows the exponential mixing property by exhibiting a spectral gap for the action of ν on the projective space under some moments assumptions on ν. The large deviations inequalities were already known for the norm in the specific case of distributions having finite exponential moment by the works of Sert <cit.>. §.§ Method used To prove the results, we use Markovian extractions. The idea is to adapt the following "toy model" construction to the case of matrices. Let G=⟨ a,b,c|a^2=b^2=c^2=1_Γ⟩ be the free right angle Coxeter group with 3 generators. One can see the elements of G as reduced words in {a,b,c}, finite sequences of letters of type (x_1, …, x_n) without double letters in the sense that x_i ≠ x_i+1 for all 1 ≤ i < n. We write Σ := {a,b,c}^() for the set of words in the alphabet {a,b,c}. We write 1_Σ for the empty word, which is the identity element of Σ. We write ⊙ for the concatenation product on Σ and Π: Σ→ G the word reduction map which is a monoid morphism. We consider the simple random walk on the 3-tree, seen as the Cayley graph of G. Draw a random independent uniformly distributed sequence of letters (l_n)_n∈∈{a,b,c}^. Then for every n∈, write g_n := l_0 ⋯ l_n-1∈ G for the position of the random walk at step n and g̃_n := (l_0, …, l_n-1) the word encoding the trajectory of the random walk up to step n. Then we know that (g_n) almost surely escapes to a point in ∂ G, the set of infinite simple words. To prove it, we can show, using Markov's inequality, that (g_n=1_G) ≤(8/9)^n/2. Indeed, given n ∈, if |g_n|≥ 1, then |g_n+1| = |g_n| + 1 with probability 2/3 and |g_n+1| = |g_n| - 1 with probability 1/3 and if |g_n| = 0, then |g_n+1| = |g_n| + 1 with probability 1. It implies that: ∀ n∈, (√(2)^-|g_n+1|) ≤2√(2)/3(√(2)^-|g_n|). Hence ∀ n∈, (√(2)^-|g_n|) ≤(8/9)^n/2. Therefore (g_n) visits 1_G only finitely many times. After that it gets trapped in a branch (the set of simple words starting with a given letter x_1 ∈{a,b,c}). Then using the same argument, (g_n) visits the first node of this branch only finitely many times and then escapes along the branch starting with x_1x_2 for some x_2 ≠ x_1 and by induction, one can show that (g_n) escapes along a branch (x_1, x_2, …) (an infinite reduced word). 
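The toy model above is easy to simulate. A short sketch (assuming Python; the horizon n and the seed are arbitrary) draws uniform letters, maintains the reduced word for g_n using the relations a^2 = b^2 = c^2 = 1_Γ, and reads off the escape rate, the number of visits to 1_G and the branch along which the walk escapes.

import random

random.seed(0)
n = 10000
letters = [random.choice("abc") for _ in range(n)]

# Reduced word for g_t = l_0 ... l_{t-1}: since a^2 = b^2 = c^2 = 1, appending a letter
# equal to the current last letter cancels it, otherwise it extends the reduced word.
word, visits_to_identity, lengths = [], 0, []
for l in letters:
    if word and word[-1] == l:
        word.pop()
    else:
        word.append(l)
    lengths.append(len(word))
    if not word:
        visits_to_identity += 1

print("|g_n|/n          ~", lengths[-1] / n)     # escape rate, close to 1/3
print("visits to 1_G    :", visits_to_identity)  # almost surely finite
print("escape branch    :", "".join(word[:5]))   # x_1 x_2 ... (already stabilised at this horizon,
                                                 # with high probability)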
By symmetry, one can show that for all k > 1, the distribution of the letter x_k knowing x_1, …, x_k-1 is the uniform distribution on {a,b,c}∖{x_k-1}. For all k ≥ 1, we define the k-th pivotal time of the sequence (l_n) as t_k := min{t ∈ | ∀ j ≥ t, |g_j| ≥ k}. For example t_0 = 0 and t_1 is the first time after the last visit in 1_G. Then for k ≥ 2, the time t_k follows the time of last visit in the closed ball of radius k-1. An interesting observation is that for all k ≥ 1, we have x_k = l_t_k - 1 = l_t_k-1 l_t_k-1+1⋯ l_t_k-1. Then instead of drawing the sequence (l_n)_n∈ of letters, we can draw the limit (x_n)_n∈ first and then the letters (l_n)_n∈ as follows. Write X = {a,b,c,s}, (s like "start") and endow X with a transition kernel p such that p(i,j) = 1/2 for all i ≠ j ∈{a,b,c} and p(s,i) = 1/3 for i ∈{a,b,c}. s[rd]^1/3@/^1pc/[rrd]^1/3@/_1pc/[rdd]_1/3 (X,p)= a@<->[r]^1/2@<->[d]_1/2 b@/^1pc/@<->[ld]^1/2 c Let x_0 = s and draw a Markov chain (x_n)_n∈ in (X,p). It means that we have: ∀ n∈,∀ l∈ X, (x_n+1=l | x_0,…,x_n) = p(x_n,l). Then the sequence (x_k)_k≥ 1 has the same distribution as the sequence l_t_k-1 defined above. Moreover, the distribution of the word (l_t_k, …, l_t_k+1-1) only depends on x_k and x_k+1 and not on the time k ≥ 1. Write ν̃_a,b for the distribution of (l_t_k, …, l_t_k+1-1) knowing that l_t_k-1 = a and l_t_k+1-1 = b and write ν̃_s,a for the distribution of the word (l_0, …, l_t_1-1) knowing that l_t_1-1 = a. Both are probability distributions on Σ. In the same fashion, we define the whole decoration: s [rd]^ν̃_s,a@/^1pc/[rrd]^ν̃_s,b@/_1pc/[rdd]_ν̃_s,c (X,p,ν̃)= a @<->[r]^ν̃_a,b_ν̃_b,a@<->[d]^ν̃_a,c_ν̃_c,a b @/^2pc/@<->[ld]^ν̃_b,c_ν̃_c,b c Then instead of drawing the (l_n) 's uniformly and independently, one can simply draw a random sequence of words (w̃_k) with distribution ⊗ν̃_x_k,x_k+1 relatively to (x_n). Then for every k∈, the random word w̃_k has the distribution of (l_t_k,…,l_t_k+1-1) and the infinite word W = _k=0^∞w̃_k∈{a,b,c}^ has the distribution of the infinite word L = (l_0,l_1,l_2,…). Note also that for all k∈, one has Πw̃_k = x_k+1 and w̃_k has no prefix whose product is x_k. Now, we consider a filtration (ℱ_k)_k≥ 0 such that x_k and w̃_k-1 are ℱ_k-measurable for all k ≥ 1, the distribution of x_k+1 knowing ℱ_k is p(x_k,·) and the distribution of w̃_k knowing ℱ_k and x_k+1 is ν̃_x_k,x_k+1. Now the fact that a time t is pivotal or not is decided as soon as w̃_0 ⊙⋯⊙w̃_k-1 has length at least t. In particular the event (t is a pivotal time) is ℱ_t-measurable. However, given (𝒞_n)_n∈ the cylinder filtration associated to the random sequence (l_n)_n∈, the event (t is a pivotal time) is never 𝒞_n-measurable whatever the choice of n,t∈. This construction gives a proof of the exponential large deviations inequalities for the random walk (g_n). This is not the simplest proof but it shows how and why we want to use the setting of Markovian extractions. ∃σ>0, ∀>0, ∃ C,β > 0, ∀ n∈,(||g_n| - nσ| ≥ n)≤ Cexp(-β n). Let (l_0, l_1, l_2, …) = w̃_0 ⊙w̃_1 ⊙w̃_2 ⊙⋯ be as above. We associate to every integer n ∈ a pair of indices k ∈, r ∈{0, …, |w̃_k|-1} such that n = |w̃_0| + … + |w̃_k-1| + r. This means that l_n-1 is the r-th letter of w̃_k and then by triangular inequality, we have k - r ≤ |g_n| ≤ k + r because k = |x_1⋯ x_k| and r ≥ |l_n-r⋯ l_n-1|. Then note that the lengths (|w̃_k|)_k≥ 1 are independent, identically distributed random variables that are independent of |w̃_0|. Moreover, they all have a finite exponential moment by (<ref>). 
By Lemma <ref>, r also has an exponential moment which is uniformly bounded in n. Let σ := 1/𝔼(|w̃_1|) = 1/3 and let >0. Then by the classical large deviations inequalities (see Lemma <ref> and Lemma <ref> (<ref>)), we have: (|k - nσ |≥ n /2)≤ Cexp(-β' n) for some C,β'>0 and for all n. Now note that ||g_n| - n σ | ≤ |k - n σ| + r so we have (<ref>) by Lemma <ref> (<ref>). §.§ About the pivoting technique In the second part of this article we mainly use the tools introduced in <cit.>, some of them having been introduced or used in former works like <cit.> where Adrien Boulanger, Pierre Mathieu, Cagri Sert and Alessandro Sisto state large deviations inequalities from below for random walks in discrete hyperbolic groups or <cit.> where Mathieu and Sisto show some bi-lateral large deviations inequalities in the context of distributions that have a finite exponential moment. In <cit.> Sébastien Gouëzel uses the pivoting technique in the setting of hyperbolic groups to get large deviations estimates bellow the escape speed and to show the continuity of the escape speed. For us, the most interesting part of Gouëzel's work is the "toy model" described in section 2. In <cit.> Inhyeok Choi applies the pivoting technique to show results that are analogous to the ones of Gouëzel for the mapping class group of an hyperbolic surface. In <cit.>, Chawla, Forghani, Frisch and Tiozzo use another view of the pivoting technique and the results of <cit.> to show that the Poisson boundary of random walk with finite entropy on a group that has an acylindrical action on an hyperbolic space is in fact the Gromov Boundary of said space. I believe that similar method can be used to describe the Poisson boundary of a totally strongly irreducible random walk that has finite entropy, in the sense of Conjecture <ref>. §.§ Structure of this paper In Section <ref> of this article, we state some local-to-global properties for alignment of matrices. In Section <ref>, we state some preliminary results about random products of non-invertible matrices. In section <ref> Theorem <ref>, we state an abstract version of the construction of the pivoting extraction using the pivoting technique as in <cit.> and prove Theorem <ref> as a corollary of that statement. Then in Section <ref> we give complete proofs of Theorems <ref>, <ref> and <ref> using the pivoting technique and Theorem <ref>. Section <ref> is an appendix where we prove classical results for real valued random variables, we state these lemmas in a convenient way to be able to use them through this paper. § LOCAL-TO-GLOBAL PROPERTIES FOR THE ALIGNMENT OF MATRICES In this section, we describe the geometry of the monoid Γ := End(E) for E a Euclidean space. We can think of = but all the proofs work the same when = or when is a ultra-metric field. Given E a -vector space, we will identify E with Hom(, E). Note that up to choosing a canonical basis for all Euclidean spaces, linear maps between Euclidean spaces can be seen as matrices. Moreover, vectors and linear form can also be seen as matrices. We want to translate ideas of hyperbolic geometry into the language of products of endomorphisms. The idea is to exhibit a local-to global property in the same fashion as <cit.>. That way we can adapt the arguments of <cit.> to the setting of products of random matrices. §.§ Alignment and squeezing coefficients We remind the definition of the singular gap and of the distance in the projective space. Note that given x,y two vectors, we have the characterization x ∧ y = min_a∈x - y ay. 
Therefore, given h∈Hom(E,F), we have h∧ h = max_x,ymin_a∈h(x - y a)h(y)/xy. [Singular gap] Let E,F be Euclidean vector spaces and h∈Hom(E,F)∖{0}. We define the first (logarithmic) singular gap, or squeeze coefficient of h as: (h) := log(h^2/ h ∧ h) ∈ [0, + ∞]. [Distance between projective classes] Let E be a Euclidean space. We denote by 𝐏(E) the projective space of E the set of lines in E, endowed with the distance map which is characterized by: ∀ x, y ∈ E ∖{0}, ([x],[y]) = x∧ y/xy = min_a ∈x - y a/x. [Lipschitz property for the norm cocycle] Let E and F be Euclidean spaces and let f ∈Hom(E,F)∖{0}. Let x, y ∈ E ∖{0}, we have: |fx/fx- fy/fy| ≤([x],[y]). Let f ∈Hom(E,F) and let x,y ∈ E be unit f = x = y =1. We show that fx≤fy + ([x],[y]) and conclude by homogeneity and by symmetry. Let c ∈ be such that x - y c = min_a ∈x - y a. Then by Definition <ref>, we have ([x],[y]) = x - y c. Moreover |c| ≤ 1 by property of the orthogonal projection. By triangular inequality and by definition of the norm, we have: fx≤fyc + f(x-yc)≤fy|c| + fx - y c≤fy + ([x],[y]). We remind that given g and h two matrices such that the product gh is well defined and given 0< ≤ 1, we write g Å^ h when gh≥gh. We also remind that given h ∈Hom(E, F), we write h^*∈Hom(F^*, E^*) for the map f ↦ fh, we call it the transpose of h. Let E,F be Euclidean spaces and h∈Hom(E,F)∖{0}. Let 0 < ≤ 1. We define V^(h) := {x∈ E;h x≥hx} and U^(h):=h(V^(h)) and W^(h) := U^(h^*). Note that the families (V^(h))_0 < ≤ 1, (U^(h))_0 < ≤ 1 and (W^(h))_0 < ≤ 1, are decreasing for the inclusion order. Note also that for h an endomorphism of rank one, and for all 0 < ≤ 1, the cone U^(h) is the image of h so it has diameter 0 in the projective space. The idea to have in mind is that given h a matrix that has a large singular gap U^(h) will have a small diameter in the following sense. Let E,F be Euclidean spaces, let h∈Hom(E,F)∖{0} and let 0 < ≤ 1. Let u ∈ U^1(h)∖{0} and let u' ∈ U^(h)∖{0}. Then we have: ([u], [u']) ≤exp(-(h))/. Let v ∈ V^1(h) and let v'∈ V^(h) be such that u = hv and u' = hv' Then we have u∧ u' = ⋀^2 h(v ∧ v') so: u∧ u'≤^2 hv ∧ v'. Now saying that v ∈ V^1(h) and v'∈ V^(h), means that u = hv and u'≥hv'. Hence: uu'≥h^2vv'. Then by taking the quotient, we have: u∧ u'/uu'≤⋀^2 h/h^2v∧ v'/vv'≤⋀^2 h/h^2. By definition, the term on the left is ([u], [u']) and the term on the right is exp(-(h))/. Lemma <ref> tells us that the projective image of U^(h) has diameter at most as long as (h) ≥ 2|log()| + log(2). With the toy model analogy, the condition (h) ≥ 2|log()| + log(2) will play the role of the condition for word to be non-trivial. We will extensively use the following simple remarks. Let g and h be non-zero matrices such that the product gh is well defined and let 0 < ≤ 1. We have gÅ^ h if and only if h^*Å^ g^*. Moreover (h^*)=(h). This is a consequence of three well known facts. One is that we have h = h^* for all homomorphism h. One way of seeing that is to notice that the operator norm admits the following (obviously symmetric) characterization: ∀ E, ∀ F, ∀ h∈Hom(E,F), h = max_f ∈ F^* ∖{0} v ∈ E∖{0}|f h v|/fv. The second fact is that (gh)^* = h^* g^*. It implies that for all non trivial g,h, we have gh/gh = h^* g^*/h^*g^*. The third fact is that h^*∧ h^* = (h∧ h)^*. It implies that (h^*)=(h). Let g and h be non-zero matrices such that the product gh is well defined and let 0 < ≤ 1. If there exist u ∈ U^1(h)∖{0} and w ∈ W^1(g)∖{0} such that |wu|/uw≥, then gÅ^ h. 
If gÅ^ h, then there exist u ∈ U^(h)∖{0} and w ∈ W^(g)∖{0} such that |wu|/wu≥ and gu/gu≥ and wh/wh≥. Let u ∈ U^1(h)∖{0} and w ∈ W^1(g)∖{0}. Assume that |wu|/uw≥. Let f ∈ V^1(g^*) and let v ∈ V^1(h) be such that w = fg and u = hv. We have |fghv|/fghv≥, therefore |fghv|/fghv≥, so gh/gh≥, which means that g Å^ h. This proves the first implication of Lemma <ref>. Now assume that g Å^ h. Let f ∈ E^*∖{0} and let v ∈ E∖{0} be such that |fghv| = fghv. Then |fghv|/fghv≥. Let u := hv and let w := fg. Then we have: ≤|fghv|/fghv = |wu|/wufg/fghv/hv = |fgu|/fguhv/hv= fg/fg|whv|/whv. All factors are in [0,1] so |wu|/wu≥ and gu/gu≥fgu/fgu≥ and wh/wh≥|whv|/whv≥. Moreover hv/hv≥ so v ∈ V^(h) and therefore u ∈ U^(h). We also have fg/fg≥ so w ∈ W^(g). This proves the second implication of Lemma <ref>. Let g and h be non-zero matrices such that the product gh is well defined and let 0 < ≤ 1. Assume that gÅ^ h. Then one has: (gh) ≥(g) + (h) - 2|log()|. Moreover, for every non-zero vectors u∈ U^1(g)∖{0}, and u'∈ U^1(gh)∖{0}, we have: ([u],[u']) ≤1/exp(-(g)). Note that the norm of the ∧ product is sub-multiplicative because it is a norm so: gh ∧ gh≤g ∧ gh ∧ h. So if we do 2log(<ref>) -log(<ref>) we find (<ref>). Now to prove (<ref>), we only need to show that U^1(gh) ⊂ U^(g) and use (<ref>) from Lemma <ref>. Indeed, consider v ∈ V^1(gh), then one has ghv≥ghv≥gh v which means that hv ∈ V^(g), therefore ghv ∈ U^(g) and we can apply Lemma <ref>. Let f, g and h be non-zero matrices such that the product fgh is well defined and let 0 < ≤ 1. Assume that f Å^ g Å^ h and that (g) ≥ 2|log()| + 2log(2). Then (fgh)≥(g) - 4|log()|-2log(2). Let u ∈ U^(g)∖{0}, let u'∈ U^1(gh)∖{0} and let u”∈ U^1(g)∖{0} be non-trivial vectors. By Lemma <ref>, we have ([u],[u”])≤/4. By Lemma <ref>, we have ([u”],[u'])≤/4. So by triangular inequality, we have ([u],[u'])≤/2. Let v ∈ V^1(fg)∖{0}. We have fgv = fgv≥fgv. Hence v∈ V^(g)∖{0} therefore gv ∈ U^(g)∖{0}. Let v'∈ V^1(gh)∖{0}. Assume that u = gv and that u' = gh v'. Then by Lemma <ref>, we have fu'/fu'≥fu/fu - ([u],[u']) ≥/2. Therefore, we have fghv'≥fghv'/2 so fgh≥/2fgh. Moreover, we have g Å^ h, therefore fgh≥^2/2fgh. Now using the formula (γ) = log(γ^2/γ∧γ), we get: (fgh) ≥(f)+(g)+(h) - 4 |log()| - 2log(2). [Heredity of the alignment] Let f,g,h be non-zero matrices such that the product fgh is well defined and let 0 < ≤ 1. Assume that (g) ≥ 2 |log()| + 3log(2) and that f Å^ g Å^/2 h. Then f Å^/2 gh. Let u∈ U^1(gh)∖{0}, let u' ∈ U^(g)∖{0} and let u”∈ U^1(g)∖{0}. By Lemma <ref>, we have ([u'],[u”])≤/8 and by Lemma <ref>, we have ([u], [u”])≤/4. Then by triangular inequality, we have ([u],[u'])≤/2. Now let v ∈ V^1(fg)∖{0}. Then we have fgv≥fgv so gv ∈ U^(g)∖{0} and by the above argument, we have ([u],[gv])≤/2. Then by Lemma <ref>, we have fu/fu≥fgv/fgv- /2. Moreover fgv≤fgv≤fgv /, therefore fu/fu≥/2 and u ∈ U^1(gh) hence f Å^/2 gh. [The ultra-metric case is easier] Let 0 < ≤ 1, let be a ultra-metric locally compact field and let f,g,h be matrices with entries in such that th product f,g,h is well defined. If we assume that f Å^ g Å^ h, and that (g) > 2|log()|, then f Å^ gh. Therefore, in the ultra-metric case, we get rid of all the + k log(2) constants. [Contraction property for aligned chains] Let E be a Euclidean vector space, let 0 < ≤ 1 let n∈. Let g_0, …, g_n be non-zero matrices such that the product g_0⋯ g_n is well defined. Assume that for all k ∈{0, …, n-1}, we have (g_k) ≥ 2|log()| + 3log(2) and g_kÅ^ g_k+1. 
Then one has: g_0⋯ g_n ≥(/2)^n∏_j = 0^n g_j (g_0⋯ g_n) ≥∑_j = 0^n (g_j) - 2n(|log()| + log(2)). Moreover, for every non-zero vectors u∈ U^1(g_0)∖{0}, and u'∈ U^1(g_0⋯ g_n)∖{0}, we have: ([u],[u'])≤2/exp(-(g_0)). The lemma is trivial when n = 0. Assume n ≥ 1. We claim that for all 0 ≤ k < n, we have g_k Å^/2 g_k+1⋯ g_n. For k = n-1, we assumed g_n-1Å^ g_n so g_n-1Å^/2 g_n. Let 0 < k < n and assume that g_k Å^/2 g_k+1⋯ g_n. Then by Lemma <ref> with f = g_k-1, g := g_k and h := g_k+1⋯ g_n, we have g_k-1Å^/2 g_k⋯ g_n. Hence, we have g_0 Å^/2 g_1⋯ g_n so by (<ref>) in Lemma <ref>, we have (<ref>). For all 0< k <n, we have g_k⋯ g_n≥/2g_kg_k+1⋯ g_n by definition of Å^/2. Then by induction on k, we have g_k⋯ g_n≥(/2)^n-kg_kg_k+1⋯g_n, for k =0, we have (<ref>). Now by (<ref>), we have (g_k ⋯ g_n)≥(g_k) + (g_k+1⋯ g_n) - 2(|log()| + log(2)) for all 0 ≤ k < n. Then by induction, we have (g_k ⋯ g_n)≥∑_j = k^n (g_j)-2(n-k)(|log()| + log(2)) for all 0≤ k <n, therefore we have (<ref>). [Alignment of partial products] Let g_0,…,g_n be non-zero matrices such that the product g_0⋯ g_n is well defined. Let 0 < ≤ 1. Assume that for every k ∈{1,…,n-1} we have (g_i) ≥ 2|log()| + 4log(2). Assume also that g_0Å^ g_1 Å^⋯Å^ g_n for all k ∈{0, …, n-1}, we have g_k Å^ g_k+1. Then for all k ∈{1, …, n}, we have (g_0 ⋯ g_k-1) Å^/2 (g_k⋯ g_n). Let k ∈{2, …, n-1}. Let u ∈ U^1(g_k⋯ g_n)∖{0}, let u' ∈ U^(g_k)∖{0}, let w ∈ W^1(g_0⋯ g_k-1)∖{0} and let w'∈ W^(g_k-1)∖{0}. By Lemma <ref> applied to the sequence g_k, …, g_n, and by Lemma <ref> applied to g_k and by triangular inequality, we have ([u],[u']) ≤/8 + /16≤/4. By Lemma <ref> applied to the sequence g_k-1^*, …, g_0^* and by the above argument, we have ([w],[w']) ≤/4. Now since g_k-1Å^ g_k and by Lemma <ref>, there exist w'∈ W^(g_k-1)∖{0} and u' ∈ U^(g_k)∖{0} such that |w'u'|/w'u'≥. Assume that |w'u'|/w'u'≥. Then by Lemma <ref>, we have |w'u|/w'u≥3/4 and by duality, we have |wu|/wu≥/2, hence (g_0 ⋯ g_k-1) Å^/2 (g_k⋯ g_n). Let 0 < ≤ 1, let E be a Euclidean space and let (γ_n)_n∈ be a sequence in End(E). Assume that for all n ∈, one has γ_n Å^γ_n+1 and (γ_n+1) ≥ 2|log()|+ 3log(2). Then there is a limit line l^∞∈𝐏(E) such that: ∀ n∈, ∀ u_n∈ U^1(γ_0⋯γ_n-1)∖{0}, ([u_n],l^∞) ≤2/exp(-(γ_0⋯γ_n-1)). Let m ≤ n be integers and let u_n∈ U^1(γ_0⋯γ_n-1) ∖{0} and u_m∈ U^1(γ_0⋯γ_m-1) ∖{0}. By Lemma <ref>, we have (γ_0⋯γ_n-1) Å^/2 (γ_n ⋯γ_m-1), then by Lemma <ref>, we have: ([u_n],[u_m]) ≤2/exp(-(γ_0⋯γ_n-1)). By Lemma <ref>, we have (γ_1⋯γ_n-1) ≥ (n-1) log(2)+ 2|log()|+ 3log(2) and by Lemma <ref>, we have (γ_1⋯γ_n-1) ≥ n log(2). So for any sequence (u_n)_n∈∈∏_n = 0^+ ∞ (U^1(γ_0⋯γ_n-1) ∖{0}), the sequence ([u_n]) is a Cauchy sequence in 𝐏(E), therefore it has a limit. Moreover, the diameter of 𝐏U^1(γ_0⋯γ_n-1) goes to 0 by the above argument so the limit l^∞ does not depend on the choice of the u_n's. Now we take the limit of (<ref>) for m→ + ∞ and we get (<ref>). Let 0< ≤ 1 and let n∈. Let h and g_0, …, g_n be matrices such that the product h g_0 ⋯ g_n is well defined. Assume that for all i ∈{0, …, n}, we have (g_i) ≥ 2|log()| + 4 log(2). Assume also that we have h Å^ g_0 and that g_iÅ^/2 g_i+1 for all i ∈{0, …, n-1}. Then we have h Å^/2 (g_0⋯ g_n). By Lemma <ref> (<ref>), we have (g_0⋯ g_n) ≥(g_0) + ∑_j =1^n ( (g_j) - 2|log()| - 2log(2)) ≥(g_0). Let u ∈ U^1(g_0)∖{0} and let u' ∈ U^1(g_0⋯ g_n)∖{0}. By (<ref>) in Lemma <ref>, we have ([u],[u']) ≤4/16. Let v ∈ V^(h) ∩ U^ (g_0)∖{0}, which is not empty by Lemma <ref>. Then by Lemma <ref>, we have ([v], [u]) ≤/16. 
Hence ([u'],[v]) ≤5/16 so by Lemma <ref>, we have hu'/hu'≥11/16≥/2. Hence, we have h Å^/2 (g_0⋯ g_n). Now we prove a tricky lemma that is essential for the pivoting technique. Let 0< ≤ 1 and let n∈_1. Let γ_-1,γ_0, γ_1, …, γ_2n be non-zero matrices and assume that the product γ_-1⋯γ_2n is well defined. Assume that for all i ∈{0,1,3,5, …, 2n-1}, we have (γ_i) ≥ 4|log()|+ 7log(2) and that for all 0 ≤ i < n, we have: (γ_0⋯γ_2i)Å^γ_2i + 1Å^γ_2i + 2 and that γ_-1Å^γ_0. Then γ_-1Å^/2(γ_0⋯γ_2n). Let i ∈{0, …, n}. By Lemma <ref> applied to f = γ_0⋯γ_2i, g = γ_2i+1 and h = γ_2i+2, we have (γ_0⋯γ_2i) Å^/2 (γ_2i +1γ_2i +2) and by (<ref>) in Lemma <ref>, we have (γ_2i +1γ_2i +2) ≥ 2|log()|+ 7log(2). We moreover claim that for all i ∈{1, …, n-1}, we have (γ_2i-1γ_2i) Å^/4 (γ_2i +1γ_2i +2). Let i ∈{2, …, n-1}, let w ∈ W^1(γ_2i-1γ_2i) ∖{0} and let w' ∈ W^/2(γ_0⋯γ_2i) ∖{0} and let w”∈ W^1(γ_0⋯γ_2i) ∖{0}. We have (γ_2i +1γ_2i +2) ≥ 2 |log()| + 7 log(2) so by Lemma <ref>, we have ([w'],[w”]) ≤/64. Moreover, γ_0⋯γ_2i-2Å^/2(γ_2i-1γ_2i) so by <ref>, we have ([w],[w”]) ≤/64. Then by triangular inequality, we have ([w],[w']) ≤/4 so by Lemma <ref>, we have: w γ_2i +1γ_2i +2/wγ_2i +1γ_2i +2≥w' γ_2i +1γ_2i +2/w'γ_2i +1γ_2i +2 -/4 Moreover, we have (γ_0⋯γ_2i) Å^/2 (γ_2i +1γ_2i +2) so there exists a linear form w' ∈ W^/2(γ_0⋯γ_2i)∖{0} such that w' γ_2i +1γ_2i +2/w'γ_2i +1γ_2i +2≥/2. Hence we have w γ_2i +1γ_2i +2/wγ_2i +1γ_2i +2≥/4, which proves the claim. Now we have γ_0 Å^/4 (γ_1γ_2) Å^/4⋯Å^/4 (γ_2i-nγ_2n). Let u ∈ U^1(γ_0⋯γ_2n)∖{0} and let u' ∈ U^(γ_0)∖{0} and let u”∈ U^1(γ_0)∖{0}. By Lemma <ref> applied to g_0 = γ_0 and g_i = γ_2i-1γ_2i for all i ∈{1, …, n} and ' = /4, we have ([u],[u”]) ≤/16. Moreover, by Lemma <ref>, we have ([u'],[u”]) ≤^3/128. Then by triangular inequality, we have ([u],[u']) ≤/2. Now we may assume that γ_-1 u'/γ_-1 u'≥ because γ_-1Å^γ_0. Then by Lemma <ref>, we have γ_-1 u/γ_-1 u≥ so γ_-1Å^/2(γ_0⋯γ_2n). §.§ Link between singular values and eigenvalues In this short section, we prove the following lemma. We will use the following notations. Let g be an endomorphism such that (g) > 0. We write E^+(g) for the eigenspace associated to the maximal eigenvalue of g, it is a line because (g) > 0. A basic fact is that g(E^+(g)) = E^+(g) and there is a g-stable supplementary E^-(g) such that g(E^-(g)) ⊂ E^-(g) and E = E^+(g) ⊕ E^-(g). Let E be a Euclidean space and let 0 < ≤ 1. Let g be an endomorphism such that (g) ≥ 2|log()|+ 4log(2) and g Å^ g. Then g is proximal and we have the following: ρ_1(g) ≥/2h (g) ≥(g) - 2|log()| - 2 log(2) ∀ u ∈ U^1(g), ([u], E^+(g)) ≤2/exp(-(g)) Consider (g_k)_k≥ 0 to be the sequence of copies of g. First, we apply Lemma <ref> and we get that (g^n) ≥ n ((g) - 2|log()| - 2 log(2)), then going to the limit n → +∞, we get that lim inf(g^n)/n≥(g) - 2|log()| - 2 log(2). Moreover, we know that this inferior limit is in fact an honest limit and it is (g), which proves (<ref>). The proof of (<ref>) goes the same but using (<ref>). Note that (<ref>) implies that (g) > 0 so E^+(g) is a line. To get (<ref>), we apply Corollary <ref>. We get a line l^∞ such that for any u_n ∈ U^1(g^n)∖{0}, we have [u_n] → l^∞. Moreover, we have ([u], l^∞) ≤2 exp(-(g))/. Now we only need to show that l^∞ = E^+(g). Let e ∈ E^+(g). Then we have ge = ρ_1(g) e so by (<ref>), e ∈ V^/2(g). Moreover, e is an eigenvector associated to a simple eigenvalue so e ∈ ge and as a consequence e ∈ U^/2(g). Now this reasoning holds for all positive power of g so we have e ∈ U^/2(g^n). 
Moreover, (g^n) → +∞ by(<ref>) so by Lemma <ref>, the projective diameter of U^/2(g^n) goes to zero so [u_n] → [e], so l^∞ = [e]. §.§ Finite description of the alignment In this section, on construct a finitely described alignment relation that allows us to use the tools described in the toy model of paragraph <ref>, even though we are not working on locally finite groups. Let Å be a measurable binary relation on a measurable set Γ an 𝒜_Γ⊗𝒜_Γ measurable subset of Γ×Γ. We say that Å is finitely described if there exist an integer M ∈_≥ 1 and two families of measurable subsets (L_i)_1≤ i ≤ M∈𝒜_Γ^M and (R_j)_1≤ j ≤ M∈𝒜_Γ^M such that: Γ=_i = 1^M L_i=_j=1^M R_j and a subset A⊂{1,…,M}^2 such that: Å = _(i,j)∈ A L_i× R_j. [Discrete descriptions of alignment relations] Let E be an Euclidean vector space and 0 < _1 < _2 ≤ 1. There exists a discrete binary relation Å on End(E) that satisfies the inclusions Å^_2⊂Å⊂Å^_1 for any given g,h ∈End(E), we have gÅ^_2h⇒ gÅ h⇒ gÅ^_1h. Let := _2-_1/4. Let N := ⌊1/⌋. Let k ∈ and let (u_1, … u_k)⊂ E∖{0} be such that: ∀ v ∈ E ∖{0}, ∃ i ∈{1, …, k}, ([v],[u_i])≤. Such a family exists because 𝐏(E) is compact. Now let (w_1, … w_k)⊂ E^*∖{0} be such that: ∀ f ∈ E^* ∖{0}, ∃ i ∈{1, …, k}, ([f],[w_i])≤. Such a family exists because E^* is isometric to E. Now let h ∈End(E) ∖{0} and let n ∈{1,…, N}, we define: ϕ_n(h) := {i ∈{1, …,k} | w_i ∈ W^n(h)} and ψ_n(h) := {i ∈{1, …,k} | u_i ∈ U^n(h)}. Now let: Å := {(g,h) ∈End(E) ∖{0} | ∃ n_1, n_2∈{1,…, N}, ∃ i ∈ϕ_n_1(g), ∃ j ∈ψ_n_2(h), |w_i u_j|/w_iu_j n_1n_2^2 ≥_1} ⊔((End(E) ∖{0})×{0}) ⊔({0}×(End(E) ∖{0})) ⊔({0}×{0}). First we claim that Å⊂Å^_1. Let g Å h. If g = 0 or h = 0, then we have g Å^_1 h trivially. Assume that g ≠ 0 and h ≠ 0. Let n_1, n_2∈{1,…, N}, let i ∈ϕ_n_1(g) and let j ∈ψ_n_2(h) be such that |w_i u_j|/w_iu_j n_1 n_2^2 ≥_1. Let f ∈ V^n_1 (g^*) be such that f g = w_i and let v ∈ V^n_2 (h) be such that hv = u_j. Such f, v exist because w_i ∈ W^n_1(g) and u_j ∈ U^n_2 (h), moreover, they are not trivial. Then, we have |f g h v|/fghv = |w_i u_j|/w_iu_j and fg/fg≥ n_1 and hv/hv≥ n_2. Hence, we have |f g h v|/fghv≥_1 so gh≥_1gh, which proves the claim. Now we claim that Å^_2⊂Å. Let g Å^_2 h. Assume that g ≠ 0 and h ≠ 0. Let f ∈ E^*∖{0} and v ∈ E ∖{0} be such that |fghv| = fghv≥_2 fghv. let n_1 := ⌊fg/fg⌋ and let n_2 := ⌊hv/hv⌋. Then we have n_1 ≥fg/fg- and n_2≥hv/hv-. Let i,j∈{1, …, k} be such that ([w_i],[fg])≤ and ([u_j],[hv]) ≤. Then by Lemma <ref>, we have |w_i u_j|/w_iu_j≥|fghv|/fghv - 2. Hence: |w_i u_j|/w_iu_j n_1 n_2 ^2 ≥(|fghv|/fghv - 2 ) (fg/fg-) (hv/hv-) Moreover all three factors are in [0,1] so if we develop, we get: |w_i u_j|/w_iu_j n_1 n_2 ^2 ≥(|fghv|/fghv) (fg/fg- ) (hv/hv - ) - 2 ≥(|fghv|/fghv) (fg/fg) (hv/hv) - 4 ≥|fghv|/fghv-4≥_2 - 4 = _1. Therefore, we have g Å h, which proves the claim. Now we claim that Å is discrete. This follows directly from the fact that given g and h two matrices, the condition g Å h is expressed in terms of (ϕ_1(g), …, ϕ_N(g)) and (ψ_1(h), …, ψ_N(h)), which take only finitely many values. Note that the same proof may be used to construct discrete alignment relations on Hom(E,F) ×Hom(H,E) for E, F and H, three given Euclidean spaces. § RANDOM PRODUCTS AND EXTRACTIONS §.§ Notations for extractions In the two following sections, we will denote by Γ an abstract measurable semi-group a second countable measurable space endowed with an associative and measurable composition map · :Γ×Γ→Γ. We will assume that Γ has an identity element that we denote by 1_Γ. 
The measurable semi-group Γ can be (, +), End(E) or a semi-group of words. The semi-group will always be endowed with the addition map. Let us recall the notations introduced in Paragraph <ref>. We write Γ for the semi-group of words with letters in Γ the set of all tuples _l∈Γ^l, (where Γ^l is identified with Γ^{0, …, l-1} and endowed with the product σ-algebra for all l ∈) that we endow with the concatenation product [ ⊙ : Γ×Γ ⟶ Γ; ((γ_0, …,γ_k-1),(γ'_0,…,γ'_l-1)) ∈Γ^k ×Γ^l ⟼ (γ_0, …,γ_k-1,γ'_0,…,γ'_l-1)∈Γ^k+l. ] We also define the length functor: L : Γ⟶ ; (γ_0, …,γ_k-1) ⟼ k, and the product functor: Π : Γ⟶Γ ; (γ_0, …,γ_k-1) ⟼γ_0⋯γ_k-1. Given (γ̃_n)_n∈∈Γ^, we write _n = 0^+∞γ̃_n ∈Γ^ for the left to right concatenation of all the γ̃_k's and we write ^∞ : Γ^→Γ^ ; (γ̃_n) ↦_n = 0^+∞γ̃_n. In other words, for all n ∈ and all 0 ≤ k < L(γ̃_n), and for m := L(γ̃_0⊙⋯⊙γ̃_n-1) + k, the m-th element, the projection on the m-indexed coordinate, of the sequence _n = 0^+∞γ̃_n is the k-th letter of γ̃_n, the projection on the k-indexed coordinate. Note that all the above defined maps ⊙, L, Π and ^∞ are measurable. [Grouping of factors] Let Γ be a semi-group. Let γ := (γ_n)_n∈∈Γ^ and let w := (w_n)_n∈∈^ be non-random sequences. For all n ∈, define w_n := w_0 + … + w_n-1. We denote by γ^w ∈Γ^ the sequence of w-groups of γ which we define by: ∀ n∈, γ^w_n := (γ_w_n +k)_0 ≤ k < w_n = (γ_w_n,…, γ_w_n+1-1). We denote by γ^w the sequence of w-products of γ defined as γ^w := Π∘γ^w ∈Γ^ γ^w_n = γ_w_n⋯γ_w_n+1-1 for all n∈. We denote by γ∈Γ^ the left to right product associated to γ, defined as γ_n := γ_0 ⋯γ_n-1 for all n∈ and we denote by γ^w∈Γ^ the left to-right product associated to γ^w defined as γ^w_n := γ^w_0 ⋯γ^w_n-1 = γ_w_n for all n ∈. Let (g̃_n) ∈Γ^ be a sequence which is not stationary to the trivial word, note that the map (^∞, L^⊗), that sends (g̃_n) to the pair (_n = 0^+∞g̃_n,(L(g̃_n))_n∈)∈Γ^×^, is one-to-one. Indeed, to get back to the sequence g̃, write γ := _n = 0^+∞g̃_n and w_n := (L(g̃_n))_n∈, then g̃ = γ^w. Given μ̃ a probability distribution on Γ^, we will write (γ^w_n) ∼μ̃ to introduce a random sequence (γ_n)∈Γ^ and a random sequence (w_n)∈^, defined on the same probability space and such that (γ^w_n) ∼μ̃. Given η̃ and κ̃ two probability measures on Γ, we write η̃⊙κ̃:= ⊙_*(η̃⊗κ̃) for the convolution of η̃ and κ̃ the push-forward by ⊙ of the product measure η̃⊗κ̃. Then η̃⊙κ̃ is the distribution of the concatenation of two independent random words of respective distribution η̃ and κ̃. Given (η̃_n)_n∈ a sequence of probability measures, we write _n=0^+∞η̃_n for the push forward of ^∞_*(⊗_n=0^+∞η̃_n). Given η̃ a probability measure on Γ, we write η̃^⊙ for _n=0^+∞η̃. [Extraction] Let Γ be a semi-group, let μ be a probability measure on Γ^ and let μ̃ be a probability measure on Γ^. We say that μ̃ is an extraction of μ if μ = ^∞_* μ̃ and if there exist constants C, β >0 such that for (g̃_n)_n∈∼μ̃ and for all n ∈, we have almost surely: (exp(β L(g̃_n)) | (g̃_k)_k < n) ≤ C. §.§ Rank and essential kernel of a probability distribution In this section, we describe the probabilistic behaviour of the kernel of a product of i.i.d. random matrices. For that we do not use the tools from Section <ref>, in particular, we do not care about the Euclidean structure of the space. Given h a linear map, we denote by (h) the rank of h the dimension of the image of h. 
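Before turning to the probabilistic statements, here is a small simulation (assuming Python with numpy; the two projections are an illustrative choice, not taken from the paper) of the behaviour studied in this subsection: the rank of the left-to-right product γ_n := γ_0⋯γ_n-1 is non-increasing and stabilises after finitely many steps (the eventual rank defined below), possibly strictly below the rank of every individual factor.

import numpy as np

rng = np.random.default_rng(2)

# Two rank-2 projections of R^3: q1 projects onto span(e1, e2) along e3, and q2 projects
# onto span(e2, e3) along e1. Each factor has rank 2, but as soon as both letters have
# appeared in the product, the product has rank 1, and it stays there.
q1 = np.diag([1.0, 1.0, 0.0])
q2 = np.diag([0.0, 1.0, 1.0])

P, ranks = np.eye(3), []
for _ in range(10):
    P = P @ (q1 if rng.random() < 0.5 else q2)
    ranks.append(int(np.linalg.matrix_rank(P)))
print(ranks)   # non-increasing, eventually constant: here the eventual rank is 1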
We say that a probability measure ν is supported on a set S if ν(S) =1, it is weaker than saying that S is the support of ν because we do not assume that S is closed nor minimal. Note that given E a vector space a measure ν may be supported on GL(E) [Rank of a distribution] Let E be a vector space and let ν be a step distribution on End(E). We define the eventual rank of ν as the largest integer (ν) such that: ∀ n≥ 0, ν^*n{γ∈End(E) | (γ) < (ν)}=0. [Eventual rank of a distribution] Let E be a Euclidean space and let Γ := End(E). Let ν be a probability measure on Γ. There is a probability measure κ̃ on Γ such that κ̃^⊗ is an extraction of ν^⊗ and Π_*κ̃ is supported on the set of endomorphisms of rank (ν). Given a non-random sequence γ = (γ_n)_n∈, the sequence ((γ_n))_n∈ is a non-increasing sequence of non-negative integers so it is stationary. Write r_γ for the limit of ((γ_n))_n∈. Then there is an integer n' ≥ 1 such that (γ_n) = r_γ for all n ≥ n'. Write n_γ for the minimal such n'. Note that γ↦ r_γ and γ↦ n_γ are measurable maps. Now let (γ_n)_n∈∼ν^⊗ be a random sequence. We define κ̃ to be the distribution of (γ_0, …, γ_n_γ - 1). Note that n_γ is a stopping time for (γ_n)_n∈ so the conditional distribution of (γ_n + n_γ)_n∈ relatively to (γ_0, …, γ_n_γ - 1) is ν^⊗. Hence, we have κ̃⊙ν^⊗ = ν^⊗, so κ̃^⊙ k⊙ν^⊗ = ν^⊗ for all k ∈ and by construction κ̃ is non-trivial so κ̃^⊙ = ν^⊗. Therefore, the measure κ̃^⊗ is an extraction of ν^⊗. Moreover Π_*κ̃ is the distribution of γ_n_γ which has rank r_γ and L_*κ̃ is the distribution of n_γ. Therefore, we only need to show that r_γ is almost surely constant and that n_γ has finite exponential moment. Let r_0 be the essential lower bound of r_γ the largest integer such that (r_γ≥ r_0) =1. Let n_0 ∈ be such that ((γ_n_0) = r_0) > 0 and write α := ((γ_n_0) = r_0). We claim that such an integer n_0 exists. Indeed, by minimality, we have (r_γ > r_0) < 1 so (r_γ = r_0) > 0, which means that ((γ_n_γ) = r_0) > 0. Let n_0 to be such that (n_γ≤ n_0 ∩ r_γ = r_0) > 0. Such an n_0 exists, otherwise n_0 would be almost surely infinite, which is absurd. Now since the sequence (γ_n) is i.i.d, we have ((γ_kn_0⋯γ_(k+1)n_0-1) = r_0) > 0 for all k ∈. Moreover these events are independents so for all k ∈, we have: (∀ k' < k, (γ_k'n_0⋯γ_(k'+1)n_0-1) > r_0 ) = (1-α)^k. Now note that the rank of a product is bounded above by the rank on each of its factor so: ∀ k∈, ((γ_kn_0) > r_0) ≤ (1-α)^k ∀ n∈, ((γ_⌊n/n_0⌋ n_0) > r_0) ≤ (1-α)^⌊n/n_0⌋ Now note that for all n∈, we have ⌊n/n_0⌋≥n/n_0-1 and ⌊n/n_0⌋ n_0 ≤ n so: ∀ n∈, ((γ_n) > r_0) ≤ (1-α)^n/n_0-1. Let C = 1/1-α and β = -log(1-α)/n_0 > 0. Then we have ((γ_n) > r_0) ≤ C exp(-β n) for all n ∈ and β >0. Note that for all n ∈, we have ((γ_n) > r_0) ≥(r_γ > r_0) so (r_γ > r_0) =0, which means that r_0 = r_γ. Hence ((γ_n) > r_0) = ( n < n_γ) for all n∈, so (n_γ > n)≤ C exp(-β n), which means that n_γ has finite exponential moment. [Essential kernel] Let E be a Euclidean space of dimension d≥ 2 and let ν be a probability distribution on End(E). We define the essential kernel of ν as: (ν) := {v ∈ E | ∃ n∈, ν^*n{h∈End(E) | hv =0} > 0 }. Let E be a Euclidean space of dimension d≥ 2 and let ν be a probability distribution on End(E). There is a probability distribution κ on End(E) which is supported on the set of rank (ν) endomorphisms and such that: (ν) = (κ) = {v ∈ E | κ{h∈End(E) | hv = 0} > 0 } ∀ v∈ E, lim_n → +∞ν^*n{h | hv = 0} = sup_n ∈ν^*n{h | hv = 0} = κ{h | hv = 0}. 
Moreover, there exists a constant α < 1 such that: ∀ v∈ E, sup_n∈ν^*n{h∈End(E) | hv = 0}∈ [0, α]∪{1}. Moreover, the set: (ν) := {v ∈ E | sup_n∈ν^*n{h∈End(E) | hv = 0} = 1} is a subspace of E which is ν-almost surely invariant. Let (γ_n) ∼ν^⊗. We define the random integer: n_0 := min{n ∈ | (γ_n-1⋯γ_0) = (ν) }. Let g := γ_n_0-1⋯γ_0 and let κ be the distribution of g. Then by Lemma <ref> applied to the transpose of ν, the random integer n_0 has finite exponential moment. Now let v ∈ E, and let n ∈, one has: ν^*n{h∈End(E) | hv = 0} = (γ_n-1⋯γ_0 v = 0) ≤(g v = 0) Indeed if n ≤ n_0 and γ_n-1⋯γ_0 = 0, then gv = 0 and if n ≥ n_0 then (γ_n-1⋯γ_0) = (g) therefore (γ_n-1⋯γ_n_0) ∩im(g) = {0}. So γ_n-1⋯γ_0 = 0 ⇒ gv = 0. The inequality (<ref>) is true for all n, this implies that: (ν) ⊂{v ∈ E | κ{h∈End(E) | hv = 0} > 0 }. Now let v be such that (g v = 0) > 0, then for all n ∈, we have: (γ_n-1⋯γ_0 v = 0) ≥(g v = 0) - (n_0 > n). Moreover (n_0 > n) → 0, so we have (<ref>) by (<ref>) and (<ref>). Therefore, there exists an integer n ∈ such that (γ_n-1⋯γ_0 v = 0) > 0. This proves (<ref>) by double inclusion. Let us prove (<ref>). Let V be the largest subspace of E such that g(V) ={0} almost surely. Let α := sup_n∈, v ∈ E ∖ Vν^*n{h∈End(E) | hv = 0}. Assume by contradiction that α = 1. Let (v_n) be a non-random sequence in E ∖ V such that (g v_n = 0)≥ 1 - 2^-n. Then we have ∑_n = 0^+∞(g v_n ≠ 0) < +∞. Therefore, by Borel-Cantelli's Lemma, the set {n ∈ | g v_n ≠ 0} is almost surely finite. Let V' := ⋂_m → +∞⟨(v_n)_n≥ m⟩. Then since E is finite dimensional, there is an integer m ∈ such that v_n ∈ V' for all n ≥ m. Moreover g(V') = {0} almost surely, so V' ⊂ V, which is absurd. This proves (<ref>) by contradiction. Let us prove (<ref>). Assume by contradiction that (γ_0(V) = V) ≠ 1. Let v ∈ V be such that (γ_0 v ∉ V) > 0. Then for all n∈, we have: (γ_n⋯γ_0 v ≠ 0) ≥(γ_0 v ∉ V) (γ_n⋯γ_1 γ_0 v ≠ 0 | γ_0 v ∉ V) ≥(γ_0 v ∉ V) (1-α)> 0, which is absurd because (γ_n⋯γ_0 v ≠ 0) → 0. Let ν be a probability distribution on End(E). The set (ν) is a countable union of subspaces of E that each have dimension at most (E) - (ν). Let d' := (E) - (ν). For all k ∈{0, …, (E)}, we denote by Gr_k(E) the set of subspaces of E of dimension k. Let κ be as in Lemma <ref> First we show that (ν) is included in a countable union of subspaces of dimension exactly d'. Given n ∈ and α > 0, we define: K_α := {x∈ E | κ{h∈End(E) | h x =0}≥α} Note that we have: (ν) = ⋃_m ∈ K_2^-m, so we only need to show that K_2^-m is included in a countable union of subspaces for all m ∈. Let m ∈, we claim that K_2^-m is included in a union of at most d'2^md' subspaces of E of dimension d'. Let g ∼κ, write α := 2^-m and assume that K_α≠{0}. Let N be an integer and let (x_1, …, x_N) ∈ K_α. Assume that for all 1 ≤ i_1 < … < i_k ≤ N with k ≤ d'+1, the space ⟨ x_i_1, …, x_i_k⟩ has dimension exactly k. In this case, we say that the family (x_i)_1≤ i≤ N is in general position up to d'. We claim that in this case: N ≤d'/α. To all index i ∈{1, …, N}, we associate a random integer variable a_i := 1_g x_i = 0∈{0,1} such that a_i = 1 when g(x_i) = 0 and a_i = 0 otherwise. Note that (g) has dimension at most d' almost surely. As a consequence, for all ≤ i_1 < … < i_d'+1≤ N, we have ⟨ x_i_j⟩_1≤ j ≤ d'+1 > ((g)) almost surely. Hence (⟨ x_i_j⟩⊂(g)) = 0 so g x_i_j≠ 0 for at least one index j. This means that, with probability 1, the random set of indices {1≤ i ≤ N | g x_i =0} does not admit any subset of size d'+1 so it has cardinal at most d'. 
In other words, ∑_i=1^N a_i≤ d' almost surely. Now, note that by definition of _α, we have (a_i) = (g x_i = 0) ≥α for all i ∈{1,…, N}. Hence N α≤∑_i=1^N E(a_i) ≤ d', which proves (<ref>). Now we want to construct a family (x_1, …, x_N) ∈ K_α that is in general position up to d' and such that: K_α⊂⋃_1≤ i_1 < … < i_d'≤ N⟨ x_i_j⟩_1≤ j ≤ d'. We do it by induction. Since we assumed that K_α≠{0}, there is a non-zero vector x_1 ∈ K_α. Now let j ∈_≥ 1. Assume that we have constructed a sequence (x_1, …, x_j) ∈ K_α that is in general position up to d'. If we have: K_α⊂⋃_1≤ i_1 < … < i_d'≤ j⟨ x_i_j⟩_1≤ j ≤ d', then we write N := j and the algorithm ends as (<ref>) is satisfied. Otherwise, we take: x_j+1∈ K_α∖(⋃_1≤ i_1 < … < i_d'≤ j⟨ x_i_1,…, x_i_d'⟩). Then we have constructed a family (x_1, …, x_j+1) ∈ K_α that is in general position up to d'. This process terminates after at most ⌊d'/α⌋ steps by (<ref>). Then we conclude by noting that for all N ∈, the set of multi-indices {1≤ i_1 < … < i_d'≤ N} has cardinality Nd'. Now for all m we choose a family (V^m_1, …, V^m_d'2^md') ∈Gr_d'(E)^d'2^md' such that K_2^-m⊂⋃_j=1^d'2^md' V^m,n_j and we have: (ν) ⊂⋃_m ∈, j∈, 1 ≤ j ≤d'2^md' V^m_j. This proves that (ν) is included in a countable union of subspaces of dimension exactly d'. Now we will show that (ν) is in fact equal to a countable union of subspaces. Let g ∼κ and let K := (ν). Let (V_k)_k∈∈Gr_d'(E)^ be such that K ⊂⋃ V_k. We will construct a family (V_k_0, …, k_j)_0 ≤ j ≤ d', (k_0, …, k_j) ∈^j+1 such that (V_k_0)_k_0 ∈ = (V_k)_k∈, and such that for all j ∈{1, …,d'}, we have: K ⊂⋃_(k_0, …, k_j) ∈^j+1 V_k_0, …, k_j, and for all multi-index (k_0, …, k_j) ∈^j+1, we have V_k_0, …, k_j⊂ V_k_0, …, k_j-1, with equality if and only if V_k_0, …, k_j-1⊂ K. Then we have: K = ⋃_(k_0, …, k_d') ∈^d'+1 V_k_0, …, k_d'+1. Indeed, for all (k_0, …, k_d') ∈^d+1, we either have V_k_0, …, k_d'⊂ K or V_k_0⊋ V_k_0, k_1⊋…⊋ V_k_0, …, k_d'. In the second case, we have (V_k_0, …, k_d') ≤(V_k_0) - d' so V_k_1, …, k_d' = {0}, which is a contradiction because 0 ∈ K by definition. We do it by induction. Let 0 ≤ c ≤ d'. Assume that we have constructed a family (V_k_0, …, k_j)_0 ≤ j ≤ c, (k_0, …, k_j) ∈^j+1, such that for all j ∈{0,…, c}, we have: K ⊂⋃_(k_0, …, k_j)∈^j V_k_1, …, k_j, and such that for all j∈{1, c} and all (k_0, …, k_j)∈^j+1, we have V_k_0, …, k_j⊂ V_k_0, …, k_j-1, with equality if and only if V_k_0, …, k_j-1⊂ K. Let (k_0, …, k_c) ∈^c+1 be a multi-index such that V_k_1, …, k_c⊄K. Then we have almost surely g(V_k_0, …, k_c) ≠{0} so the restriction of h to V_k_0, …, k_c has rank at least 1 almost surely. By the previous argument, the set: K ∩ V_k_0, …, k_c = {x∈ V_k_0, …, k_c | (h x = 0) > 0} is included in the union of a countable family of subspaces of V_k_0, …, k_c that have dimension (V_k_0, …, k_c) - 1. For all multi-index (k_0, …, k_c)∈^c+1 such that V_k_1, …, k_c⊄K, we define ( V_k_0, …, k_c+1)_k_c+1∈ to be such a family. For every other multi-index (k_0, …, k_c) ∈^c+1, we define V_k_0, …, k_c+1 := V_k_0, …, k_c for all k_c ∈. Let ν be a probability distribution over End(E) and (γ_n) be a random sequence of distribution ν^⊗. Then for every x∈ E, the sequence (γ_n x = 0) is non-decreasing and its limit is positive if and only if x ∈(ν). §.§ Rank one boundary of a semi-group Given a subset A of a topological space X, we denote by 𝐜𝐥_X(A) the closure of A in X. 
Note that saying that an endomorphism has rank one is equivalent to saying that it is the product of a non-trivial vector on the left by a non-trivial linear form on the right. Given a probability measure ν on a topological space X, we denote by 𝐬𝐮𝐩𝐩_X(ν) or simply 𝐬𝐮𝐩𝐩(ν) the smallest closed subset of X on which ν is supported. Then 𝐬𝐮𝐩𝐩(ν) is characterized by the fact that it is closed and for all open 𝒰⊂ X, we have ν(𝒰) > 0 if and only if 𝒰∩𝐬𝐮𝐩𝐩(ν) ≠∅. We remind that given E a vector space and u∈ E∖{0}, we denote by [u] the projective class of u and we denote by 𝐏(E) the projective space of E. Given X ⊂𝐏(E), we will write "Let [u] ∈ X" for "Let u be a non-zero vector such that [u] ∈ X". [Rank one boundary] Let E be a Euclidean space and let Γ < End(E) be a sub-semi-group. Let Γ:= 𝐜𝐥_End(E)(Γ). We denote by ∂Γ the rank-one boundary of Γ, defined as: ∂Γ := {[γ] | γ∈Γ, (γ) = 1} We define the left and right boundaries of Γ as: ∂_uΓ := {[hv] | [h] ∈∂Γ, v ∈ E ∖(h)}⊂𝐏(E) ∂_wΓ := {[fh] | [h] ∈∂Γ, f ∈ E^* ∖(h^*)}⊂𝐏(E^*). [Range and boundary of a distribution] Let E be a Euclidean space and let ν be a probability measure on End(E). We denote by Γ_ν the range of ν defined as the smallest closed sub-semi-group of End(E) that has measure 1 for ν. We define ∂ν := ∂Γ_ν and ∂_uν := ∂_uΓ_ν and ∂_wν := ∂_wΓ_ν. [Invariant subspaces and irreducibility] Let E be a Euclidean space. Let ν be a probability measure on End(E). Let S := 𝐬𝐮𝐩𝐩(ν). Let V ⊂ E be a proper non-trivial subspace. If S V ⊂ V then ν is not irreducible. Let N ≥ 1 and let V_1, ⋯, V_n ⊂ E be proper non-trivial subspaces. If ⋃_i =1^N S · V_i ⊂⋃_i =1^N V_i then Γ is not strongly irreducible. Let Γ = ⋃_n∈ S^· n = ⋃_n∈Π(S^n) be the semi-group generated by S. Note that for all n∈, one has ν^*n(Γ) = 1. Let V ⊂ E be a proper non-trivial subspace such that SV ⊂ V. The fact that V is a proper subspace implies that there is a linear form f ∈ E^*∖{0} such that V ⊂(f). The fact that V is not trivial implies that there is a vector v ∈ V ∖{0}. Let f and v be as above. We have Γ· v ⊂ V, hence f γ v = 0 for all γ∈Γ so ν is not irreducible. Let N ≥ 1 and let V_1, ⋯, V_n ⊂ E be a family of proper non-trivial subspaces such that ⋃_i =1^N S · V_i ⊂⋃_i =1^N V_i. Let f_1, …, f_n ∈ E^*∖{0} be such that V_i ⊂(f_i) for all i∈{1, …, N}. Let v ∈ V_1 ∖{0}. Then one has Γ· v ⊂⋃ V_i, hence ∏_i = 1^N f_i γ v = 0 for all γ∈Γ so ν is not strongly irreducible. We call irreducible semi-group a semi-group Γ⊂End(E) such that for all v ∈ E∖{0} and all f ∈ E^*∖{0}, there is an element γ∈Γ such that f γ v ≠ 0. Let E be a Euclidean space and let Γ < End(E) be an irreducible semi-group. Then we have a factorization: ∂Γ = { [uw] | [u] ∈∂_uΓ, [w] ∈∂_wΓ}. Note that the space 𝐏(End(E)) is metrizable so the closure is characterized by sequences. Let π be a rank one endomorphism. Saying that [π] ∈∂Γ is equivalent to saying that there is a sequence (γ_n) ∈(Γ∖{0})^ such that [γ_n] → [π]. Note also that the product map is continuous so Γ is a semi-group. Therefore, for all [π_1], [π_2] ∈∂Γ, and for all γ∈Γ such that π_1 γπ_2 ≠ 0, we have [π_1 γπ_2] ∈∂Γ. Let v_1 ∈∂_uΓ and let f_2∈∂_v Γ. Let v_2 ∈ E and f_1 ∈ E^* be such that [v_1 f_1] ∈∂Γ and [v_2 f_2] ∈∂Γ. By definition of the irreducibility, there is an element γ∈Γ such that f_1 γ v_2 ≠ 0. Let γ be such an element. Then we have v_1 f_1 γ v_2 f_2 ≠ 0 hence [v_1 f_1 γ v_2 f_2] ∈∂Γ. Moreover [v_1 f_1 γ v_2 f_2 ] = [v_1 f_2] because f_1 γ v_2 is a scalar, therefore [v_1 f_2] ∈∂Γ. 
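The following small numerical sketch is purely illustrative and is not used anywhere in the proofs. It shows, in the simplest possible situation of the closed semi-group generated by a single matrix, how normalised powers converge to a rank-one limit; this is the phenomenon captured by the rank-one boundary, and by the characterisation of proximality proved below. The matrix h is an arbitrary proximal example chosen only for this illustration.

import numpy as np

# Arbitrary proximal example: eigenvalues 2 and 1/2, so the spectral gap is log(2 / (1/2)) = log 4 > 0.
h = np.array([[2.0, 1.0],
              [0.0, 0.5]])

for n in (1, 5, 20, 50):
    p = np.linalg.matrix_power(h, n)
    p = p / np.linalg.norm(p, 2)                # projective representative of [h^n]
    s = np.linalg.svd(p, compute_uv=False)
    # The singular gap (logarithm of the ratio of the two singular values) grows linearly in n.
    print(n, np.log(s[0] / s[1]))

# The projective classes [h^n] converge to the class of a rank-one matrix u w^T,
# i.e. to a point of the rank-one boundary of the closed semi-group generated by h.
p = np.linalg.matrix_power(h, 50)
p = p / np.linalg.norm(p, 2)
print(np.round(p, 6))
print(np.linalg.matrix_rank(p, tol=1e-9))       # numerically of rank one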
[Characterisation of proximality] Let E be a Euclidean space and let ν be a probability measure on End(E). Let (γ_n)∼ν^⊗. Assume that ν is irreducible and that (ν) ≥ 1. Then the following assertions are equivalent: * ν is not proximal (in the sense of Definition <ref>). * There is a constant B such that (γ_n)≤ B almost surely and for all n∈. * ∂ν = ∅. We assume (<ref>) and we claim that we have (<ref>). Note that is a continuous map over End(E) ∖{0} and it only depends on the projective class so for B as in (<ref>), we have (h) ≤ B for all h ∈Γ_ν∖{0}. It implies that there is no rank one endomorphism in Γ_ν. Therefore ∂ν = ∅. Now we prove the converse by contraposition. Assume that ((γ_n))_n∈ is not almost surely bounded. It means that for all B, there is an integer n such that ((γ_n) > B) > 0 or equivalently, there is a matrix h ∈𝐬𝐮𝐩𝐩(ν^*n) such that (h) > B. Then there is a sequence (h_k)∈Γ_ν^ such that (h_k) → +∞. The space 𝐏(End(E)) is compact so the sequence [h_k] has a limit point. Let [π] be such a limit point, then we have (π) = +∞ so π has rank one, hence [π]∈∂ν. We assume that ν is proximal and show that (<ref>) is false. The map is not continuous on End(E) ∖{0} but it is on the set of proximal matrices. That means that given a matrix h such that (h) > 0, there is a neighbourhood 𝒩 of h such that (h') ≥1/2(h) for all h'∈𝒩. Let n be an integer such that ((γ_n) > 0) > 0, then there is a matrix h ∈𝐬𝐮𝐩𝐩(ν^*n) such that (h) > 0, it means that (h^m) m→∞⟶ +∞. Moreover, for all m ∈, we have h^m ∈𝐬𝐮𝐩𝐩(ν^*nm), hence ((γ_nm) ≥(h^m)-1) > 0, which contradicts (<ref>). Now we assume that ∂ν≠∅ and show that ν is proximal. First we prove that there is [π] ∈∂ν such that (π) = +∞, which simply means that π^2 ≠ 0. Let [w] ∈∂_wν. Let [u] ∈∂_uν be such that wu ≠ 0. Such a u exists because Γ_ν is invariant by left multiplication by Γ_ν. Therefore the set {0}∪{u∈ E | [u]∈∂_uν} is invariant by the action of Γ_ν, which is irreducible, hence it is not included in (w) by Lemma <ref>. Then by Lemma <ref>, we have [uw] ∈∂ν and since wu≠ 0, we have (uw)^2≠ 0. Let 𝒩 be an open neighbourhood of uw such that (h') ≥ 1 for all h'∈𝒩. This neighbourhood intersects Γ_ν so 𝒩 intersects 𝐬𝐮𝐩𝐩(ν^*n) for some n, which means that ν is proximal. §.§ Construction of the Schottky measure Remind that a measurable binary relation over a measurable space Γ is a subset Å⊂Γ×Γ that is measurable for the product σ-algebra. Given g,h ∈Γ, we write g Å h to say that (g,h) ∈Å. Given S, T ⊂Γ, we write S Å T to say that S × T ⊂Å. Given n ∈ and (g_0, …, g_n )∈Γ^n+1, we write g_0 Å…Å g_n to say that g_i Å g_i+1 for all i ∈{0, …, n-1}. Let Γ be a measurable space, let Å be a measurable binary relation on Γ and let 0 ≤ρ < 1. Let ν_s be a probability measure on Γ. We say that ν_s is ρ-Schottky for Å if: ∀ h ∈Γ, ν_s{γ∈Γ | h Åγ}≥ 1 - ρ and ν_s{γ∈Γ | γÅ h}≥ 1 - ρ. Let Γ be a measurable space, let Å⊂Å' be measurable binary relation on Γ and let 0 ≤ρ≤ρ' < 1. Let ν be a probability measure on Γ that is ρ-Schottky for Å. Then ν_s is also ρ'-Schottky for Å'. We recall that the alignment Å^ has been defined in Definition <ref>. Given 0 < ≤ 1 and g,h two matrices, we write gÅ^ h when gh≥gh. Let E be a Euclidean space, let Γ < End(E) be a strongly irreducible semi-group and let 0 < ρ≤ 1. There exist an integer N∈, a constant > 0 and a family ([π_1], …, [π_N])∈∂Γ^N such that: ∀ h ∈End(E)∖{0}, #{k | π_kÅ^ h}≥ (1-ρ) N and #{k | hÅ^π_k}≥ (1-ρ) N. Let d := (E). Let m ∈. 
Assume that we have constructed a family ([u_1], …, [u_m]) ∈∂_uΓ^m that is in general position in the sense that for all k ≤ d, and for all 1 ≤ i_1 < … < i_k ≤ M, we have (⟨ u_i_1, …, u_i_k⟩) = k. Let: u_m+1∈{u∈ E | [u]∈∂_uΓ}∖⋃_k ≤ d-1, 1≤ i_1<… <i_k≤ m⟨ u_i_j⟩_1 ≤ j≤ k. Such a u_m+1 exists because {u∈ E | [u]∈∂_uΓ}∪{0} is Γ-invariant and it is not {0} by Lemma <ref>. Hence {u∈ E | [u] ∈∂_u Γ} can not be included in ⋃_i_1<… <i_k⟨ u_i_j⟩_1 ≤ j≤ k, which is a finite union of hyperplanes. Then [u_m+1] ∈∂Γ and we can easily check that ([u_1], …, [u_m+1]) is in general position. let M := ⌈d-1/ρ⌉. Let ([u_1], …, [u_M]) ∈∂_uΓ^M be in general position. We can construct such a family by induction using the above argument. Let ([w_1], …, [w_M]) ∈∂_wΓ^M be in general position. By the above argument applied to Γ^* := {γ^* | γ∈Γ}⊂End(E^*), which is also strongly irreducible, we can construct such a family. Let N := M^2. Given i,j ∈{1, …,M}, we define π_Mi + j := u_i w_j. Then ([π_1], …, [π_N])∈∂Γ^N by Lemma <ref>. Note also that given g,h ∈End(E), saying that gÅ^ h is equivalent to saying that gh/gh≥. Let h ∈End(E) ∖{0}. Let I^h := {i | u_i ∈(h)}. Since the family ([u_1], …, [u_M]) is in general position and ⟨ u_i⟩_i∈ I⊂(h), he have # I ≤ d-1. Let J^h := {j | w_j ∈(h^*)}. By the same argument, we have #J^h ≤ d-1. Now let: ψ(h) := max_I, J ⊂{1, …, M}, #I≤ d-1, #J ≤ d-1min{min_i ∉ Ih u_i/hu_i, min_j ∉ Jw_j h/w_jh}. By the previous argument, one has ψ(h) > 0 for all h. Moreover the maps h↦h v/hv and f h/fh are continuous for all v ∈ E∖{0} and all f∈ E^* ∖{0}. Hence ψ is continuous. Moreover, ψ is invariant by scalar multiplication so there is a continuous map ϕ : 𝐏(End(E)) → (0,1] such that ϕ([h]) = ψ(h) for all h ∈End(E) ∖{0}. The projective space 𝐏(End(E)) is compact so ϕ (𝐏(End(E))) is compact. Let be its lower bound. Now let h ∈End(E)∖{0}. The set of indices {k | π_kÅ^ h} is {M i + j | i,j ∈{1, …,M}, w_j Å^ h}, which has cardinality at least M (M-d+1). Indeed, since ψ(h) ≥, a sufficient condition to have w_j Å^ h is for j not to be included in a set J that realises the maximum in the definition of ψ(h). Moreover we have M ρ≥ d - 1 so M (M-d+1) ≥ (1 - ρ) N. By the same argument #{k | hÅ^π_k}≥ (1 - ρ) N. Let E be a Euclidean space, let ν be a strongly irreducible and proximal probability measure on End(E), let 0 <ρ≤ 1 and let K ≥ 8. There exist an integer N, two constants α', ∈ (0,1), a family (n_k)_1≤ k ≤ N∈^N and a family (S'_k)_1≤ k ≤ N of measurable subsets of Γ such that ν^*n_k (S'_k) ≥α' for all 1≤ k ≤ N and such that: ∀ h ∈End(E), #{k | S'_kÅ^2 h}≥ (1-ρ) N and #{k | hÅ^2 S'_k}≥ (1-ρ) N, ∀ j ∈{1, …, N}, #{k | S'_kÅ^ S'_j}≥ (1-ρ) N and #{k | S'_jÅ^ S'_k}≥ (1-ρ) N, ∀ h ∈⋃_k = 1^N S'_k, (h) ≥ K |log()| + K log(2). Let N ∈, let > 0 and let ([π_1], …, [π_N])∈∂ν^N be such that: ∀ h ∈End(E)∖{0}, #{k | π_kÅ^3 h}≥ (1-ρ) N and #{k | hÅ^3π_k}≥ (1-ρ) N. Such a family exists by Lemma <ref>. To all k ∈{1, …, N}, we associate the open set: S'_k := { h∈End(E)∖{0} | > h/h-π_k/π_k, (h) > K |log()| + K log(2)}. Now let k ∈{1, …, N} and h∈End(E) ∖{0}. Assume that h Å^3π_k, then we claim that for all h' ∈ S'_k, we have h Å^2 h'. Note that the right multiplication by h' is h'-Lipschitz so hh'/h- π_kh'/π_k≤h'. We assumed h Å^3π_k, which means that π_kh'/π_k≥ 3h', hence by triangular inequality, we have hh'/h≥ 2 h'. The same reasoning works the same for the left alignment and we have: ∀ h ∈End(E)∖{0}, #{k | S'_kÅ^2 h}≥ (1-ρ) N and #{k | hÅ^2 S'_k}≥ (1-ρ) N. 
Now let j,k ∈{1, …, N} be such that π_j Å^3π_k and let h ∈ S'_j and h' ∈ S'_k. Then by the above argument, we have h Å^2π_k and by the same argument, we have hÅ^ h'. Hence: ∀ j ∈{1, …, N}, #{k | S'_k Å^ S'_j}≥ (1-ρ) N and #{k | S'_j Å^ S'_k}≥ (1-ρ) N. Let k∈{1, …, N}. The interior of S'_k contain π_k. It means that the interior of S'_k intersects Γ_ν = 𝐜𝐥(⋃_n=0^∞𝐬𝐮𝐩𝐩(ν^*n)). Hence, there is an integer n_k such that the interior of S'_k intersects 𝐬𝐮𝐩𝐩(ν^*n_k) and then S'_k intersects 𝐬𝐮𝐩𝐩(ν^*n_k) because it is a cone and ν^n_k(S'_k) > 0 by characterization of the support. Let (n_k)∈^N be such that ν^*n_k(S'_k) > 0 for all k∈{1, …, N}. Let α' := min_k ν^*n_k(S'_k). Let E be a Euclidean space, let ν be a strongly irreducible and proximal probability measure on End(E), let 0 < ρ < 1 and let K ≥ 8. There exist an integer N, two constants α, ∈ (0,1), an integer m and a family (S_k)_1≤ k ≤ N of measurable subsets of End(E) such that ν^*m (S_k) ≥α for all 1≤ k ≤ N and such that: ∀ h ∈End(E), #{k | S_kÅ^ h}≥ (1-ρ) N and #{k | hÅ^ S_k}≥ (1-ρ) N, ∀ h ∈⋃_k = 1^N S_k, (h) ≥ K |log()| + K log(2). Without loss of generality, we may assume that ρ < 1/3 Let N ∈, let α', ∈ (0,1), let (n_k) and let (S'_k) be as in Corollary <ref>. To all index k∈{1, …, N}, we associate two indices index i_k, j_k ∈{1, …, N} such that S'_k Å^ S'_i_k and S'_i_kÅ^ S'_j_k and S'_j_kÅ^ S'_k. By (<ref>), such indices i_k,j_k exist because: #{(i,j) | S'_iÅ^ S'_k Å^ S'_iÅ^ S'_jÅ^ S'_kÅ^ S'_j}≥ (1 - 2ρ) N (1-3ρ) N > 0. Hence the set of all possible values for i_k,j_k is non-empty. Now let m' be the smallest common multiple of {n_k + n_i_k | 1 ≤ k ≤ N}, let m” be the smallest common multiple of {n_k + n_j_k | 1 ≤ k ≤ N} and let m = m' + m”. Let k ∈{1, …, N}. We define p_k := m'/n_k + n_i_k and q_k := m”/n_k + n_j_k and: S_k := (S'_k · S'_i_k)^· p_k·(S'_j_k· S'_k)^· q_k = Π( (S'_k × S'_j_k)^p_k×(S'_j_k× S'_k)^q_k). Then we have ν^*m(S_k) ≥(ν^n_k(S'_k)ν^n_i_k(S'_i_k))^p_k(ν^n_i_k(S'_i_k)ν^n_j_k(S'_k))^q_k≥α'^2 p_k + 2 q_k≥α'^2m =: α. Now let h ∈End(E) ∖{0} and let k∈{1, …, N}. Assume that h Å^2 S'_k, then by Lemma <ref>, we have h Å^ S_k. If we instead assume that S'_k Å^2 h, then by the same argument applied to the transpose, we have S_k Å^ h. Lemma <ref> also implies that min(S_k) ≥min(S'_k) for all k. Let E be a Euclidean space, let Γ∈{End(E), GL(E)} and let ν be a strongly irreducible and proximal probability measure on Γ, let 0 <ρ < 1 and let K ≥ 8. There exist an integer m, two constants α, ∈ (0,1) and a probability measure ν̃_s on Γ^m such that: * The measure Π_*ν̃_s is ρ-Schottky for Å^. * The measure ν̃_s is absolutely continuous with respect to ν^⊗ m in the sense that αν̃_s ≤ν^⊗ m. * We have _*Π_*ν̃_s[K |log()| + K log(2),+∞] = 1 for all (h_1, …, h_m) in the support of ν̃_s, we have (h_1⋯ h_m) ≥ K |log()| + K log(2). * The support of ν̃_s is compact in Γ^m. Let m,N∈, α,∈(0,1) and (S_k)_1 ≤ k≤ N be as in Lemma <ref>. Define f : Γ^m →_≥ 0 as: f := ∑_k = 1^N1_S_k∘Π/ν^*m(S_k). Then f ≤N/α because we assumed that ν^*m(S_k) ≥α for all k. Moreover, we can check that ∫_Γ^m f dν^⊗ m = N and for all k ∈{1, …, N} we have ∫_Π^-1(S_k) f dν^⊗ m≥ 1. Let: ν̃_s := f ν^⊗ m/∫_Γ^m f dν^⊗ m and ν_s := Π_*ν̃_s. Then α/Nν_s ≤ν^*m by definition. Moreover, for all I ⊂{1, …, N}, we have ν_s(⋃_i∈ IS_i)≥# I/N. Hence ν_s is ρ-Schottky by (<ref>). Moreover ν_s is supported on ⋃ S_k and (S_k)⊂[K |log()| + K log(2),+∞] for all k so _*ν_s[K |log()| + K log(2),+∞] = 1. Now we only need to show that we can moreover assume that ν̃_s has compact support. 
Let β∈ (0,1). There is a compact 𝐂⊂Γ^m such that ν̃_s(𝐂) > 1-β. Let ν̃^𝐂_s := 1_𝐂ν̃_s/ν̃_s(K). Let ν_s^𝐂 := Π_*ν̃_s^𝐂. Then ν_s^𝐂(⋃_i∈ IS_i)≥1/N - β for all I ⊂{1,…, N}. Hence ν_s^𝐂 is (ρ + β)-Schottky. Moreover α(1-β)ν̃_s ≤ν^⊗ m. This is true for all 0 < β < 1/3 - ρ so ρ + β can take any value in (0,1) and we always have α(1-β) > 0. § PIVOTING TECHNIQUE §.§ Statement of the result and motivation In this section, we denote by Γ a measurable semi-group that we endow with a binary relation Å and a subset S ⊂Γ. The idea is to think of Γ as End(E) or GL(E), think of Å as Å^ and think of S as a compact subset of Γ such that min(S) ≥ K |log()| + K log(2). We denote by Γ the associated semi-group of words. We define the semi-binary relation Å^S ⊂Γ×Γ recursively. Let g ∈Γ and let γ̃∈Γ. We write g Å^S γ̃ if g ∈ S and one of the following conditions holds: * There exist words g̃_0, g̃_1, g̃_2 ∈Γ such that γ̃= g̃_0 ⊙g̃_1 ⊙g̃_2, and Π(g̃_1) ∈ S and Π(g̃_0) ÅΠ(g̃_1) ÅΠ(g̃_2) and g ÅΠ(γ̃). * There exist words g̃_0, g̃_1, g̃_2 ∈Γ such that γ̃= g̃_0 ⊙g̃_1 ⊙g̃_2, and Π(g̃_1) ∈ S and g Å^S g̃_0 and Π(g̃_0)ÅΠ(g̃_1) ÅΠ(g̃_2). Given 0 ≤ k < n ∈, we write χ_k^n : Γ^{0, …,n-1}→Γ ; (γ_0, …, γ_n-1)↦γ_k for the k-th coordinate projection, in the same way, we define χ_k^∞: Γ^→γ. We sometimes omit the total length l ∈∪{∞} and write χ_k instead of χ_k^l. Given γ∈Γ̃, and k < L(γ̃), we call χ_k(γ̃) the k-th character of γ̃. [Pivotal extraction] Let Γ be a measurable semi-group, let Å be a binary relation on Γ and let S ⊂Γ be measurable. Let ν be a probability measure on Γ, let 0< α <1, let 0< ρ <1/5 and let m ∈. Let ν̃_s be a probability measure on Γ^m such that αν̃_s≤ν^⊗ m and let ν_s := Π_*ν̃_s. Assume that ν_s is supported on S and ρ-Schottky for Å. Then there exist constants C, β > 0 that only depend on (α, ρ, m) and an extraction μ̃ of ν^⊗ such that for (g̃_k)_k∈∼μ̃, for (γ_n) = _m =0^∞g̃_k ∼ν^⊗, all the following conditions hold: * For all k ∈, we have Π(g̃_2k) ÅΠ(g̃_2k +1) Å^S g̃_2k+2 almost surely. * For all k ∈, we have L (g̃_2k+1) = m almost surely and the conditional distribution of g̃_2k +1 relatively to (g̃_j)_j ≠ 2k+1 is almost surely bounded above by ν̃_s/1-2ρ for all measurable A ⊂Γ^m, we have almost surely: (g̃_2k +1∈ A | (g̃_j)_j ≠ 2k+1) ≤ν̃_s(A)/1-2ρ * For all k ∈, we have almost surely: ∀ l ∈, (L (g̃_k) > l | (g̃_j)_j ≠ k) ≤ C exp(-β l). * For all n ∈, and for all measurable A ⊂Γ∖⋃_k = 0^m-1χ_k^m(𝐬𝐮𝐩𝐩(ν̃_s)), we have: (γ_n∈ A | (L(g̃_2k))_k∈) ≤ν(A)/1-α. Sections <ref> and <ref> are devoted to the proof of Theorem <ref>. In section <ref>, we construct a preliminary extraction that gives us a sequence of independent random matrices alternating between an unknown distribution and the Schottky distribution ν_s of Theorem <ref>. Then from this ping-pong extraction, we construct an extraction for which the unknown words now have a large squeeze coefficient. We do not claim that the words in this preliminary extraction are aligned. In section <ref>, we implement the pivoting technique to construct an extraction which is aligned. This means that we look at the sequence of words that we have constructed in section <ref> from past to future. All oddly indexed words are candidate pivotal times. 
The pivoting technique is an inductive way to eliminate pivotal times so that the word indexed by each selected pivotal time is aligned with its neighbours: aligned on the left with the product of everything up to the previous selected pivotal time, and aligned on the right with the product of everything up to the next candidate or selected pivotal time. We move forward and select the current candidate pivotal time when the ν̃_s-distributed word is aligned with both its neighbours. Otherwise, we eliminate the candidate pivotal time and move backwards: we concatenate everything together and look at the last candidate pivotal time. We then use a version of (<ref>), which holds throughout the inductive construction, to show that the probability of backtracking at each step is at most ρ/1-2ρ, and we get an exponential control over the size of the backtrack. The issue is that the algorithm does not guarantee proper alignment but alignment in the sense of Å^S. Indeed, the selected pivotal time that guarantees the alignment satisfies three alignment conditions, hence (<ref>) does not hold any more for this time, and therefore we have to discard it. Then by induction we show that the previous candidate pivotal time is Å^S-aligned with the concatenation of all the words we have just concatenated together. Hence the inductive and non-symmetric definition of Å^S. Note that in concrete applications the alignment Å^S implies genuine alignment, as testified by the next two results. [Rigidity of the alignment in the toy model] Let Γ = ⟨ a,b,c | a^2=b^2=c^2=1⟩. Let Å := {(g,h); |g · h|= |g|+|h|} and let S := Γ∖{1}. Let γÅ^S g̃ and let g = Π(g̃). Then we have γÅ g and g ∈ S. [Rigidity of the alignment of matrices] Let E be a Euclidean vector space. Let Γ = End(E), let ∈(0,1). Let Å⊂Å^ and let S ⊂{γ∈Γ | (γ)≥ 8|log()|+ 10log(2)} be measurable. Let (γ̃_n)_n ∈∈Γ^ and let (γ_n)_n ∈ := (Π(γ̃_n))_n ∈. Assume that for all n ∈, we have: γ_2n+1∈ S and γ_2nÅγ_2n+1Å^S γ̃_2n+2. Then we have γ_2n+1Å^/2γ_2n+2 and (γ_2n+2) ≥ 4|log()| + 7 log(2) for all n ∈ and γ_i⋯γ_j-1Å^/4γ_j⋯γ_k-1 for all 0≤ i ≤ j ≤ k ∈. Let n ∈. We want to show that γ_2n+1Å^/2γ_2n+2 and (γ_2n+2) ≥ 4|log()| + 7 log(2). Since γ_2n+1Å^S γ̃_2n+2, there exist words g̃_0, g̃_1, g̃_2 ∈Γ such that γ̃_2n+2 = g̃_0 ⊙g̃_1 ⊙g̃_2 and, writing g_i := Π(g̃_i) for all i∈{0,1,2}, we have g_0 Å g_1 Å g_2 and g_1∈ S. Then by Lemma <ref>, we have (γ_2n+2) ≥ 4|log()| + 7log(2). If we assume that γ_2n+1Å (g_0 g_1 g_2), then trivially γ_2n+1Å^/2γ_2n+2. Otherwise, we assume that γ_2n+1Å^S g̃_0. We claim that for all γÅ^S g̃, there is an integer l ≤ L(g̃) and a family h_0,…, h_2l∈End(E) such that h_0⋯ h_2l = g, and γÅ h_0 and (h_0) ≥ 4|log()| + 7log(2) and for all 0≤ i < l, we have h_0⋯ h_2iÅ h_2i+1Å h_2i+2 and h_2i+1∈ S. We prove the claim by induction on the length of g̃. Consider a decomposition g̃ = g̃_0 ⊙g̃_1 ⊙g̃_2 with g_1 ∈ S. Since g_1 ∈ S, the word g̃_1 has positive length, therefore L(g̃_0) < L(g̃). If γÅ g, then we simply set l := 0 and h_0 := g. Note that if L(g̃)=1, then γÅ g. If we do not have γÅ g, then we are in the second case of the definition of Å^S and therefore we may assume that γÅ^S g̃_0. By the induction hypothesis, there is an integer l' ≤ L(g̃_0) and a family h_0,…, h_2l' such that h_0⋯ h_2l' = g_0, and γÅ h_0 and (h_0) ≥ 4|log()| + 7log(2) and for all 0≤ i < l', we have h_0⋯ h_2iÅ h_2i+1Å h_2i+2 and h_2i+1∈ S. Let l := l' + 1 ≤ L(g̃), let h_2l-1 := g_1 and let h_2l := g_2. Then the family (h_0,…,h_2l) satisfies the claim.
We have constructed a family h_0,…, h_2l such that h_0⋯ h_2l = γ_2n+2, and γ_2n+1Å^ h_0 and (h_0) ≥ 4|log()| + 5|log(2)| and for all 0≤ i <l, we have h_0⋯ h_2iÅ^ h_2i+1Å^ h_2i+2 and (h_2i+1)≥ 4|log()| + 7log(2). Then by lemma <ref>, we have γ_2n+1Å^/2γ_2n+2. Let 0≤ i ≤ j ≤ k ∈, we have γ_iÅ^/2…Å^/2γ_k-1 and (γ_n)≥ 4|log()| + 7log(2) ≥ 2|log(/2)| + 3log(2) so by Lemma <ref>, we have γ_i⋯γ_j-1Å^/4γ_j⋯γ_k-1. §.§ Construction of the ping-pong extraction Let 0 < α < 1. We write 𝒢_α for the geometric probability measure of scale factor α defined by 𝒢_α{k} := α^k(1-α) for all k ∈. Note that 𝒢_α has a finite exponential moment. We write ℬ_α for the Bernoulli measure of parameter α, the probability measure which gives mass α to 1 and mass 1-α to 0. Given ν a probability distribution over a measurable semi-group, given η a probability distribution on and given (w,(γ_k))∼η⊗ν^⊗, we write ν^*η for the distribution of γ_0⋯γ_w-1 and ν^⊗η for the distribution of (γ_0,…,γ_w-1). When ν̃ is defined on a semi-group of words, we write ν̃^⊙η instead of ν̃^*η. Given η be a probability distribution on , given m ∈ and given w ∼η, we write m +_* η for the distribution of m+w, and we write m ×_*η for the distribution of m× w. Given κ̃ a probability distribution on Γ, we write κ̃^⊙ for the distribution ^∞_*κ̃^⊗, which is defined on Γ^. The distribution κ̃^⊗ is defined on Γ̃^ and κ̃^⊙ is defined on Γ^. The following lemma comes from basic probability theory, we give a complete proof as a warm up. Let Γ be a measurable semi-group. Let ν be a probability measure on Γ, let 0 < α < 1, and let m ∈. Let ν̃_s be a probability measure on Γ^m such that αν_s ≤ν^⊗ m. Let κ̃: = (ν^⊗ m - αν̃_s/1 - α)^⊙𝒢_1-α. Then (κ̃⊗ν̃_s)^⊗ is an extraction of ν^⊗. Moreover, for all A ⊂Γ, and for g̃∼κ̃, we have: ∀ l ∈ m,∀ 0≤ k < l, (χ_k^l(g̃)∈ A | L(g̃) = l) ≤ν(A)/1-α. Let (x_n)∼(αδ_1^⊗ m + (1-α)δ_0^⊗ m)^⊙ x_km +r = x_km for all k∈ and all 0≤ r < m and (x_km)_k ∈∼ℬ_α^⊗. Let (g_n) ∼ν̃_s^⊙ and let (h_n) ∼(ν^⊗ m -αν̃_s/1-α)^⊙. Assume that x, g, h are independent. Let (γ_n)_n∈ := ( g_n^x_n h_n^1-x_n)_n∈ γ_n = g_n for all n∈ such that x_n = 1 and γ_n = h_n for all n∈ such that x_n = 0. Then the sequence ((γ_km,…, γ_(k+1)m-1))_k∈ is i.i.d. because the random sequence ((x_km, …, x_(k+1)m-1, g_km, …, g_(k+1)m-1, h_km, …, h_(k+1)m-1))_k∈ is. Moreover, for all k ∈, the distribution of (γ_km, …, γ_(k+1)m-1) is αν̃_s + (1-α) ν^⊗ m -αν̃_s/1-α = ν^⊗ m. Hence (γ_n) ∼ν^⊗. Now let (w'_j)_j∈ be a random sequence of integers such that for all j ∈_≥ 1, the integer w'_j is almost surely the j-th smallest element of {k ≥ 1 | x_km - 1= 1}. With that definition, (w'_k-1)_k∈ is the sequence of number of failures between consecutive successes of a Bernoulli process of parameter α so w'_j ∼ (1+_*𝒢_1-α)^⊗. Let (w_k) be the random sequence of integers such that w_2k+1 = m and w_2k = m '(w'_k-1) for all k∈. Then for all k∈, we have γ^w_2k+1 = g^w_2k+1 and γ^w_2k = h^w_2k. Moreover, the sequences w, g and h are independent so γ^w_2k is independent of γ^w_2k+1 for all k and (γ^w_2k,γ^w_2k+1) is i.i.d. Given j,k∈ the event (j = w'_k+1), implies that γ^w_2k+1 = (g_mj-m, …, g_mj-1). Moreover, the value of (g_mj-m, …, g_mj-1) is independent of (x_n) so it is independent of w'_k+1. Therefore, we have γ^w_2k+1∼ν̃_s conditionally to the event (j = w'_k+1) so γ^w_2k+1∼ν̃_s. Given k∈ and given j := w'_k, we have γ^w_2k = (h_jm, …, h_(j+w'_k-1)m-1) and w'_k - 1 ∼𝒢_1 - α and it is independent of the random sequence h so γ^w_2k∼(ν^⊗ m-αν̃_s/1-α)^⊙𝒢_1-α. 
We have shown that the distribution of (γ^w_k) is (κ̃⊗ν̃_s)^⊗. We also proved in the previous paragraph that (γ_n) ∼ν^⊗. We also note that the w_k's all have a bounded exponential moment. Therefore (κ̃⊗ν̃_s)^⊗ is an extraction of ν^⊗ in the sense of Definition <ref>. Now we show (<ref>). Note that ν^⊗ m-αν̃_s/1-α≤ν^⊗ m/1-α so for all j <m, we have (χ_j^m)_*( ν^⊗ m - αν̃_s/1 - α) ≤ν/1-α. Moreover, given g̃∼κ̃, given l ∈ and given k < lm, the distribution of g̃ conditioned to ( L(g̃) = lm ) is (ν^⊗ m - αν̃_s/1-α)^⊗ l. Therefore, the conditional distribution of χ^l_k(g̃) is (χ_k- m⌊k/m⌋^m)_*(ν^⊗ m - αν̃_s/1 - α) ≤ν/1 - α. Let Γ be a measurable semi-group, let Å be a binary relation on Γ and let S ⊂Γ be measurable. Let ν be a probability measure on Γ, let 0< α <1, let 0< ρ <1/5 and let m ∈. Let ν̃_s be a probability measure on Γ^m such that αν_s≤ν^⊗ m and let ν_s := Π_*ν̃_s. Assume that ν_s is supported on S and ρ-Schottky for Å. Then there exists a distribution κ̃ on Γ such that (κ̃⊗ν̃_s)^⊗ is an extraction of ν and L_*κ̃{k} = m+_*m×_*(1+_*𝒢(1-α))^*(1+_*𝒢(2ρ)). Moreover, for κ̃ almost all g̃∈Γ, there exist g̃_1,g̃_2,g̃_3 ∈Γ such that g̃ = g̃_1 ⊙g̃_2 ⊙g̃_3 and Π(g̃_2) ∈ S and Π(g̃_1) ÅΠ(g̃_2) ÅΠ(g̃_3) and for all measurable A ⊂Γ∖⋃_k = 0^m-1χ_k^m(𝐬𝐮𝐩𝐩(ν̃_s)), we have almost surely: ∀ l ∈ m_≥ 1,∀ 0≤ n < l, (χ_n^l(g̃)∈ A | L(g̃) = l) ≤ν(A)/1-α. Let κ̃_0 : = (ν^⊗ m-αν̃_s/1-α)^⊙𝒢_1-α be as in Lemma <ref>. Let (γ_n) ∼ν^⊗ and let (w_n)_n∈ be a random sequence of integers on the same probability space such that (γ^w_k) ∼(κ̃_0 ⊗ν̃_s)^⊗. Let 𝒰_[0,1] be the uniform probability measure on the interval [0,1] and let (τ_j)_j∈∼𝒰_[0,1]^⊗. Assume that the random sequences γ^w and τ are independent. We define the penalty function: P_ν_s : Γ^3 ⟶ [0,1]; (f,g,h) ⟼1_fÅ gÅ h/ν_s{γ∈Γ | fÅγÅ h}(1-2ρ). We check that P_ν_s≤ 1 because ν_s is ρ-Schottky for Å. Note also that for γ∼ν_s and for all non random g,h ∈Γ, one has (P_ν_s (f, γ, h)) = 1-2ρ. Hence, for all k∈, for all random f,h∈Γ that are independent of (τ_k, γ^w_2k+1), we have (τ_k < P_ν_s(f,γ^w_2k+1,h))= 1- 2ρ by definition of P_ν_s. Moreover τ_k < P_ν_s(f,γ^w_2k+1,h) ⇒ f Åγ^w_2k+1Å h. Now we define: k_γ, w^τ := min{k ∈ | τ_k < P_ν_s(γ^w_2k+1,γ^w_2k+1,γ^w_2k+2) }. Then for all k ∈, we have: (k_γ, w^τ = k | (γ^w_2k')_k'∈, k_γ, w^τ≥ k) = (P_ν_s(γ^w_2k+1,γ^w_2k+1,γ^w_2k+2) | (γ^w_2k')_k'∈) = 1 - 2ρ. Therefore k_γ, w^τ∼𝒢_(2ρ) and k_γ, w^τ is independent of (γ^w_2k)_k∈ . Let j_γ, w^τ := w_2k_γ, w+3. Then j_γ, w^τ∼ m +_* m ×_* (1 +_* 𝒢(1 - α))^* (1 +_* 𝒢(2ρ) ) so j_γ, w^τ has finite exponential moment by Lemma <ref>. Let κ̃ be the distribution law of g̃^τ_γ,w : = (γ_0,⋯,γ_j_γ, w^τ - 1). It follows from the definition that g̃_γ,w^τ = (_i =0^2kγ^w_i)⊙γ^w_2k+1⊙γ^w_2k+2 and Π(_i =0^2kγ^w_i)ÅΠ (γ^w_2k+1)ÅΠ(γ^w_2k+2) for k = k_γ,w^τ. Moreover Π (γ^w_2k+1) ∼ν_s so Π (γ^w_2k+1)∈ S almost surely. Note also that k_γ,w^τ is constructed as a stopping time for the sequence (τ_k, γ^w_2k+1)_k∈ and it is independent of (γ^w_2k)_k∈, so the conditional distribution of the random sequence (γ^w_k+2k_γ, w +3)_k∈ knowing g̃_γ,w is (ν̃_s ⊗κ̃_0)^⊗. Hence, we have κ̃⊙ν̃_s ⊙(κ̃_0 ⊙ν̃_s)^⊙ = (κ̃_0 ⊙ν̃_s)^⊙ so (κ̃⊙ν̃_s)^⊙ = (κ̃_0 ⊙ν̃_s)^⊙ = ν^⊗. Now let A ⊂Γ∖⋃_k = 0^m-1χ_k^m(𝐬𝐮𝐩𝐩(ν̃_s)) be measurable and let l ∈ m _≥ 1. Let q ∈ and let x_0, …, x_q+1∈ m be such that q m + ∑_i = 0^q+1 x_i = l. Now we work on the sub-probability space (Ω', '), (where ' is short for ^(l, q, (x_i))), defined as Ω' := (j_γ,w^τ = l)∩ (k_γ,w^τ = q) ∩⋂_i = 0^q+1 (w_2i = x_i) and ' := /(Ω'). Let n < l. 
We claim that '(γ_n ∈ A) ≤1/1-αν(A). If there exists an integer k ≤ q such that w_2k+1≤ n < w_2k+2, then '(γ_n ∈ A) = 0. Otherwise, let k ≤ q + 1 be such that w_2k≤ n < w_2k+1, then we have ' almost surely γ_n = χ^x_k_n-w_2k (γ^w_2k) and k_γ,w is independent of (γ^w_2j)_j∈, so the distribution of γ_n for ' is bounded by ν/1-α by (<ref>) in Lemma <ref>. Then we have (γ_n ∈ A | j_γ,w^τ = l) ≤max_q,(x_i)'(A) ≤1/1-αν(A) and this is true for all l ∈ m_≥ 1. Therefore, we have (<ref>). §.§ Construction of the aligned extraction The following definition describes the pivot algorithm. Starting from an already merged sequence (γ^w_n), we will merge some words recursively. At each step j, we only look at (γ^w_0, γ^w_1, …, γ^w_2j) and merge some of them together. We will denote by (p^k_j)_k∈ the sequence of waiting times (or lengths) at step j, starting with p^0 =w. We will denote by 2m_j + 1 the number of words left after merging (γ^w_0, γ^w_1, …, γ^w_2j). At each step, we make sure that (γ^p^j_k)_k ≤ m_j satisfies Theorem <ref>. It means that every oddly indexed block is a single oddly indexed word and that its distribution relatively to the merging process is still a Schottky distribution. The merging process consists in backtracking when the right alignment conditions are satisfied. [Weighted Pivot algorithm] Let Γ be a measurable semi-group endowed with a measurable relation Å. Let ν_s be a probability distribution on Γ that is ρ-Schottky for Å. We define the ν_s penalty functions. P_ν_s: Γ^3 ⟶ [0,1]; (f,g,h) ⟼1_fÅ gÅ h/ν_s{γ∈Γ | fÅγÅ h}(1-2ρ), P'_ν_s: Γ^4 ⟶ [0,1]; (f,g,h, h')⟼1_fÅ gÅ h1_gÅ h'/ν_s{γ∈Γ | fÅγÅ h and γÅ h'}(1-3ρ). Let (γ_n)∈Γ^, let (w_k)∈_≥ 1^ and let (τ_k)∈[0,1]^ be non-random sequences. Let (p_k^j)_j∈, k∈∈_≥ 1^^2. Assume that for all j ∈, there is an even integer m_j such that p_2m_j+1^k := ∑_k = 0^2m_j p^j_k = w_2j +1 and let (m_j)_j∈ be such a sequence. Given j,k ∈, we write l_k^j := max{l ≤ j | m_l ≤ k}. For all k ∈, we write l_k := sup{l ∈ | m_l ≤ k} for the time of the last visit in k. We say that (p_k^j) is the family of length of the pivotal blocks associated to the sequence (γ^w_k) with weights (τ_k) if: * For all j ∈, we have (p^j_k + 2m_j +1)_k∈ = (w_k + 2j +1)_k∈ and {p^j+1_k | k ∈}⊂{p^j_k | k ∈}. Note that it implies that m_0 =0 and that p^0_k = w_k for all k ∈. * For all j ∈, we have (p^j+1_k)_k∈ = (p^j_k)_k∈ and m_j+1 = m_j + 1 if and only if: τ_j < P_ν_s(γ^p^j_2m_j,γ^p^j_2m_j + 1,γ^p^j_2m_j + 2). * For all j ∈ such that (p^j+1_k)_k∈≠(p^j_k)_k∈, we have (p^j+1_k)_0≤ k < 2m_j+1 = (p^j_k)_0 ≤ k < 2m_j+1 and m_j+1 = max({k < m_j | τ_l^j_k< P'_ν_s(γ^p^j_2k, γ^p^j_2k+1, γ^w_2l_k^j+2,γ^p^j_2k+2⋯γ^p^j_2m_j+2)}∪{0}). If p^j_k converges to a limit p_k for all k, as j → +∞, then we say that (p_k)_k∈ is the sequence of pivotal times associated to γ^w with weights τ. Let us illustrate the first steps of the algorithm on an example. Initially, the letters are grouped into blocks of length p^0_0 = w_0, p^0_1 =w_1, p^0_2 = w_2, …. For simplicity, we will take all w_k equal to 1 in our example. With our previous construction, this happens when ν_s = ν, note also that the identity must be aligned with everyone. The important thing to note is that in that case, all words are in S. That way, for all 0 ≤ k ≤ j, we have p^j_2k+1 = 2l^j_k+1. We will describe the construction for j ∈{0,1,2,3,4}. 
For that construction, we only look at the first 11 words, which each consist of a single letter, an element of a semi-group (a semi-group of matrices in our case): (γ_0); [γ_1], (γ_2), [γ_3], (γ_4), [γ_5], (γ_6), [γ_7], (γ_8), [γ_9], (γ_10). We mark with brackets (instead of the usual parentheses for words) the candidate pivotal times; at step j = 0 they are all the oddly indexed times. At all times, the word within brackets will have a single letter. We mark our position with a semicolon. At all times, all the words that are on the left of this semicolon are aligned and the oddly indexed ones are candidate pivotal times; there are m_j of them. At step j = 0, we will add γ_1 and γ_2. We check whether τ_0 < P_ν_s (γ_0, γ_1, γ_2), which is a proxy for γ_0 Åγ_1 Åγ_2 but with a controlled conditional probability, constant and equal to 1 - 2ρ. If this condition fails (which is always the case when the above alignment condition is not satisfied), then we merge (γ_0, γ_1, γ_2) into a single word. Then there is nothing more to check because there is no candidate k < m_1 for m_1 to satisfy (<ref>). In this case, m_1 = 0, so there are no candidate pivotal times left of the semicolon and the newly merged sequence is the following: (γ_0,γ_1,γ_2); [γ_3], (γ_4), [γ_5], (γ_6), [γ_7], (γ_8), [γ_9], (γ_10),… Then at step j = 1, we check whether τ_1 < P_ν_s (γ_0γ_1γ_2, γ_3, γ_4). Assume that this condition holds. This implies that we have the alignment γ_0γ_1γ_2 Åγ_3 Åγ_4. Then we do not merge any block and move on, with m_2 = 1. In this case, we have p^2_0 = 3, p^2_1 = 1, p^2_2 =1, p^2_3 =1, p^2_4= 1, … and the new sequence is the following: (γ_0,γ_1,γ_2), [γ_3], (γ_4); [γ_5], (γ_6), [γ_7], (γ_8), [γ_9], (γ_10),… Note that it is useless to specify that p^2_1 =1 and p^2_3 =1 or that p^2_2k +1 = 1 for all k ∈, because the oddly indexed blocks are the ones in brackets and they always have length 1. It is also useless to mention that p^2_4 = 1, because it is the length of the block (γ_6), which we have not yet considered. At step j = 2, we check whether τ_2 < P_ν_s (γ_4, γ_5, γ_6). Assume that this holds. Then m_3 = 2 and the new sequence is: (γ_0,γ_1,γ_2), [γ_3], (γ_4), [γ_5], (γ_6); [γ_7], (γ_8), [γ_9], (γ_10),… By construction, we have γ_0γ_1γ_2Åγ_3 Åγ_4 Åγ_5 Åγ_6. At step j = 3, we check whether τ_3 < P_ν_s (γ_6, γ_7, γ_8). Assume that this time the condition fails. Then we have to backtrack to the previous candidate pivotal time: γ^p^3_2 m_3 - 1, which is simply γ_5. In other words, we look at the definition of m_j+1 (<ref>) with j = 3 and k = 1, for which l^j_k = 2 and γ^p^j_2k+1 = (γ_5). We check whether τ_2 < P'_ν_s(γ_4, γ_5, γ_6,γ_6γ_7γ_8), which is a proxy for γ_5 Åγ_6γ_7γ_8 but with a controlled conditional probability, constant and equal to 1-3ρ/1-2ρ. Indeed, we already know that τ_2 < P_ν_s(γ_4, γ_5, γ_6) from the previous step of the construction. Assume that this holds. By (<ref>), this means that m_4 = 1 and the only candidate pivotal time left is [γ_3]. The newly merged sequence becomes: (γ_0,γ_1,γ_2), [γ_3], (γ_4,γ_5,γ_6,γ_7,γ_8); [γ_9], (γ_10),… Note that γ_3 Åγ_4 Åγ_5 Åγ_6γ_7γ_8 and γ_4∈ S, so γ_3 Å^S(γ_4); moreover γ_5 ∈ S, so γ_3 Å^S(γ_4,γ_5,γ_6,γ_7,γ_8). Moreover, we still have γ_0γ_1γ_2 Åγ_3. At the next step (j=4), we check whether τ_4 < P_ν_s (γ_4γ_5γ_6γ_7γ_8,γ_9,γ_10), which is a proxy for γ_4γ_5γ_6γ_7γ_8Åγ_9 Åγ_10. Assume that this holds. Then m_5 = 2 and the newly merged sequence is (γ_0,γ_1,γ_2), [γ_3], (γ_4,γ_5,γ_6,γ_7,γ_8), [γ_9], (γ_10);… It may feel a bit frustrating to lose two pivotal times instead of only one: we had the alignment γ_4 Åγ_5 Åγ_6γ_7γ_8, so why not keep [γ_5] as a pivotal time? The issue with that is that we do not have any control over γ_6γ_7γ_8. For example, it may be the identity, which is aligned with everyone. In that case the alignment condition γ_5 Åγ_6γ_7γ_8 Åγ_9 is trivial and does not tell us anything about the product γ_5γ_6γ_7γ_8γ_9, which may again be the identity. Therefore, we really need to discard this pivotal time in order to be able to use Proposition <ref>. The other issue is that even if we somehow got rid of that problem, then [γ_5] would not have the same probabilistic behaviour as the other pivotal times. Indeed, knowing the construction up to step 4, it satisfies 3 alignment conditions. Note also that the first block (γ_0,γ_1,γ_2) has a particular status because we do not have any information about its structure and only know that its product is aligned with γ_3. We need the index l^j_k in (<ref>) because if we ever backtrack to [γ_3] = γ^p^5_1 for example, then the alignment condition on γ_3 is not with the merged word (γ_4,γ_5,γ_6,γ_7,γ_8) but only with the first sub-word, namely (γ_4), so we need to keep track of its index; this is the role of 2l^5_0 + 2 = 4, and l^5_0 = 1 is indeed the last step at which we had m_j = 0. Let us recap in the next remark basic properties of the algorithm that follow readily from its definition. Let (γ_n)∈Γ^, let (w_k)∈_≥ 1^ and let (τ_k)∈[0,1]^ be non-random sequences. Note that the family of lengths of the pivotal blocks (p^j_k) in the sense of Definition <ref> is unique and we can construct it by induction. Moreover, the map ((γ_n),(w_k),(τ_k))↦(p^j_k) is measurable. Let (p^j_k) be the family of lengths of the pivotal blocks, and let (m_j)_j∈ and (l^j_k)_j∈, 0≤ k ≤ m_j be as in Definition <ref>. By induction, we can easily check that the following facts hold: * For all j≤ j'∈, we have l^j_j' = j. * For all j ∈, and for all 0≤ k < m_j, we have γ^p^l^j_k_2kÅγ^p^l^j_k_2k +1Åγ^p^l^j_k_2k + 2 (because m_l^j_k+1 > m_l^j_k by definition of l^j_k). * For all j ∈, and for all 0≤ k < m_j, we have γ^p^j_k'= γ^p^j'_k' for all l^j_k ≤ j' ≤ j and all 0 ≤ k' ≤ 2k +1. * For all j ∈, we have γ^p^j_2m_j+2 = γ^w_2j+2. Hence, for all k < m_j, we have γ^p^l^j_k_2k + 2 = γ^w_2l_k^j+2. * For all j ∈, and for all 0≤ k < m_j, we have γ^p^j_2kÅγ^p^j_2k+1Åγ^w_2l_k^j+2. * The family (l^j_k)_j,k is determined by the data of the sequence (m_j)_j. * The family (p^j_k)_j,k is determined by the data of the sequences (m_j)_j and (w_k)_k. Let (γ_n)∈Γ^, let (w_k)∈_≥ 1^ and let (τ_k)∈[0,1]^ be non-random sequences. Let Å be a binary relation on Γ and let S ⊂Γ. Assume that for all k ∈, there exist three sub-words (g̃_0,g̃_1,g̃_2) such that γ^w_2k+2 = g̃_0 ⊙g̃_1 ⊙g̃_2 and Π(g̃_0)ÅΠ(g̃_1) ÅΠ(g̃_2) and Π(g̃_1) ∈ S and γ^w_2k+1∈ S. Let (p^j_k)_j,k be the family of lengths of the pivotal blocks in the sense of Definition <ref> and let (m_j)_j be as in Definition <ref>. For all j ∈, and for all 0 ≤ k < m_j, we have γ^p^j_2k+1Å^Sγ^p^j_2k+2. We prove the claim by induction. Assume that for all j' ≤ j, and for all 0 ≤ k < m_j', we have γ^p^j'_2k+1Å^Sγ^p^j'_2k+2. If m_j+1 = m_j +1, then τ_j < P_ν_s(γ^p^j_2m_j,γ^p^j_2m_j + 1,γ^p^j_2m_j + 2). Therefore γ^p^j_2m_jÅγ^p^j_2m_j + 1Åγ^p^j_2m_j + 2, so we have γ^p^j_2m_j + 1Åγ^p^j_2m_j + 2.
For smaller values of k, we use the induction hypothesis. If 0 < m_j+1 < m_j we have γ^p^j_2m_j+1Åγ^p^j_2m_j+1+1Åγ^p^j_2k+2⋯γ^p^j_2m_j+2 and by induction hypothesis, we have γ^p^j_2m_j+1 - 1Å^S γ^p^j_2m_j+1+1 and γ^p^j+1_2m_j+1 - 1 = γ^p^j_2m_j+1 - 1 and γ^p^j+1_2m_j+1 = γ^p^j_2m_j+1⊙γ^p^j_2m_j+1+1⊙γ^p^j_2k+2⋯γ^p^j_2m_j+2. Moreover γ^p^j_2m_j+1+1 is equal to one of the γ^w_2k+1 for some k ∈, therefore γ^p^j_2m_j+1+1∈ S by assumption so γ^p^j_2m_j+1 - 1Å^S γ^p^j+1_2m_j+1 by definition of Å^S. Let Γ be a measurable semi-group endowed with a measurable relation Å. Let ν_s be a probability distribution on Γ that is ρ-Schottky for Å and let S := 𝐬𝐮𝐩𝐩(ν_s). Let (γ^w_n)_n∈∈Γ^, and let (τ_j)_j∈∼𝒰_[0,1]^⊗ be independent random sequences defined on the same probability space. Assume that (w_2k+1)_k∈ is almost surely equal to a non-random constant. Assume also that the sequences (γ^w_2k)_k∈ and (γ^w_2k+1)_k∈ are independent. Assume that (γ^w_2k+1)_k∈∼ν_s^⊗. Let (p^j_k) be the random family of lengths of the pivotal blocks associated to γ^w with weights τ and let (l^j_k) and (m_j) be as in Definition <ref>. Then for all j ∈, and for all 0 ≤ k ≤ m_j-1, we have: (m_j+1=m_j +1 | (γ^w_2k)_k∈, (m_j')_j'≤ j) = 1-2ρ, (m_j+1 < m_j-k | (γ^w_2k)_k∈, (m_j')_j'≤ j) = 2ρ(ρ/1-2ρ)^k. First note that for all f,g,h,h', we have 0 ≤ P'_ν_s(f,g,h, h') ≤ P_ν_s(f,g,h) ≤ 1. Moreover, given a random γ∼ν_s and given g, h, h' ∈Γ non-random or independent of γ, we have (P_ν_s(f,γ,h)) =1-2ρ and (P'_ν_s(f,γ,h,h')) =1-3ρ. Let j ∈ and let k < m_j. We have: τ_l^j_k< P_ν_s(γ^p^j_2k, γ^p^j_2k+1, γ^w_2l_k^j+2). Indeed, by definition of l^j_k, we have m_l^j_k+1 > m_l^j_k. Therefore: τ_l^j_k < P_ν_s(γ^p^l^j_k_2k, γ^p^l^j_k_2k +1, γ^p^l^j_k_2k + 2). Moreover, for all l^j_k < j' ≤ j, we have m_j' > k so (γ^p^j_k')_k' ≤ 2k+1=(γ^p^l_k^j_k')_k' ≤ 2k+1. Hence, we have γ^p^l^j_k_2k = γ^p^j_2k and γ^p^l^j_k_2k +1 = γ^p^j_2k +1. Moreover, we have γ^p^l^j_k_2k + i = γ^w_2l^j_k + i for all i ≥ 1. Now Let γ, w, τ be random sequences as in Lemma <ref>. Given j ∈, let 𝒫_j be the σ-algebra generated by (p^j'_k)_k∈, j' ≤ j and (γ^p^j_2k)_k∈. Then (𝒫_j)_j∈ is a filtration. Let i, j ∈, we claim that the conditional distribution of γ^p^j_2m_j + 2 i +1 relatively to 𝒫_j is almost surely ν_s. We prove the claim by induction. For j = 0, we assumed that (γ^w_2k+1)_k∈∼ν_s^⊗, and that (γ^w_2k+1)_k∈ is independent of (γ^w_2k)_k∈, now since (w_2k+1) is non-random, the sequence (γ^w_2k)_k∈ determines (p^0_k) = (w_k), hence it generates 𝒫_0 which proves the claim. Given j ∈, we have (γ^p^j+1_2m_j+1 + 2 i + 1)_i = (γ^p^j_2m_j + 2 i + 3)_i∈ and the construction of (p^j+1_k)_k from (p^j_k)_k does not depend on (γ^p^j_2m_j + 2 i + 3)_i∈. Therefore, by induction on j, we have (γ^p^j_2m_j + 2 i + 1)_i∈∼ν_s^⊗. By the same argument, we show by induction on j that conditionally to 𝒫_j, we have: ∀ j∈, ((γ^p^j_2m_j + 2 i + 1,τ_2j+2i+1))_i∈∼(ν_s ⊗𝒰_[0,1])^⊗. Taking i = 0, we have (τ_2j+1< P_ν_s(γ^p^j_2m_j,γ^p^j_2m_j + 1,γ^p^j_2m_j + 2) | 𝒫_j) = 1 - 2ρ. Hence we have (<ref>). Now let j ∈ be fixed, and let q : Ω→{0, …, m_j - 1} be a 𝒫_j-measurable random variable. Then, we have almost surely γ^p^j_2 qÅγ^p^j_2 q + 1Åγ^w_2 l^j_q + 2 the conditional distribution of γ^p^j_2 q + 1 relatively to 𝒫_j is exactly the rescaled restriction of ν_s to {γ∈Γ | γ^p^j_2 qÅγÅγ^w_2 l^j_q + 2}. It means that for all A : Ω→𝒜_Γ which is 𝒫_j-measurable, we have: (γ^p^j_2 q + 1∈ A | 𝒫_j) = ν_s(A ∩{γ∈Γ | γ^p^j_2 qÅγÅγ^w_2 l^j_q + 2})/ν_s{γ∈Γ | γ^p^j_2 qÅγÅγ^w_2 l^j_q + 2}. 
Let h' : Ω→Γ be a random variable which is independent of γ^p^j_2 q + 1 relatively to[We say that two events are independent relatively to a σ-algebra if the conditional probability of their intersection is almost surely equal to the product of their conditional probabilities. We say that two random variables are relatively independent if their level sets are. By Bayes' formula, it implies that the conditional distribution of one with respect to the other and said σ-algebra is almost surely equal to its conditional distribution with respect to the σ-algebra alone.] 𝒫_j. Then by (<ref>), and because 1_γ^p^j_2 qÅγ^p^j_2 q + 1Åγ^w_2 l^j_q + 2 = 1 almost surely, we have: (P'_ν_s(γ^p^j_2 q,γ^p^j_2 q + 1,γ^w_2 l^j_q + 2,h') | 𝒫_j, h') = (1-3ρ) 1_γ^p^j_2 q + 1Å h'/ν_s{γ∈Γ | γ^p^j_2 qÅγÅγ^w_2 l^j_q + 2 and γÅ h'} = (1-3ρ) (γ^p^j_2 q + 1Å h' | 𝒫_j, h')/ν_s{γ∈Γ | γ^p^j_2 qÅγÅγ^w_2 l^j_q + 2 and γÅ h'} = 1-3ρ/ν_s{γ∈Γ | γ^p^j_2 qÅγÅγ^w_2 l^j_q + 2} = 1-3ρ/1-2ρP_ν_s(γ^p^j_2 q,γ^p^j_2 q + 1,γ^w_2 l^j_q + 2). Note that even though γ^p^j_2 q + 1 is not 𝒫_j measurable, its only role in the computation of P_ν_s(γ^p^j_2 q,γ^p^j_2 q + 1,γ^w_2 l^j_q + 2) is through a 0-1 indicator function that we know to be equal to 1. So P_ν_s(γ^p^j_2 q,γ^p^j_2 q + 1,γ^w_2 l^j_q + 2) is indeed a 𝒫_j measurable quantity. Moreover, the conditional distribution of τ_l^j_q with respect to 𝒫_j and γ^p^j_2 q + 1 is almost surely uniform in [0 , P_ν_s(γ^p^j_2 q,γ^p^j_2 q + 1,γ^w_2 l^j_q + 2)]. Indeed, for all l ≤ j and for every constant k ≤ l, the conditions to have l_k^j = l are: * We have m_l = k. Note that this event only depends on (γ^w_k')_0≤ k' ≤ 2l and (τ_l')_0 ≤ l' < l, hence it is independent of (γ^w_2l+1, τ_l). * For all l < j' ≤ j, we have m_j' > k. Note that this event only depends on (γ^w_k')_2l+2 ≤ k' ≤ 2j+2 and (τ_l')_l < l' ≤ j, hence it is independent of (γ^w_2l+1, τ_l). * We have τ_l < P_ν_s(γ^p^l_2 m_l,γ^w_2 l + 1,γ^w_2 l + 2). In particular, 0 < P_ν_s(γ^p^l_2 m_l,γ^w_2 l + 1,γ^w_2 l + 2); therefore, we have γ^p^j_2 kÅγ^p^j_2 k + 1Åγ^w_2 l^j_k + 2. Moreover τ_l and γ^w_2l+1 are independent so the distribution of γ^w_2 l + 1 knowing τ_l < P_ν_s(γ^p^l_2 m_l,γ^w_2 l + 1,γ^w_2 l + 2) is the rescaled restriction of ν_s to {γ∈Γ | γ^p^j_2 kÅγÅγ^w_2 l^j_k + 2} and τ_l and γ^w_2l+1 are still independent. The above argument implies moreover that the family ((τ_l^j_k,γ^p^j_2k+1))_0 ≤ k < m_j is independent with respect to 𝒫_j, in the sense that there is a 𝒫_j measurable family of distributions (η_k, κ_k)_0 ≤ k < m_j∈Prob() ×Prob() such that: ((τ_l^j_k,γ^p^j_2k+1))_0 ≤ k < m_j∼∫_Ω⊗_k = 0^m_j-1 (η_k⊗κ_k)d. Therefore, for all j ∈ and for all k ≤ m_j, we have: (τ_l^j_k< P'_ν_s(γ^p^j_2k, γ^p^j_2k+1, γ^w_2l_k^j+2,γ^p^j_2k+2⋯γ^p^j_2m_j+2) | 𝒫_j,((τ_l^j_k',γ^p^j_2k'+1))_k < k' < m_j) = 1-3ρ/1-2ρ. By induction on k, we have (<ref>). Let Γ be a measurable semi-group, let Å be a measurable binary relation on Γ and let S ⊂Γ be measurable. Let 0< α < 1, let 0 < ρ < 1/5 and let m ∈. Let ν̃_s be a probability distribution on Γ^m and let ν_s := Π_*ν̃_s. Assume that ν_s is ρ-Schottky for Å and supported on S. Let κ̃ be as in Lemma <ref>, let (Ω,) be a probability space and let ((γ^w_n)_n,(τ_k)_k)∼(κ̃⊗ν̃_s)^⊗⊗𝒰_[0,1]^⊗. Let (p^j_k)_j,k be the random family of lengths of the pivotal blocks associated to γ^w with weights τ and let (m_j) and (l^j_k) be as in Definition <ref>. Let j ∈.
By Lemma <ref>, we have: (m_j+1 | (m_j')_j'≤ j) = m_j + (1-2ρ) - 2ρ∑_k=0^m_j-1(ρ/1-2ρ)^k = m_j + (1 - 2ρ) - 2 ρ1-2ρ/1-3ρ + 2 ρ1-2ρ/1-3ρ(ρ/1-2ρ)^m_j = m_j + (1 - 2ρ) 1-5ρ/1-3ρ + 2 ρ1-2ρ/1-3ρ(ρ/1-2ρ)^m_j Note that (1 - 2ρ) 1-5ρ/1-3ρ > 0. By Lemma <ref> applied to (m_j)_j∈, there are constants C, β >0 such that (m_j≤ 0)≤ Cexp(-β j) for all j ∈. Hence l_0 is almost surely finite and has finite exponential moment because (l_0 = j) ≤Cexp(-β j) for all j ∈. Now let 0 ≤ q ≤ l ∈ be fixed and let j ≥ l. We claim that: (m_j+1 | (m_j')_j'≤ j, l_q=l) ≥(m_j+1 | (m_j')_j'≤ j). Indeed, assume that the values of (m_j')_j'≤ j are fixed and that l_q^j = l. Then l_q = l if and only if there is no j' > j such that m_j'≤ q. We claim that: ∀ k ≤ k', (l_q = l_q^j | (m_j')_j'≤ j,m_j+1 = k) ≤(l_q = l_q^j | (m_j')_j'≤ j,m_j+1 = k'). Note that (<ref>) implies that (m_j+1≥ k | (m_j')_j'≤ j, l_q=l) ≥(m_j+1≥ k | (m_j')_j'≤ j) almost surely and for all k, hence we have (<ref>). Now we prove (<ref>). Let η be the probability measure on such that η{1} = 1-2ρ and η{-k} = 2ρ1-3ρ/1-2ρ(ρ/1-2ρ)^k-1 for all k ≥ 1. Let (r_j)∼η^⊗ be a random sequence defined on a probability space (Ω','). Define (m'_j) by induction, taking m'_0 = 0 and m'_j+1 := max{0,m'_j+r_j} for all j. With that construction, all the formerly defined random variables are defined on the coupling of (Ω',') and (Ω, ) relatively to m' = m. From now on we work on that coupling. Then for all q ≤ j ∈, we have l_q = l_q^j if and only if ∑_k = j^j'-1 r_k≥ 1+q-m_j for all j' > j. Hence, for all 0< k ≤ j ∈, we have l_q = l_q^j and m_j+1 = k if and only if ∑_k' = j+1^j'-1 r_k'≥ 1+q-k for all j' > j+1 and m_j+1 = k. Moreover, the events ( ∀ j' > j+1,∑_k' = j+1^j'-1 r_k'≥ 1+q-k) and m_j+1 = k are independent so: (l_q = l_q^j | (m_j')_j'≤ j,m_j+1 = k) = ( ∀ j' > j+1,∑_k' = j+1^j'-1 r_k'≥ 1+q-k). This makes (<ref>) obvious. Then by Lemma <ref> applied to (m_j), there exist constants C, β > 0 such that (l_q+1-l_q = j) ≤ C exp(-β j) for all j and for all q. Moreover, the distribution of l_q+1-l_q does not depend on q and the family (l_q+1-l_q)_q∈ is i.i.d. and independent of l_0. Now let v_0 := l_0 and for all q ∈, let v_2q+2 := 2(l_q+1-l_q)-1 and let v_2q+1 := 1. Let p := w^v. Then note that p^j converges to p almost surely, as j → +∞, for the simple convergence topology. By Lemma <ref>, the random sequence (v_q) is independent of (w_k) so the sequences (p_2k+1) and (p_2k+2) are i.i.d. and independent of each other and of p_0. Moreover, by Lemma <ref>, each p_k has finite exponential moment. Let μ̃ be the distribution of γ^p. We have just proven (<ref>). Let k ∈; we want to show (<ref>) in Theorem <ref>, which states that the conditional distribution of γ^p_2k+1 relatively to (γ^p_k')_k'≠ 2k+1 is bounded above by ν̃_s/1-2ρ. Let j ∈. Saying that l_k= j is equivalent to saying that τ_j< P_ν_s(γ^p_2k,γ^w_2j+1,γ^w_2j+2), that m_j = k and that m_j' > k for all j' > j. Once we assume that τ_j < P_ν_s(γ^p_2k,γ^w_2j+1,γ^w_2j+2), the conditions m_j = k and m_j' > k for all j' > j can be expressed in terms of (γ^w_k')_k'≠ 2j+1. Moreover, once we assume that l_k =j, the random sequence (γ^p_k')_k'≠ 2k+1 is the image of the random sequence (γ^w_k')_k'≠ 2j+1 by a measurable function (which is defined on the set l_k = j). Hence, the distribution of γ^p_2k+1 knowing l_k = j and (w_k)_k∈ and (γ^p_k')_k'≠ 2k+1 is 1_Π^-1(A')/ν_s(A')ν̃_s for A' : = {g∈Γ | γ^p_2kÅ g Åγ^w_2j+2}. By the Schottky property, we have ν_s(A') ≥ 1-2ρ. This proves (<ref>). Let n∈. Let q := max{k∈ | w_k≤ n} and let r := n-w_q.
We claim that the conditional distribution of γ_n knowing (p_k^j)_k,j and knowing that q is even is (χ_r^w_q)_*κ. We prove by induction on j'∈ that the conditional distribution of γ_n knowing (p_k^j)_k,j ≤ j' and knowing that q is even is (χ_r^w_q)_*κ. For j' = 0, this comes from the definition of the random sequence γ^w. For larger j' ∈, note that the construction of (p_k^j)_k,j ≤ j'+1 from (p_k^j)_k,j ≤ j' only depends on events that are independent of (γ^w_2k)_k∈ and therefore independent of γ_n = χ_r^w_q(γ^w_q). Hence the conditional distribution of γ_n knowing (p_k^j)_k,j ≤ j'+1 and knowing that q is even is the conditional distribution of γ_n knowing (p_k^j)_k,j ≤ j' and knowing that q is even. Now let A ⊂Γ∖⋃_k=0^m-1χ_k^m(𝐬𝐮𝐩𝐩(ν̃_s)). We have (γ_n ∈ A | q∈ 2) =0 and knowing that q is odd, the conditional probability of (γ_n ∈ A) is (χ_r^w_q)_*κ(A), which is bounded above by ν(A)/1-α by (<ref>). This proves (<ref>) and concludes the proof of Theorem <ref>. §.§ Facts about ping-pong sequences Given (Ω,𝒜_Ω), and (Γ,𝒜_Γ) two measurable spaces, and γ: Ω→Γ a measurable map, we write ⟨γ⟩_σ := γ^*𝒜_Γ⊂𝒜_Ω for the σ-algebra generated by γ. [Ping-pong sequence] Let Γ be a semi-group, let Å be a measurable binary relation on Γ and let ρ∈(0,1). Let N∈∪{+∞} and let (γ_k)_0≤ k < N be a random sequence. We say that (γ_k) is ρ-ping-pong for Å if for all k∈ such that 0≤ 2k+1 < N, the conditional distribution of γ_2k+1 relatively to (γ_k')_k' ≠ 2k+1 is almost surely ρ-Schottky for Å. Then we say that the distribution of (γ_k)_0 ≤ k ≤ N is ρ-ping-pong for Å. [Pivoting technique] Let Γ be a semi-group, let Å be a measurable binary relation on Γ and let ρ∈(0,1). Let n∈ and let μ be a probability distribution on Γ^{0,…, 2n} that is ρ-ping-pong for Å. There exists a probability space (Ω, ) a random sequence (γ_k)_0≤ k ≤ 2n∼μ and and a random integer r∼𝒢_ρ such that γ_2n-2r-1Å(γ_2n-2r⋯γ_2n) or n≤ r and r and (γ_2k)_0≤ k≤ n are independent. Let ((γ_k)_0≤ k ≤ 2n,(τ_j)_0≤ j) ∼μ⊗𝒰_[0,1]^⊗. Given j ≥ n, we define P_j := 1-ρ. Given 0≤ j < n, we define: P_j := (1-ρ)1_Å(γ_2n-2j-1, γ_2n-2j⋯γ_2n)/( γ_2n-2j-1Å (γ_2n-2j⋯γ_2n) | (γ_k)_k≠ 2n-2j-1). Note that 0≤ P_j ≤ 1 almost surely because (γ_k) is ρ-ping pong. Moreover (P_j | (γ_k)_k≠ 2n-2j-1) = 1-ρ almost surely and P_j is independent of (τ_j')_j'∈. Therefore, we have: ∀ j∈, (τ_j < P_j | (γ_k)_k≠ 2n-2j-1,(τ_j')_j'≠ j) = 1-ρ. Moreover, for all j' < j∈, the random variable P_j' is measurable for ⟨(γ_k)_k ≥ 2n-2j'-2⟩, hence it is measurable for ⟨(γ_k)_k ≠ 2n-2j-1⟩. Therefore: ∀ j∈, (τ_j < P_j | (γ_2k)_0≤ k ≤ n, ∀ j' < j,τ_j'≥ P_j') = 1-ρ. Let r := min{j∈ | τ_j < P_j}. Assume that r < n, then P_r >0 so γ_2n-2r-1Å(γ_2n-2r⋯γ_2n). Then by (<ref>), we have (r ≥ j | (γ_2k)_0≤ k ≤ n) = ρ^j, almost surely and for all j. Hence r ∼𝒢_ρ and r is independent of (γ_2k)_0≤ k≤ n. Given N ∈, given Γ a semi-group, given γ_0,…, γ_N a finite sequence in Γ and given 0≤ j < i≤ N, we write γ_i⋯γ_j for γ_i⋯γ_Nγ_0⋯γ_j. [Cyclical pivoting technique] Let Γ be a semi-group, let Å be a measurable binary relation on Γ and let ρ∈(0,1). Let n∈ and let μ be a probability distribution on Γ^{0,…, 2n} that is ρ-ping-pong for Å. There exist a probability space (Ω, ), a random sequence (γ_k)_0≤ k ≤ 2n∼μ and an integer c∼𝒢_2ρ such that γ_2n-2c-1Å(γ_2n-2c⋯γ_2c) and (γ_2n-2c-2⋯γ_2c)Åγ_2c+1 or n ≤ 2c - 1 and c and (γ_2k)_0≤ k≤ n are independent. Let (Ω, ) := (Γ^2n+1×[0,1]^,μ⊗𝒰_[0,1]^⊗). Let ((γ_k)_0≤ k ≤ 2n,(τ_j)_0≤ j) ∼μ⊗𝒰_[0,1]^⊗. Given j ≥ n/2, we define P_j := 1-2ρ. 
Given 0≤ j < n/2, we define: P_j := (1-2ρ)1_Å(γ_2n-2j-1, γ_2n-2j⋯γ_2j)1_Å(γ_2n-2j-2⋯γ_2j,γ_2j+1)/( γ_2n-2j-1Å (γ_2n-2j⋯γ_2j)∩ (γ_2n-2j-2⋯γ_2j)Åγ_2j+1 | (γ_k)_k∉{2n-2j-1, 2j +1}). Note that 0≤ P_j ≤ 1 and (P_j | (γ_k)_k∉{2n-2j-1, 2j +1}) = 1-2ρ. Let c := min{j∈ | τ_j < P_j}. Then r∼𝒢_2ρ and c is independent of (γ_2k)_0≤ k≤ n. Let Γ be a semi-group, let Å be a binary relation on Γ and let S⊂Γ. Let (γ̃_k)_0≤ k ≤ 2n∈Γ^2n+1 and let γ_k :=Π(γ̃_k) for all 0≤ k ≤ 2n. Assume for the sake of the argument that the identity of Γ is aligned with everyone and not in S. Assume that for all 0≤ k≤ n, we have: γ_2kÅγ_2k+1Å^S γ̃_2k+2. In Lemma <ref>, we want to have γ_2n-2c-2⋯γ_2cÅγ_2c+1 instead of just γ_2n-2c⋯γ_2cÅγ_2c+1 because we have no control over the product γ_2n-2c⋯γ_2c (it may be the identity for example and the alignment would be meaningless). We however have control over the product γ_2n-2c-2⋯γ_2c. Indeed, if c < n/2 is such that γ_2n-2c-1Å(γ_2n-2c⋯γ_2c), then γ_2n-2c-3Å^S γ̃_2n-2c-2⊙⋯⊙γ̃_2c. In concrete cases, we have shown in Proposition <ref> that this implies a genuine alignment. §.§ Factorization of the pivotal extraction In this section, we prove Theorem <ref>. Given X and Y two measurable sets, we write χ_Γ: Y × X → Y and χ_X: Y × X → X for the first and second coordinate projections. [Factorization of the pivotal extraction] Let Γ be a measurable semi-group and let S ⊂Γ be measurable. Let M∈ and let (L_i)_1≤ i ≤ M and (R_j)_1≤ j ≤ M be two non-random families of disjoint measurable subsets of Γ. Let A ⊂{1,…, M}^2 and let Å := _(i,j)∈ A L_i × R_j. Let ν be a probability measure on Γ, let 0< α < 1, let 0< ρ <1/5 and let m ∈. Let ν̃_s be a probability measure on Γ^m such that αν_s≤ν^⊗ m and let ν_s := Π_*ν̃_s. Assume that ν_s is supported on S and ρ-Schottky for Å. Let μ̃ be as in Theorem <ref> and let (γ^w) ∼μ̃. Then there exists a Markov chain (x_n) on X ⊂{0, …, 2M}, with x_0 = 0, and a family (ν̃'_x)_x∈ X∈Prob( Γ× X )^X such that: * For all n ∈, the pair (γ^w_n, x_n+1) has distribution law ν̃'_x_n conditionally to (γ^w_k)_k < n and (x_k)_k ≤ n. * For all k ∈, one has x_2k+1∈{1, …, M} and x_2k+2∈{M+1,…, 2M}. * For all i∈{1,…, M}∩ X and all j∈{M+1,…, 2M}∩ X, one has ν̃'_i(Γ×{j}) > 0. * For all i∈{1,…, M}∩ X and all j∈{M+1,…, 2M}∩ X, the distribution: ν_i,j := Π_*(χ_Γ)_*(1_(χ_X=j)ν̃'_i/(χ_X)_* ν̃'_i{j}) is ρ/1-2ρ-Schottky. Let (x_n), (γ^w_n) be as in Lemma <ref> Note that items (<ref>) and (<ref>) imply that the supports of x_2k+2 and x_2k+3 do not depend on k. However, the support of x_1 may differ from the support of x_3. With that in mind, for all i∈{1,…, M}∩ X and all j∈{M+1,…, 2M}∩ X, the distribution ν_i,j is the distribution of γ^w_k knowing that x_k = i and x_k+1 = j for any k ∈{1,3,…} such that (x_k = i) > 0. Let κ̃ be as in Lemma <ref>. Let ((γ^w_n),(τ_k))∼(κ̃⊗ν̃_s)^⊗⊗𝒰_[0,1]^⊗ and let (p_k)_k∈ be the associated random sequence of pivotal times and let (l_k)_k∈ be as in definition <ref>. Let ϕ_L, ϕ_R :Γ→{1,…, M} be such that L_i = ϕ_L^-1{i} and R_i = ϕ_R^-1{i} for all i∈{1, …, M}. We define x_0 := 0 and we write ν̃'_0 for the distribution of (γ^p_0, ϕ_L(γ^p_0)). Given k ∈, we define: x_2k+1 := ϕ_L(γ^p_2k) and x_2k + 2 := M+ϕ_R(γ^w_2l_k+1). Note that for all g∈Γ, the set {h∈Γ | gÅ h} is determined by ϕ_L(γ) and the set {h∈Γ | hÅ g} is determined by ϕ_R(γ). Note also that by construction, for all integer k, the conditional distribution of (γ^p_k')_k' ≥ 2k+1 relatively to (γ^p_k')_k'≤ 2k and (τ_l')_l' < l_k only depends on x_2k+1 and not on k. 
However the distribution of x_2k+1 itself may depend on k. Given x ∈{1,…, M} a possible value for x_k, write ν̃'_x for the distribution of (γ^w_2k+1,x_2k+2) knowing x_2k+1 = x. Note that by construction ,this distribution does not depend on k. For all integer k, the distribution of (γ^p_k')_k'≥ 2k+2 relatively to (γ^p_k')_k'≤ 2k+1 and (τ_l')_l' ≤ l_k only depends on x_2k+2 and not on k. Given x ∈{M+1,…, 2M} a possible value for x_k, write ν̃'_x for the distribution of (γ^w_2k+2,x_2k+3) knowing x_2k+2 = x. Again this distribution does not depend on k. Then for all i ∈{1,…, M}∩ X and all j ∈{M+1,…, 2M}∩ X, the distribution ν_i,j is the distribution of γ^p_2k+1 knowing that ϕ_L (γ^p_2k) = i and M + ϕ_R(γ^w_2l_k+1) = j. This distribution is bounded above by ν/1-α by (<ref>) in Theorem <ref>. Now we can prove <ref> by taking an extraction. Let ρ∈(0,1/3) and let ρ' := ρ/1+2ρ∈(0,1/5). Let K ∈ and let K' = 2K. Without loss of generality, we assume that K ≥ 8. Let m∈, let 0 < ', α< 1 and let ν̃_s be as in Corollary <ref> applied to ν, ρ', K' and let ν_s := Πν̃_s. Then ν̃_s is compactly supported, and bounded above by ν^⊗ m/α. Moreover ν_s is ρ'-Schottky for Å^' and _*Π_*ν̃_s[K'|log(')| + K'log(2)]. Let := '/2 and let Å^'⊂Å⊂Å^ be a finitely described binary relation. Then K'|log(')| + K'log(2) = 2K |log()| + K log(2) ≥ K |log()| + K log(2) and ν_s is ρ'-Schottky for Å. Let M ∈, X⊂{0,…, 2M+1}, μ̃ and (ν̃'_x)_x∈ X be as in Lemma <ref>. Let (γ^p_k) ∼μ̃ and let (x_n) be the underlying Markov chain. Let i,j ∈ X be such that 0 < i ≤ M < j ≤ 2M. Let q_0 := min{q∈ | (x_q,x_q+1) = (i,j)}, and define by induction q_2k+1 = 1 and: q_2k+2 := min{q ≥q_2k+2 | (x_q,x_q+1) = (i,j)} - q_2k+2 for all k. Let κ̃_0 be the distribution of γ^p^q_0, let κ̃_1 be the distribution of γ^p^q_1 and let κ̃_2 be the distribution of γ^p^q_2. By the factorization property, we have (γ^p^q_k)∼κ̃_0⊗(κ̃_1⊗κ̃_2)^⊗, which proves point (<ref>). Then each q_k has bounded exponential moment because it is the hitting time of a finite Markov chain. Moreover each p_k has finite exponential moment so p_k^q has finite exponential moment for all q, therefore L_*κ̃_i has finite exponential moment for all i, this proves point (<ref>). By Proposition <ref>, we have γ^p^q_i⋯γ^p^q_j-1Å^/4γ^p^q_j⋯γ^p^q_k-1 for all 0≤ i≤ j≤ k, which proves (<ref>). Note also that κ̃_1 is the restriction of ν̃_s to {γ∈Γ | L_i ÅγÅ R_j}, which has measure at least 1-2ρ, hence κ_1 is ρ'/1-2ρ'-Schottky for Å, hence it is ρ-Schottky for Å^. Let i,j ∈ X that do not satisfy 0< i ≤ M < j ≤ 2M and such that (χ_X)_*ν̃'_i {j} > 0. The distribution ν̃_i,j := (χ_Γ)_*1_Γ̃×{j}/ν'_i(Γ̃×{j})ν̃'_i is absolutely continuous with respect to the distribution of (χ_2k^∞)_* μ̃ for k ∈ such that (x_2k = i) > 0. Therefore, by (<ref>) there is a constant C such that for all A ⊂Γ∖⋃_k = 0^m-1χ_k^m𝐬𝐮𝐩𝐩(ν̃_s), we have: ∀ k ≤ l, ∀ i∈{0,2}, κ̃_i(L^-1{l}∩ (χ_k^l)^-1(A)) ≤ C ν(A) L_*κ̃_i{l}. Now assume that Γ =GL(E). The set ⋃_k = 0^m-1χ_k^m𝐬𝐮𝐩𝐩(ν̃_s) is compact and N is a continuous function on Γ. Let B = max N(⋃_k = 0^m-1χ_k^m𝐬𝐮𝐩𝐩(ν̃_s)). Then with the notations of Definitions <ref> and <ref> the distributions (ζ_i,k,l) defined in Theorem <ref> are uniformly bounded by B ∧⌈ C N_*ν⌉. When ν is not supported on GL(E), we have N_*ν{+∞} > 0, therefore N_*ν dominates any probability distribution and point (<ref>) is trivial. However C can not be expressed in terms of (α, ρ, m) because we did not give an explicit formula for the distribution of the sequence (q_k). 
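To illustrate the role of the return times (q_k) above, here is a small numerical sketch (Python, for illustration only; the transition matrix and the marked transition (i,j) are arbitrary choices, not the ones produced by the construction above). It simulates a finite irreducible Markov chain, cuts the trajectory at the successive occurrences of a fixed transition (i,j), and checks empirically that the gaps between two consecutive cuts have an exponentially decaying tail; this is the elementary fact used above to bound the exponential moments of the q_k.

```python
# Illustrative sketch only: a toy finite Markov chain standing in for (x_n).
# The transition matrix P and the marked transition (i, j) are arbitrary choices,
# not the ones coming from the factorization above.
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.2, 0.5, 0.3],
              [0.4, 0.1, 0.5],
              [0.3, 0.3, 0.4]])  # irreducible, so hitting times have geometric tails
i, j = 0, 1                      # the marked transition (i, j)

xs = [0]
for _ in range(200_000):
    xs.append(rng.choice(3, p=P[xs[-1]]))

# indices n such that (x_n, x_{n+1}) = (i, j): the chain regenerates there
cuts = [n for n in range(len(xs) - 1) if xs[n] == i and xs[n + 1] == j]
gaps = np.diff(cuts)             # analogues of the return times q_k for k >= 1

# empirical check of an exponential tail P(gap > t) <= C exp(-beta t)
for t in (5, 10, 20, 40):
    print(t, float(np.mean(gaps > t)))  # decays roughly geometrically in t
```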
§ PROOF OF THE RESULTS In this section, we use Theorem <ref> and Theorem <ref> together with Lemmas <ref> and <ref> to prove the results stated in the introduction. Most of the proofs are straightforward application of Theorem <ref> and Lemma <ref>, for the probabilistic estimates on the coefficients, on the coefficient and on the speed of convergence to the invariant measure; and Lemma <ref> for the estimates on the spectral radius, on the spectral gap and on the dominant eigenspace. Unexpectedly[This result is well known in the L^1 case without any algebraic assumption on the support of the measure.], the trickiest part is to show the almost sure convergence result in Theorem <ref>, namely that (γ_n)/n →σ(ν). This is also the only reason why we need Lemma <ref>. The question whether (γ_n)/n converges almost surely (or even in probability) without moment conditions remains open. §.§ Law of large numbers and large deviations inequalities for the singular gap In this section, we define the escape speed of a random product of matrices using Theorem <ref> but not the moment estimate (<ref>). We will use usual ergodic theory but only for the proof of the almost sure convergence. Given (x_n) a random sequence of real numbers and σ∈, we say that the sequence (x_n) satisfies (exponential) large deviations inequalities below the speed σ if for all α < σ, we have lim sup1/nlog((x_n ≤σ n)) < 0. Note that if the distribution of (x_n) is a Dirac measure then it satisfies large deviations inequalities below the speed σ if and only if lim infx_n/n≥σ. In Lemma <ref> in appendix, we show that the notion of large deviations behaves well when taking the sum (<ref>) (<ref>), maximum (<ref>) or minimum (<ref>) of finitely many random sequences and also when composing random sequences of integers (<ref>). [Escape speed and large deviations inequalities for self-aligned measures] Let E be a Euclidean vector space and let 0 < <1. Let κ be a probability distribution on End(E) and let (g_k) ∼κ^⊗. Assume that almost surely and for all 0 ≤ i≤ j≤ k, we have g_i⋯ g_k-1Å^/4 g_j⋯ g_k-1. Assume also that almost surely and for all n ∈, we have g_n ≠ 0. Then there is a limit σ(κ)∈[0, +∞] such that (g_n)/n→σ(κ) almost surely and: ∀α < σ(κ), ∃ C, β >0, ∀ n∈, ((g_n) ≤α n) ≤ C exp(-β n). If we moreover assume that (_*κ) > 2|log()|+4log(2), then σ(κ) > 0. Let N∈. Let σ_N := 1/N(_*κ^*N) = 1/N((g_N)). For all k ∈, let x_k^N := (g_kN⋯ g_(k+1) N - 1). Then (x_n^N) is i.i.d. and takes positive values and (x_n^N) = Nσ_N for all n. Then by Corollary <ref>, the sequence (x^N_k)_k∈ satisfies large deviations inequalities under the speed N σ_N. Moreover, by (<ref>) in Lemma <ref> applied to g_kNÅ^/4 g_kN⋯ g_(k+1) N - 1 for all 0≤ k < n/N and then to g_N ⌊n/N⌋Å^/4g_N ⌊n/N⌋⋯ g_n-1, we have: ∀ n ∈, (g_n) ≥ x^N_⌊n/N⌋ - n 2|log()|+4log(2)/N. Hence, by (<ref>) applied to (x^N_⌊n/N⌋)_n summed with (n 2|log()|+4log(2)/N)_n and (<ref>) applied to (x_n)_n composed with (⌊n/N⌋)_n in Lemma <ref>, the sequence (g_n) satisfies large deviations inequalities below the speed σ_N-2|log()|+4log(2)/N. This is true for all N ∈ so (g_n) satisfies large deviations inequalities below the speed σ(κ) := lim sup_N∈σ_N. Let T : Γ^→Γ^; (γ_k)_k∈↦ (γ_k+1)_k∈. The transformation T is ergodic for the measure μ := κ^⊗. For all n ∈, let f_n : (γ_k)_k∈↦ 2|log()|+4log(2)-(γ_n). Then f_n is bounded above so _μ(f_n)≤ 2|log()|+4log(2) for all n ∈. 
Let m,n be integers, then by (<ref>) in Lemma <ref>, we have almost surely for g ∼μ: (g_0⋯ g_n+m-1) ≥(g_0⋯ g_n-1) + (g_n ⋯ g_n+m-1) - 2|log()| - 4log(2) Hence f_n+m(g) ≤ f_n(g) + f_m∘ T^n(g). So by Kingman's sub-additive ergodic Theorem <cit.>, the sequence f_n/n converges μ-almost everywhere to lim inf(f_N*μ)/N = lim inf2|log()|+4log(2)/N - σ_N and this inferior limit is actually a limit by classical sub-additivity. Therefore (g_n)/n→σ(κ) and σ(κ) ≥(_*κ) - 2|log()| - 4log(2) almost surely by sub-additivity. [Large deviations inequalities for the singular gap] Let E be a Euclidean vector space and let ν be a strongly irreducible and proximal probability distribution on End(E). Let κ̃_0,κ̃_1,κ̃_2 be as in Theorem <ref> for ρ = 1/4 and K = 10. Let κ̃:= κ̃_1 ⊙κ̃_2 and let κ := Π_*κ̃. Let σ := σ(κ)/(L_*κ̃). Let (γ_n)∼ν^⊗. Then the random sequence ((γ_n))_n∈ satisfies large deviations inequalities below the speed σ in the sense of Definition <ref> ∀α < σ, ∃ C, β >0, ∀ n∈, ((γ_n) ≤α n) ≤ C exp(-β n). Let 0 < ≤ 1 be as in Theorem <ref>. Let (γ^w_k) ∼κ̃_0 ⊗(κ̃_1 ⊗κ̃_2). To all integer n ∈, we associate q_n =: max{q∈ | w_2q≤ n} and a random integer r_n such that γ^w_2q_n-2r_n-1Å^(γ_w_2q_n-2r_n⋯γ_n-1) or r_n ≥ q_n. Note that the conditional distribution of the sequence γ^w_0,γ^w_1, …, γ^w_2q_n-1,γ_w_2q_n⋯γ_n-1 relatively to q_n is 1/4-ping-pong for all values of q_n. Hence, we may assume that r_n ∼𝒢_1/4 for all n, by Lemma <ref>. The distribution L_*κ̃_i has finite exponential moment and is supported on _≥ 1 for all i. By Corollary <ref>, the random sequence (w_2m-w_0)_m satisfies large deviations inequalities around the speed (L_*κ̃) ∈ (0, +∞). Then by (<ref>) in Lemma <ref>, the random sequence (w_2m)_m also does. By (<ref>) in Lemma <ref>, the random sequence (q_n)_n satisfies large deviations inequalities around the speed (L_*κ̃)^-1 and by (<ref>) in Lemma <ref>, (q_n-r_n-1) also does. Then by Lemma <ref>, and by composition (<ref>) in Lemma <ref>), the sequence ((γ^w_1⋯γ^w_2q_n-2r_n-2))_n satisfies large deviations inequalities below the speed σ. Now by (<ref>) in Theorem <ref>, we have γ^w_0Å^/4γ^w_1⋯γ^w_2q_n-2r_n-2, so by (<ref>) in Lemma <ref>, the sequence ((γ^w_0⋯γ^w_2q_n-2r_n-2))_n satisfies large deviations inequalities below the speed σ. Moreover, we have: γ^w_0⋯γ^w_2q_n-2r_n-2Å^/4γ^w_2q_n-2r_n-1Å^(γ_w_2q_n-2r_n⋯γ_n-1) and (γ^w_2q_n-2r_n-1)≥ K|log()| + Klog(2) ≥ 2|log(/2)|+3log(2) so by the transpose of Lemma <ref>, we have: γ^w_0 ⋯γ^w_2q_n-2r_n-1Å^/4 (γ_w_2q_n-2r_n⋯γ_n-1). Hence ((γ_n))_n satisfies large deviations inequalities below the speed σ Note that in Theorem <ref>, we do not claim that (γ_n)/n→σ almost surely. We do however claim that in Theorem <ref>. The remaining part of this paragraph is dedicated to the proof of that claim. The usual proof using Lyapunov coefficients does not work in our case because σ may be finite even when ν has infinite first moment. Kingman's theorem can not be used either because is not sub-additive nor super-additive. In fact, we will really use the strong irreducibility of ν to prove it with the following trick. Let E be a Euclidean vector space and let ν be a strongly irreducible and proximal probability distribution on End(E). Let (γ_n) ∼ν^⊗. Let be as in Theorem <ref> for ρ= 1/4 and K = 10. There exist l_0∈ and 0 < β < 1 such that: ∀ g ∈Γ, (∀ n≥ l_0, gÅ^/4γ_n) > β. We use the notations of the proof of Theorem <ref>. Let 0< ≤ 1 and κ̃_0,κ̃_1,κ̃_2 be as in Theorem <ref> for ρ = 1/4 and K = 10. Let (γ^w_k) ∼κ̃_0 ⊗(κ̃_1 ⊗κ̃_2). 
For all n ∈, let q_n =: max{q∈ | w_2q≤ n} and let r_n be the smallest integer such that γ^w_2q_n-2r_n-1Å^(γ_w_2q_n-2r_n⋯γ_n-1) or r_n ≥ q_n. By Lemma <ref>, r_n has a bounded exponential moment that does not depend on n. Note that if r_n < q_n - 1, then by (<ref>) in Theorem <ref>, we have: γ_0γ_1Å^/4γ^w_2⋯γ^w_2q_n - 2 r_n -2Å^/4γ^w_2q_n - 2 r_n - 1Å^γ_w_2 q_n - 2r_n⋯γ_n-1 so we have γ_0γ_1Å^/8γ_w_2⋯γ_n-1 by Lemma <ref>. By Lemma <ref>, we may assume that (q_n-r_n) satisfies large deviations inequalities below the speed (L_*κ̃)^-1 > 0. Let l'_0 be such that (∀ n≥ l'_0, r_n < q_n) ≥1/2. Now let m∈ and α >0 be as in Corollary <ref> for ρ = 1/4 and K = 10. Let γ_-m, …, γ_-1∼ν^⊗ m be independent of (γ^w_k). Then we have: (g Å^γ_-m⋯γ_-1Å^(γ^w_0γ^w_1) ∩(γ_-m⋯γ_-1)≥ K|log()| + Klog(2)) ≥α (1-2ρ) ≥α/3. Now if we assume that: g Å^γ_-m⋯γ_-1Å^γ^w_0γ^w_1Å^/8γ_w_2⋯γ_n-1 Then by Lemma <ref>, we have gÅ^/4γ_-m⋯γ_n-1. Now note that (<ref>) holds for the conditional distribution relatively to (γ_n)_n≥ 0. Hence, we have: (∀ n ≥ l'_0, gÅ^/4γ_-m⋯γ_n-1) ≥α/6. Moreover (γ_k-m)_k≥ 0∼ν^⊗, therefore, we have (<ref>) for l_0 =l'_0 +m. Now we will use Lemma <ref> to deduce the almost sure convergence result in Theorem <ref> from the almost sure convergence result in Lemma <ref>. [Almost sure convergence] Let E be a Euclidean vector space, let Γ= End(E), let ν be a strongly irreducible and proximal probability distribution on Γ. Let 0< <1 and let κ̃_0,κ̃_1,κ̃_2 be as in Theorem <ref> for ρ = 1/4 and K = 10. Let κ̃:= κ̃_1 ⊙κ̃_2 and let κ := Π_*κ̃. Let σ := σ(κ)/(L_*κ̃). Let (γ_n)∼ν^⊗. Then (γ_n)/n→σ almost surely and σ > 0. Let (γ^w_k) ∼κ̃_0 ⊗(κ̃_1 ⊗κ̃_2)^⊗. Then (γ_n)∼ν^⊗ by (<ref>) in Theorem <ref>. By Theorem <ref>, ((γ_n))_n∈ satisfies large deviations inequalities, below the speed σ. Hence lim inf(γ_n)/n≥σ almost surely. Therefore, we only need to show that lim sup(γ_n)/n≤σ almost surely. Note that σ > 0 by Lemma <ref>. Assume by contradiction that σ < +∞ and (lim sup(γ_n)/n > σ) > 0. Let δ > 0 be such that (lim sup(γ_n)/n≥ (1 + 2δ) σ) ≥δ. Then for all integer n_0 ∈, we have: (∃ n≥ n_0, (γ_n)≥ n(1 + δ)σ) ≥δ. Let l_0∈ and 0 < β < 1 be as in Lemma <ref> Assume that they also satisfy (<ref>) for the transpose of ν (which is strongly irreducible and proximal). Then, for all g ∈End(E), and for all n ∈, we have: (∀ l_0 ≤ l ≤ n, γ_n-l⋯γ_n-1Å^/4 g) ≥β. Note that (<ref>) also works when g is a random endomorphism which is independent of the word (γ_0, … , γ_n-1). Let a_0 ∈ be the smallest integer such that (w_0 > a_0) ≤β^2δ/4 and let a_1 := a_0 + l_0. We use the convention min∅ = +∞. Let m_0 ∈. Let n_0 be the smallest integer such that (w_2m_0+1≥ n_0 + a_1 + l_0) ≤β^2δ/3. We define: n_1 := min{n≥ n_0 | (γ_a_1⋯γ_a_1 + n)≥ n (1+δ)σ}. Then (n_1 ≠ +∞) ≥δ. Moreover, n_1 is a stopping time so the random word (γ_a_1, … , γ_a_1 + n_1) is independent of the random sequence (γ_a_1 +n_1 + k + 1)_k ∈. Both are also independent of the word (γ_0,…,γ_a_1 - 1) by construction. Let (γ_n)_n < 0∼ν^⊗_- be independent of (γ_n)_n ≥ 0. Then by Lemma <ref> applied to g = γ_a_1⋯γ_a_1 + n_1 and to the random sequence (γ_a_1 +n_1 + k + 1)_k ∈ on the right (k ≥ a_1 + n_1 + l_0) and by (<ref>) on the left (j ≤ a_0), we have the following: (n_1 < +∞∩∀ j ≤ a_0,∀ k ≥ a_1 + n_1 + l_0,γ_j⋯γ_a_1-1Å^/4γ_a_1⋯γ_a_1+n_1-1Å^/4γ_a_1+n_1⋯γ_k-1) ≥β^2 δ. Let D := 4|log()|+10log(2). Then by Lemma <ref>, we have: (n_1 < +∞∩∀ j ≤ a_1,∀ k ≥ a_1 + n_1 + l_0, (γ_j⋯γ_k-1)≥ n_1(1+δ)σ - D) ≥β^2δ. Now we define the random integer: m_1 := min{k∈ | w_2k+1 > a_1 + n_1 + l_0}. 
Then by (<ref>) with k = w_2m_1+1-1 ≥ a_1 + n_1 + l_0 we have: (m_1 < +∞∩∀ j ≤ a_1, (γ_j⋯γ_w_2m_1+1-1)≥ n_1(1+δ)σ - D ) ≥β^2δ. Note also that with probability at least 1-β^2δ/3, we have w_0 ≤ a_1, therefore: (m_1 < +∞∩(γ_w_0⋯γ_w_2m_1+1-1)≥ n_1(1+δ)σ - D ) ≥β^2δ. Note also that by minimality of m_1, we have w_2m_1-1-a_1-l_0 ≤ n_1. Moreover, with our notations, we have γ_w_0⋯γ_w_2m_1+1-1 = γ^w_1⋯γ^w_2m_1. Hence, we have: (m_1 < +∞∩(γ_w_0⋯γ_w_2m_1+1-1)≥ (w_2m_1-1-a_1-l_0) (1+δ) σ - D) ≥β^2δ - β^2δ/3. Moreover, (m_1 < m_0) ≤β^2δ/3 by construction, so we have: ( m_0 ≤ m_1 < +∞∩(γ_w_0⋯γ_w_2m_1+1-1)≥ (w_2m-1-a_1-l_0) (1+δ) σ - D ) ≥β^2δ - 2 β^2δ/3. Hence, by taking k = m_1, we have: (∃ k ≥ m_0, (γ_w_0⋯γ_w_2k+1-1)≥ (w_2k-1-a_1-l_0) (1+δ) σ - D) ≥β^2δ/3. The above is true for all m_0, therefore: ( lim sup_k → + ∞(γ^w_1⋯γ^w_2k)+D/w_2k-1-a-1-l_0≥ (1+δ)σ) ≥β^2δ/3. Moreover w_2k-1≥ 2k-1 for all k so we can get rid of the constants in the lim sup and we have: ( lim sup_k → + ∞(γ^w_1⋯γ^w_2k)/w_2k-1≥ (1+δ)σ) ≥β^2δ/3. Now it remains to show that (<ref>) is in contradiction with Lemma <ref>. We know that (γ^w_1⋯γ^w_2k)/k→σ(κ) > 0 almost surely by Lemma <ref>. Moreover w_1+⋯ + w_2k - 2/k-1→(L_*κ̃) > 0 almost surely by the law of large numbers, hence w_2k - 1/k = w_0+⋯ + w_2k - 2/k→(L_*κ̃). lim_k → +∞(γ^w_1⋯γ^w_2k)/w_2k-1 = σ(κ)/(L_*κ̃) = σ. which contradicts (<ref>). Hence lim sup(γ_n)/n≤σ almost surely, which concludes the proof. §.§ Contraction property In this paragraph, we prove the following theorem. Given ν a strongly irreducible and proximal probability measure, we write σ(ν) for the quantity σ defined in Lemma <ref>. Let E be a Euclidean vector space, let Γ= End(E). We define the set of contracting sequences: Ω'(E) := {(γ_n)∈Γ^ | ∀ n ∈, γ_n ≠{0} and ∀ k ∈, ∀∈ (0,1), lim sup_m,n → +∞max_u ∈ U^(γ_k⋯γ_n) ∖{0}, u' ∈ U^(γ_k⋯γ_m) ∖{0}([u],[u']) = 0 }. We define T := Γ^→Γ^ to be the Bernoulli shift and we define l^∞ :Ω'(E) →𝐏(E) to be the only map such that: ∀ (γ_n) ∈Ω(E), lim sup_m,n → +∞max_u ∈ U^(γ_0⋯γ_n-1) ∖{0}([u],l^∞(γ)) = 0. Let: Ω'(E) := {γ∈Ω(E) | ∀ k∈, γ_k l^∞(T^k+1γ) = l^∞(T^kγ) }. Note that given E a Euclidean vector space, the space Ω(E) defined in Definition <ref> is measurable and T-invariant. Moreover l^∞ is T-equivariant on Ω(E) in the sense that l^∞(γ) = γ_0 l^∞(Tγ). Note also that Ω'(E) ≠Ω(E). For example let E = ^2, and let π_1 and π_2 be the orthogonal projections onto the first and second coordinates. If γ_0 = π_1 and γ_k = π_1 +2π_2 for all k ≥ 1, then γ_n = π_1 for all n ≥ 1 and [γ_k⋯γ_n] m →∞⟶ [π_2] for all k ≥ 1. So l^∞(γ) is the first coordinate axis and ∞(Tγ) is the second coordinate axis. Hence γ_0 l^∞(Tγ) = [0] ≠ l^∞(γ) so γ∈Ω'(E) ∖Ω(E). Let E be a Euclidean vector space, let Γ= End(E), let ν be a strongly irreducible and proximal probability distribution on Γ. Let γ = (γ_n)_n∈∼ν^⊗. Then γ∈Ω(E) almost surely. Let α < σ(ν). There exist constants C,β > 0 such that: (∃ u ∈ U^1(γ_n)∖{0}, ([u], l^∞(γ))≥exp(-α n)) ≤ Cexp(-β n). Moreover, for all v ∈ E, we have: (([γ_n v], l^∞(γ))≥exp(-α n) | γ_n v ≠ 0) ≤ Cexp(-β n). Let ∈ (0,1) and let κ̃_0,κ̃_1,κ̃_2 be as in Theorem <ref> for ρ = 1/4 and K = 10. Let (γ^w_k) ∼κ̃_0 ⊗(κ̃_1 ⊗κ̃_2)^⊗. By Corollary <ref>, there is a limit line l^∞ such that: ∀ m ∈, ∀ u ∈ U^1(γ^w_m)∖{0}, ([u], l^∞) ≤4/exp(-(γ^w_m)). Then we necessarily have l^∞ = l^∞(γ) whenever γ∈Ω(E). To all integer n ∈, we associate q_n =: max{q∈ | w_2q≤ n} and a random integer r_n∼𝒢_ρ such that γ^w_2q_n-2r_n-1Å^(γ_w_2q_n-2r_n⋯γ_n-1) or r_n ≥ q_n. 
Then by Lemma <ref>, if we assume that r_n < q_n, then γ^w_2q_n-2r_nÅ^(γ_w_2q_n-2r_n⋯γ_n-1), hence by Lemma <ref>, we have: ∀ u ∈ U^1(γ_n)∖{0}, ∀ u' ∈ U^1(γ^w_2q_n-2r_n)∖{0}, ([u],[u']) ≤4/exp(-(γ^w_2q_n-2r_n)). In this case, by triangular inequality: ∀ u ∈ U^1(γ_n)∖{0}, ([u],l^∞) ≤8/exp(-(γ^w_2q_n-2r_n)). By Corollary <ref>, and by Lemma <ref> applied to the sequence (w_k), the random variable (w_q_n - r_n - n) have a bounded exponential moment that does not depend on n. Therefore, by (<ref>) Lemma <ref>, the random sequence (w_q_n - r_n) satisfies large deviation inequalities around the speed 1. Then ((γ^w_2q_n-2r_n))_n satisfies large deviations inequalities below the speed σ(ν) by Theorem <ref> and by (<ref>) in Lemma <ref>. Hence we have (<ref>). The above reasoning also works for T^kγ for all k therefore γ∈Ω'(E) almost surely. Now to show (<ref>), we use the same reasoning. We identify E with Hom(, E). Let v ∈ E ∖{0}. Note that U^1(γ_n v) = γ_n v. For all n ∈, we define a random integer r_n^v such that γ^w_2q_n-2r_n^v-1Å^(γ_w_2q_n-2r_n^v⋯γ_n-1v) or r_n^v ≥ q_n. Then by the same reasoning as for the proof of (<ref>), for all n such that r_n^v < q_n and γ_nv≠ 0, we have: ([γ_nv],l^∞) ≤8/exp(-(γ^w_2q_n-2r_n^v)). Note moreover that ((γ_n v = 0))_n∈, v≠ 0 is bounded above by a constant by Lemma <ref>. Hence we have (<ref>). Let v ∈ E ∖(ν). The above reasoning implies that l^∞ = lim [γ_n v] almost surely so l^∞(γ) = γ_0 l^∞(Tγ) almost surely. Moreover, for all k ∈, the random sequence T^kγ has distribution ν^⊗, so we also have l^∞(T^kγ) = γ_k l^∞(T^k+1γ). Therefore γ∈Ω(E) almost surely. Given g ∈End(E) and v ∈ E, we write g[v] or [g][v] for [gv], this is an element of 𝐏(E) ⊔{[0]}. That way we have a measurable (but not everywhere continuous) semi-group action End(E) ↷𝐏(E) ⊔{[0]}, and a convolution product Prob(End(E)) ×Prob(𝐏(E) ⊔{[0]}) →Prob(𝐏(E) ⊔{[0]}). Note that given g ∈End(E) ∖{0} and v ∈ E ∖{0}, such that gv ≠ 0, the map ([h], [x]) ↦ [hx] is continuous at ([g], [v]). Let un now prove Corollary <ref>, which says that given E a Euclidean vector space and ν a strongly irreducible and proximal probability distribution on End(E), we have a unique ν-stationary probability measure ξ_ν^∞ on 𝐏(E). Moreover, for all probability measure ξ on 𝐏(E) ∖(ν), the sequence (ν^*n*ξ)_n∈ converges exponentially fast to ξ_ν^∞ for the dual of the Lipschitz norm. Let ν be any strongly irreducible and proximal distribution on End(E). Let ξ_ν^∞ be the distribution of l^∞_*(ν^⊗). Let (γ_n)∼ν^⊗. Let l := l^∞(γ) and let l' := l^∞∘ T(γ). Then by (<ref>) in Theorem <ref> applied to a vector v ∈ E ∖ker(ν), we have l = γ_0 l' almost surely. Moreover, l and l' both have distribution ξ_ν^∞ and γ_0 and l' are independent so ξ_ν^∞ is ν-stationary. Let ξ be a probability measure on 𝐏(E) that is supported on 𝐏(E)∖(ν), let λ≥ 0 and let f:𝐏(E)→ be λ-Lipschitz. Let (l,(γ_n))∼ξ⊗ν^⊗ and let l^∞ := l^∞(γ). Let 0 < α < σ(ν) and let C, β > 0 be as in Theorem <ref>. Note also that such β, C do not depend on ξ. Then note that 𝐏(E) has diameter 1. Therefore, for all δ∈(0,1) and for all n ∈, we have: |(f(l^∞)) - (f(γ_n l))| ≤(|f(l^∞)-f(γ_n l)|) ≤λ((l^∞,γ_nl) ≤λ((l^∞,γ_nl) ≥δ) + λδ. Moreover, since l is independent of γ, we know that ((l^∞,γ_n l) ≥exp(-α n)) ≤ C exp(-β n) for all n by (<ref>) in Theorem <ref>. So by (<ref>) applied to all n∈, with δ = exp(-α n), we have: ∀ n∈, |(f(l^∞)) - (f(γ_n l))| ≤λ C exp(-β n) + λexp(-α n) ≤λ (C+1) exp(-min{α, β} n). 
To conclude, note that for all n ∈, the random variable γ_n l has distribution law ν^*n*ξ. §.§ Asymptotic estimates for the spectral gap and dominant eigenspace We use exactly the same strategy as for the proofs of Theorems <ref> and <ref> but we use Lemma <ref> instead of Lemma <ref>. Note that given E a vector space and g,h ∈End(E), then (gh) = (hg) so gh is proximal if and only if hg is and in this case, E^+(gh) = g E^+(hg). Note that given g ∈End(E) a proximal matrix, the constant sequence γ : n ↦ g is in Ω(E) and l^∞(γ) = E^+(g). In this section we use Lemma <ref> applied to the extraction constructed in Theorem <ref>. We could use the extraction constructed in Theorem <ref> because we do not care about moments or independence. However need the notion of inductive alignment Å^S as defined in the beginning of Section <ref> to be able to use Lemma <ref> as we explained in Remark <ref>. Let ν be a strongly irreducible and proximal probability distribution over End(E). Let (γ_n)∼ν^⊗ and let l^∞ := l^∞(γ). Then we have: ∀α < σ(ν), ∃ C,β > 0, ∀ n∈, ((γ_n)≤α n) ≤ Cexp(-β n), ∀α < σ(ν), ∃ C,β > 0, ∀ n∈, ((E^+(γ_n), l^∞)≥exp(-α n) ) ≤ Cexp(-β n). Let 0 ≤≤ 1 and ν̃_s be as in Corollary <ref> applied to ν with ρ = 1/4 and K = 10. Let μ̃ be as in Theorem <ref> applied to ν and ν̃_s with S = 𝐬𝐮𝐩𝐩(Π_*ν̃_s) and Å = Å^. Let (γ^w_k) ∼μ̃. To all integer n ∈, we associate q_n =: max{q∈ | w_2q≤ n} and c_n the smallest integer such that: γ^w_2q_n-2c_n-1Å(γ_w_2q_n-2c_n⋯γ_n-1γ_0⋯γ_w_2c_n) and (γ_w_2q_n-2c_n-2⋯γ_n-1γ_0⋯γ_w_2c_n) Åγ^w_2c_n or 2c_n ≥ q_n. By Lemma <ref>, c_n has a bounded exponential moment that does not depend on n. Moreover γ^w_2q_n-2c_n-3Å^S γ^w_2q_n-2c_n-2 and γ^w_2q_n-2c_n-2Åγ^w_2q_n-2c_n-1. Therefore, we have γ^w_2q_n-2c_n-3Å^S(γ_w_2q_n-2c_n-2, …,γ_n-1, γ_0 , …, γ_w_2c_n) By Proposition <ref>, we have: γ^w_2q_n-2c_n-3Å^/2(γ_w_2q_n-2c_n -2⋯γ_n-1γ_0 ⋯γ_w_2c_n). By definition of c_n and by Theorem <ref> and Proposition <ref> applied to (γ^w_k)_2c_n < k < 2q_n -2c_n, which satisfies (<ref>) in Theorem <ref>, we also have: (γ_w_2q_n-2c_n -2⋯γ_n-1γ_0 ⋯γ_w_2c_n) Å^γ^w_2c_n +1Å^/2γ^w_2c_n +2Å^⋯Å^/2γ^w_2q_n-2c_n-4Å^γ^w_2q_n-2c_n-3 Moreover, each of these matrices have a squeeze coefficient larger than 4 |log()| + 7 |log(2)| by Proposition <ref> again. Let a_n := w_2q_n-2c_n-3 and let h_n := γ_a_n⋯γ_n-1γ_0⋯γ_a_n-1. Then by Lemma <ref> we have h_n Å^/4 h_n and γ_a_nÅ^/4 h_n when 2c_n + 2 ≤ q_n. Hence by Lemma <ref>, we have: (h_n) ≥(h_n) - 2|log()| - 6log(2) Moreover, we have (γ^w_2c_n +1⋯γ^w_2q_n-2c_n-3) Å^/4(γ_w_2q_n-2c_n-2⋯γ_n-1γ^w_0 ⋯γ^w_2c_n), so by (<ref>) in Lemma <ref>, we have: (h_n) ≥(γ^w_2c_n +1⋯γ^w_2q_n-2c_n-3) - 2|log()| - 4log(2). Now we claim that ((γ^w_2c_n +1⋯γ^w_2q_n-2c_n-3))_n∈ satisfies large deviations inequalities below the speed σ(ν). Note that this implies that we have (<ref>) because (h_n) = (γ_n) by conjugation. Let α < σ and let 0 < δ < 1/2 be such that α/1-2δ < σ. By Theorem <ref>, there are constants C, β > 0 such that: ∀ i < j, ((γ_i⋯γ_j-1) ≤ (j-i) α/1-2δ) ≤ C exp(-β (j-i)). Write C' := C (1-exp(-β))^-2 and β' := β(1-2δ) > 0, then we have: (∃ i ≤δ n, ∃ j ≥ n- δ n, (γ_i⋯γ_j-1) ≤ (j-i) α/1-2δ) ≤∑_i = 0^⌊δ n ⌋∑_j = ⌈ n-δ n ⌉^n C exp(-β (j-i)) ≤∑_i = -∞^⌊δ n ⌋∑_j = ⌈ n-δ n ⌉^+∞ C exp(-β (j-i)) ≤ C' exp(-β' n). Note that, if we take i = w_2c_n+1 and j = w_2q_n - 2c_n - 2, then γ_i⋯γ_j-1 = γ^w_2c_n +1⋯γ^w_2q_n-2c_n-3. 
Hence we have: ((γ^w_2c_n +1⋯γ^w_2q_n-2c_n-3) ≤α n) ≤(δ n < w_2c_n+1) + (w_2q_n - 2c_n - 2 < (1-δ) n) + (∃ i ≤δ n, ∃ j ≥ n-δ n, (γ_i⋯γ_j-1) ≤ (j-i) α/1-2δ) Moreover w_2c_n+1 has finite exponential moment so ((δ n < w_2c_n+1))_n∈ decreases exponentially fast Moreover, by (<ref>) applied to (w_m) and (q_n) and (<ref>) applied to (c_n) in Lemma <ref>, the random sequence (w_2q_n - 2c_n - 3)_n∈ satisfies large deviations inequalities below the speed 1 so ((w_2q_n - 2c_n - 2 < (1-δ) n))_n∈ decreases exponentially fast. Hence, by (<ref>), the sequence (((γ^w_2c_n +1⋯γ^w_2q_n-2c_n-3)≤α n))_n∈ decreases exponentially fast in n, which proves the claim so we have (<ref>). Now we prove (<ref>). By Lemma <ref> applied to h_n Å^ h_n, we have: ∀ u∈ U^/4(h_n)∖{0}, (E^+(h_n), [u]) ≤16/exp(-(h_n)). Let e_n be a random non-zero vector such that e_n ∈ E^+(h_n) when 2c_n + 2 ≤ q_n. By (<ref>) and by the above reasoning, ((h_n))_n satisfies large deviations inequalities below the speed σ(ν) > 0. Hence there exist constants C, β > 0 such that: 1 - C exp(-β n) ≤(∀ u∈ U^/4(h_n)∖{0} , (E^+(h_n), [u]) ≤/8) ≤(γ_a_nÅ^/8 e_n). Moreover, By Lemma <ref> and by the above reasoning, we have γ_a_nÅ^/4 h_n. Hence by lemma <ref>, we have: (∀ u∈ U^/4(h_n)∖{0} , (E^+(h_n), [u]) ≤/8) ≤(γ_a_nÅ^/8 e_n) Moreover, when γ_a_nÅ^/8 e_n, then by Lemma <ref>, we have: ∀ u∈ U^1(γ_a_n)∖{0}, ([γ_a_ne_n], [u])≤8/exp(-(γ_a_n)). Then by Corollary <ref> and by triangular inequality, we have: ([γ_a_ne_n], l^∞(γ))≤12/exp(-(γ_a_n)). By conjugacy, we have γ_a_ne_n = E^+(γ_n). Moreover, the random sequence (a_n) satisfies large deviations inequalities below the speed 1 so by Theorem <ref> and by (<ref>) in Lemma <ref>, the random sequence ((γ_a_n))_n satisfies large deviations inequalities below the speed σ(ν). This proves (<ref>). Let E be a Euclidean space. Let ν be a strongly irreducible and proximal probability distribution over End(E). Let (γ_n) ∼ν^⊗. Let σ(ν) := lim(γ_n)/n be as in Lemma <ref>. Then σ(ν) > 0 by Theorem <ref>. Then (<ref>) in Theorem <ref> is a direct consequence of (<ref>) in Theorem <ref> and (<ref>) in Theorem <ref>. The above proves Theorem <ref>. Moreover (<ref>) is (<ref>) and (<ref>) is (<ref>) which proves Theorem <ref>. §.§ Limit flag for totally strongly irreducible distributions Now we can give the following corollary which is a reformulation of the former results, written in a way that should remind of Oseledets' multiplicative ergodic Theorem. Let E be a Euclidean vector space and let ν be a probability distribution on End(E). We say that ν is totally strongly irreducible if for all k ∈{1, …, (E)-1} the measure ⋀^k_*ν is strongly irreducible. Let ν be a strongly irreducible probability distribution and let (γ_n) ∼ν^⊗. If ν is not proximal, we define σ(ν) := 0, note that then by <ref>, we have (γ_n)/n→ 0 almost surely. If ν is proximal, we define σ(ν) as in Theorem <ref>. Let ν be a distribution on End(E) that is totally strongly irreducible. For all 1≤ j ≤(ν), we define σ_j(ν) := σ(⋀^j_*ν). We define: Θ(ν) := {1≤ j ≤(ν) | σ_j(ν)≠ 0} Given h a matrix and j ≤(h), we define _j(h) := log(⋀^j h^2/⋀^j-1 h⋀^j+1 h) = (⋀^j h), for j > (h), we use the convention _j(h) = 0 and for all j ≥ 1, we define: _j(h) := (⋀^j h) = lim_n→ + ∞_j(h^n)/n. [Convergence of the Cartan projection with large deviations] Let E be a Euclidean spaces and let ν be a totally strongly irreducible probability distribution on End(E) of rank at least (E)-1. Let (γ_n)∼ν^⊗. For all 1 ≤ j ≤ d-1 we have almost surely _j(γ_n)/n→σ_j(ν). 
Moreover, for all j ∈Θ(ν) and for all α_j < σ_j(ν), there exist constants C,β >0 such that: ∀ n∈, (_j(γ_n)≤α_j n) ≤ Cexp(-β n). ∀ n∈, (_j(γ_n) ≤α_j n) ≤ Cexp(-β n). If we assume that j ∈Θ(ν), then (<ref>) is a reformulation of Theorem <ref> for ⋀^j_*ν and <ref> is a reformulation of Theorem <ref>. Otherwise σ_j(ν) = 0. We know from Lemma <ref> that there is a constant B such that _j(γ_n) ≤ B almost surely and for all n so _j(γ_n)/n→ 0 = σ_j(ν). Let E be a Euclidean space of dimension d ≥ 2. We denote by Gr(E) the set of vector subspaces of E. Given 0 ≤ k ≤ d, we denote by Gr_k(E) the set of subspaces of E that have dimension k (Gr_k(E))_1≤ k ≤ d are the level sets of the function : Gr(E) →{0,…, d}. Note that for all k, the set Gr_k(E) naturally embeds into 𝐏(⋀^k E) by the map V ↦ [v_1 ∧⋯∧ v_k] for v_1 ∧⋯∧ v_k any basis of V. Note that the image of Gr_k(E) by this embedding is a compact subset, we will abusively denote by Gr_k(E) the image of Gr_k(E) in 𝐏(⋀^k E). We write for the distance map on Gr_k(E) pulled back from the distance on 𝐏(⋀^k E) associated to the Euclidean metric on ⋀^k E. We call flag in E a totally ordered set of subspaces of E for all V, W ∈ F, we have V ⊂ W or W ⊂ V. We write Fl(E) for the space of flags in E. Given Θ⊂{1,…, d-1}, we denote by Fl_Θ(E) the space of flags F ∈Fl(E) such that (F) = Θ. Given k ∈Θ⊂{1,…, d-1} and given F ∈Fl(E), we write F_k for the single element of the set F ∩Gr_k(E). Note that End(E) naturally acts on the left on Fl(E). Moreover, for all Θ⊂{1,…, d-1}, the group GL(E) acts continuously on Fl_Θ(E). We remind that we denote by T the Bernoulli shift. Let E be a Euclidean vector space and let Θ⊂{1, …, (E)} We define: Ω'_Θ(E) := ⋂_k∈Θ∖{0}((^k)^⊗)^-1Ω(^k E), Ω_Θ(E) := {γ∈Ω'_Θ(E) | ∀ k ∈Θ, l^∞∘(^k)^⊗ (γ) ∈Gr_k(E)}. We also define the T-equivariant measurable map: F^∞_Θ = (F^∞_k)_k∈Θ: Ω_Θ(E)⟶Fl_Θ(E), with F_k^∞ = l^∞∘ (^k)^⊗ for all k. Using the Cartan decomposition on the exterior product, we can show that in fact Ω'_Θ(E) = Ω_Θ(E) but that is not the purpose of this article. [Convergence to the limit flag] Let E be a Euclidean vector space. Let d := (E), assume that d ≥ 2. Let ν be a totally strongly irreducible probability distribution on GL(E). Let Θ := Θ(ν) and let γ∼ν^⊗. Then γ∈Ω_Θ(E) almost surely. Let (α_k)_k∈Θ∈∏_k∈Θ(0, σ_k(ν)) be a non-random family of real numbers. Then there exist constants C, β > 0 such that for all non-random flag F ∈Fl_Θ, we have: (∃ k ∈Θ, (F^∞_k(γ),γ_n F_k)≥exp(-α_k n) ) ≤ C exp(-β n). This is a reformulation of Theorem <ref> in terms of flags. Let ν be a totally strongly irreducible probability measure on GL(E) and let k∈Θ(ν). Then ⋀^k_*ν is strongly irreducible and proximal. Let (γ_n) ∼ν^⊗. By Theorem <ref> applied to (⋀^kγ_n)_n∈, we have (⋀^k)^⊗(γ)∈Ω(^k E). Then we claim that l^∞∘(⋀^k)^⊗(γ) ∈Gr_k(E) almost surely. Hence γ∈Ω'_Θ(E) almost surely. Let (v_1, …, v_k)∈ E^k be a non-random free family. By Theorem <ref>, we have : ⋀^k γ_n(v_1, …, v_k) n → +∞⟶ l^∞∘(⋀^k)^⊗(γ) almost surely. Moreover, for all n ∈, we have ⋀^k γ_n[v_1∧⋯∧ v_k] = [γ_n v_1 ∧⋯∧γ_n v_k] ∈Gr_k(E), and Gr_k(E) is closed in 𝐏(⋀^k E). Hence l^∞∘(⋀^k)^⊗(γ) ∈Gr_k(E). Hence γ∈Ω_Θ(E) almost surely. To conclude, note that (<ref>) is simply the reformulation of (<ref>) for all the measures ⋀^k_*ν with k ∈Θ. Note that in the case of ν an totally strongly irreducible probability distribution on SL(E), one can actually take the pivotal extraction to be aligned in all Cartan projections. 
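Let us note in passing that, for a concrete matrix h, the j-th singular gap defined above is directly computable from the singular values: since the operator norm of ⋀^j h equals σ_1(h)⋯σ_j(h), it equals log(σ_j(h)/σ_j+1(h)). The short sketch below (illustration only; the Gaussian choice of the step distribution is an arbitrary stand-in for ν, not imposed by the results above) estimates its linear growth rate along a simulated product, i.e. the quantities σ_j(ν) appearing in the theorem above; for this choice of ν the limits are expected to be positive.

```python
# Illustration only: the j-th singular gap of h is log(sigma_j(h)/sigma_{j+1}(h)),
# computed via the SVD, and it grows linearly along a product of i.i.d. matrices.
# The Gaussian entries are an arbitrary choice of the step distribution nu.
import numpy as np

rng = np.random.default_rng(1)
d = 3

def singular_gap(h, j):
    s = np.linalg.svd(h, compute_uv=False)   # singular values, sorted decreasingly
    return np.log(s[j - 1] / s[j])            # = log(sigma_j / sigma_{j+1})

g = np.eye(d)
n = 400
for _ in range(n):
    g = g @ rng.normal(size=(d, d))
    g /= np.linalg.norm(g)                    # renormalise to avoid overflow;
                                              # ratios of singular values are unchanged

print([singular_gap(g, j) / n for j in (1, 2)])  # estimates of sigma_j(nu)
```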
With a correct adaptation of the works of <cit.>, one should be able to prove the following. [Poisson boundary] Let E be a Euclidean space od dimension d ≥ 3. Let ν be an totally strongly irreducible probability distribution supported on a discrete subgroup of SL(E). Assume that ν has finite entropy then the Poisson boundary of ν is isomorphic to Fl_Θ(ν)(E) endowed with the ν-stationary probability distribution F^∞_*ν^⊗. It follows directly from <cit.> when E = ^2 because any discrete subgroup in SL(^2) is either non-elementary hyperbolic and therefore we can apply <cit.>, or virtually cyclic and therefore not strongly irreducible or finite. §.§ Law of large numbers for the coefficients and for the spectral radius We remind that in Corollary <ref>, we have shown that given ν a strongly irreducible and proximal probability distribution on GL(E) and given 0 < ρ < 1/5, there is an integer m∈, a constant 0< <1 and a probability distribution ν̃_s that is absolutely continuous with respect to ν^⊗ m, has compact support and whose product is ρ-Schottky for Å^. We use the notations of Definitions <ref> and <ref> for the trunking ⌈·⌉ and the coarse convolution ·^↑ k. We formulate Theorems <ref> and <ref> in the following technical way to explicit the dependency of the constants C and β in Theorem <ref> in terms of ν. In particular, we can note that the lower bound of all β, so that Theorem <ref> is satisfied (for C = β^-1), is given by a function of ν that is lower semi-continuous for the weak-* topology on the space of strongly irreducible and proximal probability distributions on GL(E). [Strong law of large numbers for the coefficients] Let E be a Euclidean vector space and let ν be a probability measure on GL(E). Let (γ_n)∼ν^⊗. Let α, ∈ (0,1), let ρ∈ (0,1/5) and let ν̃_s be a compactly supported probability distribution on GL(E)^m such that Π_*ν̃_s is ρ-Schottky for Å^ and supported on the set {≥ 10 |log(/2)|} and αν̃_s ≤ν^⊗ m. Let B := max_kmax N∘χ_k^m(𝐬𝐮𝐩𝐩(ν̃_s)). Then there exist constants C,β > 0 that depend only on α, ρ, m and such that: ∀ f ∈ E^*∖{0}, ∀ v ∈ E∖{0}, ∀ n∈, ∀ t≥ 0, (logfγ_nv/| f γ_n v| > t) ≤ Cexp(-β n) + ∑_k=1^∞ Cexp(-β k) ⌈B ∨ N_*ν/1-α⌉^↑ k(t - 2|log()| - 3log(2),+∞). Let 0 < ≤ 1 and ν̃_s be as in Corollary <ref> for K = 10 and ρ = 1/4. Let μ̃ be as in Theorem <ref> for Γ := GL(E), for Å = Å^ and S = {≥ 10 |log(/2)|}. Let (γ^w)∼μ̃. By Proposition <ref>, we have γ^w_2kÅ^γ^w_2k+1Å^/2γ^w_2k+2 for all k ∈. Let n ∈ be fixed. Let q_n := max{q∈ | w_2q≤ n}. Then by (<ref>) in Theorem <ref>, the sequence fγ^w_0,γ^w_1, …, γ^w_2q_n - 1,γ_w_2q_n⋯γ_n-1 v is ρ-ping-pong. By Lemma <ref> applied to that sequence, we construct a random integer r_n ∼𝒢_ρ that is independent of (γ^w_2k)_k∈ and such that r_n ≥ q_n or: γ^w_2q_n - 2r_n - 1Å^(γ_w_2q_n -2r_n⋯γ_n-1 v). By the transpose of Lemma <ref>, we construct a random integer l ∼𝒢_ρ that is independent of (γ^w_2k)_k∈ and such that : (f γ^w_0⋯γ^w_2l) Å^γ^w_2l+1. Note that if l + r_n < q_n, then: (f γ^w_0⋯γ^w_2l) Å^γ^w_2l + 1Å^/2⋯Å^γ^w_2q_n - 2r - 1Å^(γ_w_2q_n - 2r_n⋯γ_n-1 v). Hence, by lemma <ref>, we have: (f γ^w_0⋯γ^w_2l) Å^/2(γ^w_2l+1⋯γ^w_2q_n-2r-1) Å^/2(γ_w_2q_n -2r⋯γ_n-1 v). Hence by Lemma <ref>: |fγ_n v|≥^2/8f γ^w_0⋯γ^w_2lγ^w_2l+1⋯γ^w_2q_n-2r_n-1γ_w_2q_n -2r_n⋯γ_n-1 v. Moreover by sub-multiplicativity: γ_n≤γ^w_0⋯γ^w_2lγ^w_2l+1⋯γ^w_2q_n-2r_n-1γ_w_2q_n -2r_n⋯γ_n-1. 
By definition of N, and by sub-additivity, we have: logγ^w_0⋯γ^w_2l + logf - logf γ^w_0⋯γ^w_2l ≤log(γ^w_0⋯γ^w_2l) - log(γ^w_0^-1⋯γ^w_2l^-1) ≤ N(γ^w_0⋯γ^w_2l) ≤∑_k=0^w_2l+1-1 N(γ_k) and by the same argument: logγ_w_2q_n -2r_n⋯γ_n-1 + logv - logγ_w_2q_n -2r_n⋯γ_n-1 v≤∑_k = w_2q_n - 2 r_n^n-1 N(γ_k). Therefore: logγ_n - log |fγ_n v| ≤∑_k=0^w_2l+1-1 N(γ_k) + ∑_k = w_2q_n - r_n^n-1 N(γ_k) + 2|log()| + 3log(2). Moreover, for all t ≥ B, and for all k ∈, by (<ref>) in Theorem <ref> with A = {N > t}, one has: (N(γ_k) > t | (w_j)_j∈) ≤N_*ν(t,+∞)/1-α. Note that, q_n only depends on (w_j)_j∈, moreover l and r_n are independent of (γ^w_2j)_j∈. Note also that when the sequence (w_j)_j∈ is so that the index k appears in an oddly indexed group (there exists j ∈ such that w_2j+1≤ k < w_2j+2), then N(γ_k) ≤ B so (N(γ_k) > t | (w_j)_j∈) = 0 on that set and trivially, we also have (N(γ_k) > t | (w_j)_j∈, l, r_n) = 0 on that set and for all values of l and r_n. Therefore, we have: (N(γ_k) > t | l, q_n, r_n) ≤N_*ν(t,+∞)/1-α. By Corollary <ref>, n -w_2q_n has a bounded exponential moment, with a bound that depends only on (α, ρ, m) and not on n. By Lemma <ref>, the random variables n - w_2q_n - r_n and w_2l+1 both have bounded exponential moment, with a bound that depends only on (α, ρ, m). In other words, there are constants C, β > 0 that only depend on (α, ρ, m) by construction and such that: ∀ k∈, (n - w_2q_n - 2r_n + w_2l+1 = k ∩ r_n ≤ q_n) ≤ C exp(-β k). Moreover by Lemma <ref>, the random variable w_2l+2r_n also has bounded exponential moment so we may also assume that: (l+r_n ≥ q_n) = (n ≤w_2l+2r_n) ≤ C exp(-β n). Hence, by Lemma <ref>, we have for all t ≥ 0: (logfγ_nv/| f γ_n v| > t | l+r_n < q_n) ≤∑_k=1^n Cexp(-β k) ⌈B ∨ N_*ν/1-α⌉^↑ k(t - 2|log()| - 3log(2),+∞). This proves (<ref>). [Strong law of large numbers for the spectral gap] Let E be a Euclidean vector space and let ν be a probability measure on GL(E). Let (γ_n)∼ν^⊗. Let α, ∈ (0,1), let ρ∈ (0,1/5) and let ν̃_s be a compactly supported probability distribution on GL(E)^m such that Π_*ν̃_s is ρ-Schottky for Å^ and supported on the set {≥ 10 |log(/2)|} and αν̃_s ≤ν^⊗ m. Let B := max_kmax N∘χ_k^m(𝐬𝐮𝐩𝐩(ν̃_s)). Then there exist constants C,β > 0 that depend only on α, ρ, m and such that: ∀ n∈, ∀ t≥ 0, (logγ_n/ρ_1(γ_n) > t) ≤∑_k=1^∞ Cexp(-β k) ⌈B ∨ N_*ν/1-α⌉^↑ k(t - 2|log()| - 5log(2),+∞). Let 0 < ≤ 1 and ν̃_s be as in Corollary <ref> for K = 10 and ρ = 1/4. Let μ̃ be as in Theorem <ref> for Γ := GL(E), for Å = Å^ and S = {≥ 10 |log(/2)|}. Let (γ^w)∼μ̃. Let n ∈. Let q_n : = max{k∈ | w_2k≤ n}. Let c_n be as in Lemma <ref> applied to the pivotal sequence (γ^w_0, γ^w_1, …, γ^w_2q_n - 1,γ_w_2q_n⋯γ_n-1). Then c_n has a finite exponential moment and is independent of (γ^w_0, γ^w_2, …, γ^w_2q_n - 2,γ_w_2q_n⋯γ_n-1). By the same argument as in the proof of Theorem <ref>, using (<ref>) in Lemma <ref>, we have: ρ_1(γ_n) ≥γ_w_2q_n-2c_n-2⋯γ_n-1γ_0 γw_2c_n+1-1γ^w_2c_n + 1⋯γ^w_2q_n-2c_n-3^2/32 Hence, with the same reasoning as in the proof of Theorem <ref>, we have: logγ_n-log(ρ_1(γ_n)) ≤∑_k = w_2q_n-2c_n-2^n-1 N(γ_k) + ∑_k = 0^w_2c+1-1 N(γ_k) + 2|log()| + 5log(2) However (<ref>) holds without conditions on c_n (with the convention w_-k = 0 for all k ∈), because: logγ_n-log(ρ_1(γ_n)) ≤∑_k = 0^n-1 N(γ_k). Moreover, for all t≥ B, and for all k ∈, one has: (N(γ_k) > t | c_n, q_n) ≤N_*ν(t,+∞)/1-α. Moreover by Lemma <ref> and Theorem <ref>; w_2c_n+1 + n- w_2q_n-2c_n-2 has a bounded exponential moment with a bound that depends only on (α, ρ, m). 
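Before deducing the main statements, let us illustrate numerically what the two tail bounds above say: along a product of i.i.d. invertible matrices, the normalised coefficient |f γ_n v| / (f γ_n v) and the normalised spectral radius ρ_1(γ_n)/γ_n stay of order one (their logarithms do not drift to -∞ linearly in n), even though γ_n itself grows exponentially. The sketch below is an illustration only; the Gaussian step distribution and the fixed pair (f,v) are arbitrary choices and are not part of the statements above.

```python
# Illustration only: the coefficient <f, gamma_n v> and the spectral radius rho_1(gamma_n)
# track the norm ||gamma_n|| up to a stochastically bounded factor.
# Gaussian matrices are an arbitrary stand-in for nu; (f, v) is a fixed generic pair.
import numpy as np

rng = np.random.default_rng(2)
d = 3
f = rng.normal(size=d)
v = rng.normal(size=d)

g = np.eye(d)
for n in range(1, 301):
    g = g @ rng.normal(size=(d, d))
    g /= np.linalg.norm(g, 2)                  # operator norm set to 1; ratios unchanged
    if n % 100 == 0:
        coeff = abs(f @ g @ v) / (np.linalg.norm(f) * np.linalg.norm(v))
        rho1 = max(abs(np.linalg.eigvals(g)))
        print(n, np.log(coeff), np.log(rho1))  # both should stay O(1) as n grows
```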
Theorems <ref> and <ref> Let E be a Euclidean vector space, let ν be a strongly irreducible and proximal probability measure on GL(E) and let (γ_n) ∼ν^⊗. Let ρ = 1/4 and K = 10 and let α, ∈ (0,1) and m∈ be as in Corollary <ref>. Let B, C, β > 0 be as in Theorem <ref>. Up to taking the minimal β and the maximal C, we may assume that B, C, β > 0 also satisfy the conclusions of <ref>. Let D := 2|log()| + 5log(2). Note that if N_* ν = δ_0, then N_* ν^*m = δ_0 for all k by sub-additivity of N. Moreover ≤ N on GL(E) so N_* ν≠δ_0 by proximality of ν. Let C_0, β_0 be as in Lemma <ref> for η = N_* ν. Then for all t ≥ 0, we have: ∑_k=1^∞ Cexp(-β k) ⌈B ∨ N_*ν/1-α⌉^↑ k(t - 2|log()| - 5log(2),+∞) ≤∑_k=1^∞ C_0exp(-β_0 k) N_*ν(t, + ∞) Therefore (<ref>) implies (<ref>) and (<ref>) implies (<ref>). Let V be a proper subspace of E, let 0< r ≤ 1. Let v ∈ E ∖{0} and let f∈ E^*∖{0} be such that f(V) = {0}. Let (γ_n)∼ν^⊗ and let l^∞ = l^∞(γ). We claim that: (l^∞∈𝒩_r(V)) ≤lim inf_n→ +∞(|f γ_n v| < r fγ_nv) Indeed, if l^∞∈𝒩_r(V), then by Theorem <ref>, we construct a random integer n_0 which has finite exponential moment and such that [γ_n v] ∈𝒩_r(V) for all n ≥ n_0. Note also that if [γ_n v] ∈𝒩_r(V), then by Lemma <ref> |f γ_n v| < r fγ_nv. Hence, for all n ∈, we have: (|f γ_n v| < r fγ_nv) ≥(l^∞∈𝒩_r(V)) - (n ≤ n_0). Moreover (n ≤ n_0) n → +∞⟶ 0, hence we have (<ref>). So by Theorem <ref>, we have (<ref>). We said in the introduction that <ref> is an amelioration of a result by Benoist and Quint in <cit.>. We show that the polynomial regularity of the invariant measure in Corollary <ref> is actually optimal. Let E = ^d and let ν := ν_A * ν_K where ν_K is the Haar measure on the (compact) group of isometries O(E) and A is the distribution of the matrix M := [ exp(T) 0 ⋯ 0; 0 1 ⋱ ⋮; ⋮ ⋱ ⋱ 0; 0 ⋯ 0 1 ], Where T is any non-negative, real valued random variable. Then ν is strongly irreducible and proximal and it actually has full support in PGL(E). Then write ξ^∞_ν for the invariant distribution on 𝐏(E) and write e_1 for the first base vector. Then we have ξ^∞ := ν_A * ξ_K for ξ_K the Lebesgue measure on 𝐏(E). Indeed ν_K * ξ = ξ_K for all ξ, by property of the Haar measure. As a consequence, we have for all r≥ 0: 1/d(T ≥log(d) - log(r)) ≤ξ^∞_ν(ℬ(e_1,r)). Indeed a random variable of distribution law ξ_K has first coordinate larger than 1/d with probability at least 1/d. Note also that T = N_*ν and (<ref>) implies that the distribution log_*(e_1,·)_*ξ_∞^ν is at least in the same polynomial integrability class as T. Another interesting question that is asked in <cit.> is whether Corollary <ref> still works if we drop the proximality assumption and replace it by a total strong irreducibility assumption. Indeed theorem <ref> tells us that if we take a distribution ν and write p(ν) its proximality rank p(ν) := minΘ(ν) for Θ(ν) as in Definition <ref>, then we can construct a random limit space of dimension p(ν). Then with the same trick as in the proof of <ref>, we can show that the coefficient w γ_n u is up to an exponentially small error the product of a linear form w' and a vector u' whose norms are controlled in law by the same ζ_ν^C, β. However, the fact that the kernel of w' cuts orthogonally the p(ν)-dimensional limit space that contains u' does not give a lower bound on the product |w'u'|/w'u'. For example in dimension 2, we can take ν to be the law of a random rotation of angle 2^-kπ with probability exp(-exp(exp(k))) for all k ∈ and the identity otherwise. 
Then the random walk (γ_n) is recurrent so if we take w and u such that w u =0, then we almost surely have |w γ_n u | = 0 for infinitely many times n∈. The question remains open if we consider the conditional distribution of log|w γ_n u |/γ_n with respect to the event (w γ_n u ≠ 0) or simply assume that w γ_n u ≠ 0 almost surely and for all n ∈. § PROBABILISTIC TOOLS In this appendix, we will give the proofs of some classical results about sums of independent random variables. We will use the following notations. By probability space, we mean a space Ω endowed with a σ-algebra 𝒜, that is isomorphic to the Borel algebra of a compact metric space, and a probability measure that has no atoms. We call events or measurable subsets of Ω the elements of 𝒜. Let (Ω, 𝒜, ) be such a probability space. Given ℬ a subalgebra of 𝒜, and ϕ an 𝒜-measurable, real valued function such that (ϕ) is well defined (meaning that the positive part or the negative part of ϕ has finite expectation), we define (ϕ | ℬ) as the equivalence class of all ℬ-measurable random variables ψ taking values in ∪{(ϕ)} such that for all B∈ℬ, we have (1_B ϕ) = (1_B ψ) where 1_B is the indicator function of B. If ℬ is the σ-algebra generated by a measurable map γ : Ω→Γ, we also write (ϕ | γ) for (ϕ | ℬ). We call filtration over (Ω, 𝒜) a nested sequence of sub-σ-algebras of 𝒜 a sequence (ℱ_k)_k∈ such that for all k, we have ℱ_k ⊂ℱ_k+1. We will write := _≥ 0 for the set of non-negative integers and given a sequence (w_m)∈^, we define the sequence of partial sums as (w_m)_m∈ := (w_0 +⋯ +w_m-1)_m∈. §.§ About exponential large deviations inequalities In this section, we show that all the probabilistic constructions that we use in the main body of the article preserve the property of having a finite exponential moment. Note however that a product of two random variables that have a finite exponential moment may not have a finite exponential moment. [Distribution of the current step] Let (Ω, ) be a probability space and let (ℱ_k)_k ∈ be a filtration on Ω. Let (w_k)_k∈ be a random sequence of positive integers such that w_k is ℱ_k+1-measurable for all k. For all n ≥ 0, we define r_n := max{r ∈ | w_r_n≤ n}. Let h : →_≥ 0 be a function. Assume that: ∀ t ∈, ∀ k≥ 0, (w_k = t | ℱ_k)≤ h(t). Then we have: ∀ t∈, ∀ n∈, (w_r_n = t)≤ t h(t). Let t and n be two non-negative integers. We have (w_r_n = t) = ∑_r=0^+∞((r=r_n) ∩ (w_r=t)) = ∑_r=0^+∞((n-t ≤w_r < n)∩ (w_r=t)) = ∑_r=0^+∞∑_u = 0^t(w_r = n-u)(w_r = t | w_r = n-u). Probabilities are non negative so the order of summation does not matter. hence: (w_r_n=t) = ∑_u = 0^t∑_r=0^∞(w_r = n-u)(w_r = t | w_r = n-u) For all u∈ and all r ∈, the event (w_r = n-u) is in ℱ_r. So by hypothesis, we have (w_r = t | w_r = n-u)≤ h(t) for all t and all r, u ∈ such that (w_r = n-u) > 0. Moreover, for all u ∈, we have: ∑_r=0^∞(w_r = n-u) = (#{r∈ | w_r = n-u}) ≤ 1, because the map r ↦w_r is almost surely injective. Therefore, we have (w_r_n=t)≤ t h(t), which proves the claim. Let (w_k)_k∈ be a random sequence of positive integers and (ℱ_k) be a filtration such that w_k is ℱ_k+1-measurable for all k. Assume that for some C, β > 0, we have: ∀ t∈, ∀ k≥ 0,(w_k ≥ t | ℱ_k)≤ C exp(-β t). For all n ∈, define the random integer r_n := max{r ∈ | w_r_n≤ n}. Then: ∀ t≥ 0, ∀ n≥ 0, (w_r_n≥ t | ℱ_k) ≤C(1+tβ)/β^2exp(-β t). Define h(t) := Cexp(-β)t for all t ∈. Then we have: ∀ t∈, ∀ k≥ 0,(w_k = t | ℱ_k)≤ h(t). So by Lemma <ref>, for all t,u∈, we have (w_r_n=u) ≤ uh(u). Now let n ∈ and t ∈. 
We have : (w_r_n≥ t) = ∑_u = t^∞(w_r_n=u) ≤∑_u = t^∞ uh(u) ≤∑_u = t^∞ uCexp(-β u) ≤ C exp(-β t) (∑_u = 0^∞ u exp(-β u) + t∑_u = 0^∞exp(-β u)) ≤ C exp(-β t) ( (1/1-exp(-β))^2 + t/1-exp(-β)). Moreover β≥ 1-exp(-β) by convexity. So we have (w_r_n≥ t)≤C(1+tβ)/β^2exp(-β t). We now give a nice formulation of standard large deviations inequalities for sums of random variables. [Sum of random variables that have finite exponential moment] Let (Ω,) be a probability space endowed with a filtration (ℱ_n)_n∈. Let w be an -valued random variable. Let (x_n) be a random sequence of non-negative real numbers. Assume that x_n is (ℱ_n+1)-measurable for all n. Let C,β>0 be non-random constants. Assume that: (exp(β w)) ≤ C ∀ n∈,(exp(β x_n) | ℱ_n)≤ C. Then the random variable x_w := ∑_k=0^w-1x_k has a finite exponential moment in the sense that: ∃ C', β' >0, ∀ t ≥ 0, (x_w ≥ t)≤ C'exp(-β' t). First note that without loss of generality, we may assume that C >1. Fix 0 < < β/log(C). For every j ∈, write x_j:=∑_k=0^j-1x_k. Then for all non-random t ≥ 0, one has: (x_w ≥ t)≤(w≥ t)+(x_⌊ t⌋≥ t). For all j ∈, we define z_j:=exp(βx_j). We claim that (z_j)≤ C^j. Indeed, z_0=1 and for all j ≥ 0, the random variable z_j is ℱ_j measurable. Now by looking at the conditional expectation and using (<ref>), we have: (z_j+1) = ((z_j+1|ℱ_j)) = (z_j(exp(β x_j)|ℱ_j)) ≤ C(z_j). This proves the claim. By Markov's inequality, we have (z_j≥exp(β t))≤C^j/exp(β t) for all j ∈ and for all t > 0. Let t > 0. By the above argument and because the x_n's are non-negative, we have: (x_⌊ t⌋≥ t) ≤ C^⌊ t ⌋exp(-β t) ≤ C^ texp(-β t) = exp(-(β-log(C))t). By Markov's inequality, applied to (<ref>), we have (w≥ t)≤ Cexp(-β t). So if we write β':= β-log(C)>0 and C':=C+1, then we have (Y≥ t)≤ C'exp(-β' t), which proves (<ref>). [Exponential moments approximate the expectation] Let M,σ,C,β>0. For all α<σ, there is a constant β_α > 0, that depends on (M,σ,C,β,α), such that for all random variable x, that satisfies (min{x,M})≥σ and (exp(-β x))≤ C, we have: (exp(-β_α x))≤exp(-β_αα). Let x be a random variable such that (exp(- β x)) ≤ C. Then (x≤ t)≤ Cexp(β t) for all t∈. For all 0 < β' < β and for all m∈ we have: (exp(-β'x)1(x≤ m)) = ∫_exp(-β' m)^+∞(exp(-β'x)≥ t)dt ≤∫_exp(-β' m)^+∞ C t^-β/β'dt ≤ C β'/β-β'exp((β-β')m) =: F(m,β') To all 0< β≤β'/2, we associate the number m_β' := min{0, 2β^-1log(β'β/2C)}. Note that for all 0< β≤β'/2, we have β'-β≥β/2 and m_β≤ 0, hence (β-β')m_β'≤log(β'β/2C) and C β'/β-β'≤2C β'/β. Therefore F(m_β', β') ≤β'^2 for all 0< β≤β'/2, hence F(m_β', β')/β'β'→ 0⟶ 0. Now assume moreover that (min{x,M})≥σ and let m ≤ M. Write x' := max{m, min{x, M}}. Then m ≤ x' ≤ M. Moreover, by convexity, we have for all m ≤ y ≤ M: exp(-β'y) ≤y-m/M-mexp(-β' M)+M-y/M-mexp(-β'm) ≤Mexp(-β' m)-mexp(-β'M)/M-m - exp(-β' m)-exp(-β'M)/M-m y. Note moreover that (x') ≥σ and exp(-β' m)-exp(-β'M)/M-m≥ 0. Hence: (exp(-β x')) ≤Mexp(-β' m)-mexp(-β'M)/M-m - exp(-β' m)-exp(-β'M)/M-mσ=:L(m,β'). Moreover, β'm_β'β'→ 0⟶ 0 and β'Mβ'→ 0⟶ 0 so Mexp(-β' m_β')-m_β'exp(-β'M)/M-m_β'β'→ 0⟶ 1. Moreover exp is derivable at 0 so we have: exp(-β' m_β')-exp(-β'M)/β'(M-m_β')β'→ 0⟶ 1. Hence we have L(m_β',β') -1/σβ'β'→ 0⟶ 1. By the previous arguments, we have L(m_β',β') + F(m_β',β') - 1/β' σβ'→ 0⟶ 1. Then for all α < σ, there exists 0 < β' ≤β/2 such that L(m_β_α,β_α) + F(m_β_α,β_α) - 1/β' σ≥α/σ and therefore L(m_β',β') + F(m_β',β') ≥ 1-αβ', let β_α be the maximal such β'. Then for all α < σ, we have L(m_β_α,β_α) + F(m_β_α,β_α) ≥ 1-αβ_α by continuity of F, m, and L. Let α < σ. 
Then for all random variable x that satisfies the hypothesis of Lemma <ref>, we get: (exp(-β_α x)) ≤ L(m_β_α,β')+F(m_β_α,β_α) ≤ 1 -β_αα≤exp(-β_αα). [Classical large deviations inequalities from below] Let (Ω,) be a probability space endowed with a filtration (ℱ_n)_n∈. Let (x_n)_n∈ be a random sequence of real numbers such that x_n is ℱ_n+1-measurable for all n ∈. Let C,β > 0. Assume that for all n∈, we have (exp(-β x_n) | ℱ_n)≤ C. Let (σ_M)_M ∈ be a non-random, real valued, non-decreasing sequence such that (min{x_n,M} | ℱ_n)≥σ_M for all M,n∈. Write σ := lim_t→ +∞σ_t. Then we have: ∀α<σ, ∃β_α>0, ∀ n∈, (x_n≤α n)≤exp(-β_α n). Let α< σ and let α<α'<α”<σ. Let M∈ be such that (min{x_n,M}|ℱ_n)≥α” for all n∈. Then by Lemma <ref>, there is a constant β'>0 such that for all n∈, we have: (exp(-β' x_n) | ℱ_n)≤exp(-β'α'). Then by induction on n, we have (exp(-β' x_n))≤exp(-nβ'α') for all n∈. Then by Markov's inequality, we get (x_n≤α n)≤exp(-nβ'(α'-α)), which proves the claim for β_α := β' (α'-α). Note that the existence of the sequence (σ_M) such that (min{x_n,M} | ℱ_n)≥σ_M for all M,n∈ is satisfied when (x_n) is i.i.d. and (x_0) ≥σ. However, if we only assumed that (x_n | ℱ_n)≥ 1 for all n, then we may assume that for all n, x_n takes value n^2 with probability n^-2 and 0 otherwise and in this case (∀ n∈, x_n = 0) = ∏_n = 1^∞ (1-n^-2) > 0. Let (x_n)_n be a random independent sequence of real numbers. Assume that there exists β > 0 such that (exp(-β x_0)) < +∞. Then the random sequence (x_n)_n∈ satisfies large deviations inequalities below the speed (x_0). Let β >0 be such that (exp(-β x_0)) < +∞ and let C = (exp(-β x_0)). For all M ∈, let σ_M := (min{x_0,M}). Let (ℱ_n) be the cylinder filtration associated to (x_n). Then σ_M →(x_0) so by Lemma <ref>, the random sequence (x_n)_n∈ satisfies large deviations inequalities below the speed (x_0). [Large deviations inequalities] Let (x_n)_n∈ be a random sequence of real numbers and let σ∈∪{+∞} be a constant. We say that (x_n)_n∈ satisfies large deviations inequalities below the speed σ if we have: ∀α<σ, ∃ C,β>0, ∀ n∈, (x_n≤α n)≤ C exp(-β n). We say that (x_n)_n∈ satisfies large deviations inequalities above the speed -σ if (-x_n)_n∈ satisfies large deviations inequalities below the speed σ. Let (x_n) be a random sequence of real numbers and let σ∈∪{+∞}. Assume that (x_n) satisfies large deviations inequalities below the speed σ. Then by Borel Cantelli's Lemma, we have almost surely lim infx_n/n≥α for all α < σ so lim infx_n/n≥σ almost surely. If we moreover assume that σ is finite and if (x_n) satisfies large deviations inequalities above the same speed σ, then limx_n/n = σ almost surely. [Convenient reformulation of Definition <ref>] We call decreasing exponential function a function of type n ↦ C exp(-β n) with C > 0 and β > 0. Note that saying that a random sequence (x_n) satisfies large deviations inequalities below a speed σ means that for all α < σ, the function n ↦(x_n < α n) is bounded above by a decreasing exponential function. It is equivalent to saying that for all α < σ, the function n ↦(∃ m ≥ n, x_m < α n) is bounded above by a decreasing exponential function. Note also that a sum of finitely many decreasing exponential functions is bounded above by a decreasing exponential function. Now we show that random sequences that satisfy large deviations inequalities behave well under some compositions. Let (Ω, ) be a probability space. Let σ, σ'∈∪{+∞}. 
Let (x_n)_n∈ and (x'_n)_n∈ be two random sequences of real numbers that satisfy large deviations inequalities below the speeds σ and σ' respectively. Let (y_n)_n∈ be a random sequence of real numbers. Let C_y, β_y > 0. Assume that (y_n ≤ -t) ≤ C_y exp(-β_y t) for all n ∈ and all t ≥ 0. Let (k_n)_n∈ be a random non-decreasing sequence of non-negative integers and let κ∈ (0, + ∞). Then: * The shifted sequence (x_n + y_n)_n∈ satisfies large deviations inequalities below the speed σ. * The minimum (min{x_n,x'_n})_n∈ satisfies large deviations inequalities below the speed min{σ,σ'}. * The maximum (max{x_n,x'_n})_n∈ satisfies large deviations inequalities below the speed max{σ,σ'}. * For all λ,λ'≥ 0, the sum (λ x_n+ λ' x'_n)_n∈ satisfies large deviations inequalities below the speed λσ + λ'σ'. * Assume that (k_n)_n∈ satisfies large deviations inequalities below the speed κ. Then the composition (x_k_n)_n∈ satisfies large deviations inequalities below the speed κσ. * Let (r_m)_m∈ be the reciprocal function of (k_n)_n∈, defined by r_m := max{n∈ | k_n≤ m} for all m∈. Assume that (k_n)_n∈ satisfies large deviations inequalities below the speed κ. Then (r_m)_m∈ satisfies large deviations inequalities above the speed κ^-1. We first prove (<ref>). Let α < α' < σ. By assumption, there are two constants C_x, β_x > 0 such that (x_n≤α' n) ≤ C_xexp(-β_x n). Write β:=min{β_x,β_y(α' - α)} and C := C_x +C_y. Then we have: ∀ n∈, (x_n + y_n ≤α n) ≤(x_n ≤α' n)+(y≤(α - α')n) ≤ C_xexp(-β_x n) + C_yexp(-β_y(α'-α) n) ≤ Cexp(-β n). In other words the function n ↦(y_n+x_n≤α n) is bounded by the sum of the functions n ↦(x_n≤α' n) and n ↦(y≤(α-α')n) which are themselves bounded by decreasing exponential functions so their sum also is by Remark <ref>. Now to prove (<ref>) assume that σ≤σ'. This is not restrictive since (σ, (x_n)_n∈) and (σ',(x'_n)_n∈) play symmetric roles in Lemma <ref>. Then for all α < σ, we have: ∀ n∈, (min{x_n, x'_n}≤α n) ≤(x_n ≤α n) + (x'_n ≤α n). Both terms of the sum on the right are bounded above by decreasing exponential functions of n so (min{x_n,x'_n}≤α n) is bounded above by a decreasing exponential function of n. To prove (<ref>), we again assume that σ≤σ'. Then for all α < σ', we have (max{x_n,x'_n}≤α n)≤(x'_n≤α n) and by assumption (x'_n≤α n) is bounded above by a decreasing exponential function of n. Now we prove (<ref>), let α_+ < λσ + λ'σ'. Let α <σ and α'<σ' be such that α_+ = λα + λ'α'. Such α, α' always exist. Now note that: (λ x_n+λ'x'_n≤α_+ n)≤(x_n≤α n)+(x'_n≤α' n) an both terms are bounded by decreasing exponential functions, which proves (<ref>). Now we prove (<ref>). Let α < κσ and let α'<σ and α”<κ be such that α = α'α”. Now let C_x,β_x>0 be such that (x_n≤α' n)≤ C_x exp(-β_x n) for all n∈ and let C_k,β_k be such that (k_n≤α” n)≤ C_kexp(-β_k n) for all n∈. Such C_x, β_x, C_k,β_k exist by assumption. For all n∈, we have: (x_k_n≤α n) ≤(x_k_n≤α' k_n ∩ k_n≥α” n)+(k_n≤α” n) ≤∑_k≥α” n(x_k≤α' k)+(k_n≤α” n) ≤∑_k≥α” nC_xexp(-β_x k)+C_kexp(-β_k n) ≤C_x/β_xexp(-β_x α” n)+C_kexp(-β_k n) ≤(C_x/β_x+C_k)exp(-min{β_xα”,β_k} m_0), Which proves (<ref>). To prove (<ref>), we use a similar method. Let α<κ and C,β>0 be such that (k_n≤α n)≤ Cexp(-β n) for all n∈. Such C, β exist for all α < κ. Then for all m_0∈, we have: (r_m_0≥α^-1 m_0) ≤(∃ m≥ m_0, r_m≥α^-1 m) ≤(∃ m≥ m_0, ∃ n∈, (n≥α^-1 m)∧(k_n≤ m)) ≤(∃ n≥α^-1m_0, k_n≤α n) ≤C/βexp(-βα^-1 n). Now note that for all α'>κ^-1, we have α^-1<κ. 
The above reasoning tells us that for all α'>κ^-1, we have constants C', β'> 0 such that (r_m ≥α'_m) ≤ C'exp(-β m') for all m (namely C' = C/β and β' = βα^-1 with C, β as above for α := α'^-1). §.§ About moments In this section, we prove useful results about sums of random variables that have a finite polynomial moment. Note that a probability distribution η on _≥ 0 is characterized by the right-continuous and non-increasing map t ↦η(t, +∞). [L^p-integrability] Let p∈(0,+∞) and let η be a probability distribution on _≥ 0. We define the strong L^p moment of η as: strong-L^p M_p(η) := ∫_0^+∞t^p-1η(t,+∞)dt. We define the weak L^p moment of η as: weak-L^p W_p(η) := sup_t≥ 0t^pη(t,+∞) < +∞. We say that η is strongly L^p if M_p(η) < + ∞ and we say that η is weakly L^p if W_p(η) < + ∞. [Trunking] Let η be a non-negative measure on _≥ 0 that has finite total mass a non-negative multiple of a probability distribution. We call trunking of η the distribution ⌈η⌉ characterized by: ∀ t≥ 0, ⌈η⌉(t,+∞)=min{1,η'(t,+∞)}. Note that if η has total mass less that one, then ⌈η⌉ = η + (1 - η(_≥ 0)) δ_0. [Push-up] Let η be a probability distribution on ≥ 0 and let B ≥ 0 be a constant. We define the push-up of η by B as the probability distribution B ∨η on _≥ B characterized by: ∀ t ≥ B, (B ∨η)(t, +∞) = η(t, +∞). In other words, for any random variable x ∼η, we have max{x,B}∼ B ∨η. [Coarse convolution] Let η be a probability distribution on _≥ 0 and let k≥ 1 be an integer. We define the coarse convolution η^↑ k as: ∀ t ≥ 0, η^↑ k(t,+∞) := min{1, k η(t/k, +∞)}. Let k≥ 1 be an integer, let η be a probability distribution on _≥ 0 and let x_1,…,x_k be random variables such that: ∀ t ≥ 0, ∀ i∈{1,…,k}, (x_i >t)≤η(t,+∞). Then we have: ∀ t≥ 0, (x_1 + … + x_k > t)≤η^↑ k(t,+∞). Let t≥ 0. We have: (x_1+⋯+ x_k >t) ≤(∃ i∈{1,…,k}, x_i > t/k) ≤ k η(t/k,+∞). Let η be a probability distribution on _≥ 0 and let k ∈_≥ 1. We have: W_p(η^↑ k) ≤ k^p+1 W_p(η) M_p(η^↑ k) ≤ k^p+1 M_p(η) For the weak moment, we have: W_p(η^↑ k) = max t^pη^↑ k(t,+∞) ≤max t^p k η(t/k,+∞) ≤max (kt')^p k η(t',+∞) ≤ k^p+1 W_p(η). This proves (<ref>). For the strong moment, by integration by parts, we have: M_p(η^↑ k) = ∫_0^+∞t^p-1η^↑ k(t,+∞)dt ≤∫_0^+∞ t^p-1 k η(t/k,+∞)dt. Then by the linear change of integration variable t = ku, we have: ∫ t^p-1 k η(t/k,+∞)dt = ∫ u^p-1 k^p+1η(u,+∞)du = k^p+1M_p(η). This proves (<ref>). Let n be a random integer and let x_1,…, x_n be non-negative real random variables. Let B,C_1 be such that: ∀ t≥ B, ∀ k∈, ∀ m ≤ k, (x_m≥ t | n = k)≤ C_1 η(t). Let C_2,β>0 be such that: ∀ k∈, (n=k)≤ C_2exp(-β k). Then for C := C_1 C_2, we have: ∀ t≥ 0, (x_1 + ⋯ + x_n > t)≤∑_k=0^∞ Cexp(-β k)(B ∨η)^↑ k. First note that (<ref>) with Definition <ref> and Lemma <ref> implies that for all k∈, we have: ∀ t≥ 0, (x_1 + ⋯ + x_k > t | n=k)≤ C_1 (B ∨η)^↑ k(t,+∞). We do the computation, for all t≥ 0: (x_1 + ⋯ + x_n > t) = ∑_k=0^∞( n = k )( x_1 + ⋯ + x_k > t | n = k ) ≤∑_k=0^∞ C_2 exp(-β k) C_1 (B ∨η)^↑ k(t,+∞). Let η be a non-trivial probability distribution on _≥ 0 η≠δ_0. Then, for all B, C, D, β>0, there are constants C_0,β_0 such that for all t > 0, we have: ∑_k=0^∞ Cexp(-β k)(B ∨η)^↑ k(t - D,+∞)≤∑_k=0^∞ C_0exp(-β_0 k)η(t/k,+∞). Note that (B ∨η)^↑ k(t - D,+∞)≤ k η(t/k-B - D,+∞) for all t, B, D and for all k > 0. Let B' = B + D. Note that for all β”<β, we have lim_k kexp((β”-β) k) = 0 and exp((β-β”) k)≥ 1 so for C” large enough, we have kexp(-β k)≤ C”exp(-β” k) for all k. Take such a β”>0 and such a C”. 
Now we re-index the sum by taking k' = 2k and write β' := β”/2 and C' := C C”, then we have:
∑_k=0^∞ C exp(-β k)(B ∨η)^↑ k(t - D,+∞) ≤∑_k=0^∞ C C” exp(-β” k)η(t/k-B',+∞)
≤∑_k'=0^∞ C' exp(-β' k')η(2t / k'-B',+∞)
≤∑_k'= 0 ^⌈ t/B' ⌉ - 1 C' exp(-β' k')η(t / k',+∞) + ∑_k'= ⌈ t/B' ⌉^∞ C' exp(-β' k')
≤∑_k'= 0 ^+ ∞ C' exp(-β' k')η(t / k',+∞) + C'/β' exp(-β' t/ B').
Now we use the fact that η≠δ_0 and take a>0 such that η(a,+∞)>0. Then for all t > 0, all C_0 > 0 and all β_0 > 0, we have:
∑_k= 0 ^+ ∞ C_0 exp(-β_0 k)η(t / k,+∞)≥ C_0 exp(-β_0⌈ t / a ⌉) η(t/⌈ t / a ⌉,+∞) ≥ C_0 exp(-β_0(t / a +1))η(a,+∞).
Let 0 < β_0 ≤β' be small enough, so that β_0⌈ t / a ⌉≤β' t/ B' + log(2) for all t ≥ 0. Then for all C_0 > 0, we have:
C'/β' exp(-β' t/ B')≤(2C'/β' C_0η(a,+∞))∑_k = 0 ^+ ∞ C_0 exp(-β_0 k)η(t / k, +∞).
Let C_0 > 0 be large enough so that C'/C_0+2C'/β' C_0η(a,+∞)≤ 1. Then we have:
∑_k'= 0 ^+ ∞ C' exp(-β' k')η(t / k',+∞) + C'/β' exp(-β' t/ B')
≤(C'/C_0+2C'/β' C_0η(a,+∞))∑_k = 0 ^+ ∞ C_0 exp(-β_0 k)η(t / k, +∞)
≤∑_k = 0 ^+ ∞ C_0 exp(-β_0 k)η(t / k, +∞).
Hence:
∑_k=0^∞ C exp(-β k)(B ∨η)^↑ k(t - D,+∞)≤∑_k = 0 ^+ ∞ C_0 exp(-β_0 k)η(t / k, +∞).
Let η and κ be probability distributions on ℝ_≥ 0. Let C,β>0 be constants. Assume that for all t > 0, we have:
κ(t, +∞) ≤∑_k=0^∞ C exp(-β k)η(t/k, +∞).
Let p∈ℝ_>0. Assume that η is strongly or weakly L^p; then κ also is and we have:
M_p(κ) ≤ M_p(η) ∑_k=0^∞ C exp(-β k) k^p
W_p(κ) ≤ W_p(η) ∑_k=0^∞ C exp(-β k) k^p.
Let p >0. We claim that M_p(κ)≤∑_k=0^∞ C exp(-β k) k^p M_p(η), which is finite when M_p(η) is. To prove that claim, we simply compute the moments, using the fact that all the quantities we look at are non-negative:
M_p(κ) = ∫_0^∞ t^p-1κ(t,+∞) dt
≤∫_0^∞ t^p-1∑_k=0^∞ C exp(-β k)η(t/k, +∞) dt
≤∑_k=0^∞ C exp(-β k)∫_0^∞ t^p-1η(t/k, +∞) dt
≤∑_k=0^∞ C exp(-β k)∫_0^∞ (ku)^p-1η(u, +∞) k du
≤∑_k=0^∞ C exp(-β k) k^p M_p(η).
This proves the claim. We now claim that W_p(κ)≤∑_k=0^∞ C exp(-β k) k^p W_p(η). For that claim, we do the same computation:
W_p(κ) = sup_t > 0 t^pκ(t,+∞)
≤sup_t > 0( t^p∑_k=0^∞ C exp(-β k)η(t/k, +∞))
≤∑_k=0^∞ C exp(-β k)sup_t > 0 t^pη(t/k, +∞)
≤∑_k=0^∞ C exp(-β k)sup_u > 0 (ku)^pη(u, +∞)
≤∑_k=0^∞ C exp(-β k) k^p W_p(η).
This proves the claim, which concludes the proof of Lemma <ref>.
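The objects introduced above lend themselves to a direct numerical illustration. The following sketch is an illustration only; the Pareto tail used for η is an arbitrary choice, not taken from the text. It computes the coarse convolution η^↑ k on a grid, checks the weak moment bound W_p(η^↑ k) ≤ k^p+1 W_p(η), and compares the tail of an actual sum of k i.i.d. variables with the bound of the coarse convolution lemma above.

import numpy as np

rng = np.random.default_rng(0)

# Illustration of the coarse convolution and of the bounds proved above:
#   P(x_1 + ... + x_k > t) <= eta^{up k}(t, +oo)   and   W_p(eta^{up k}) <= k^(p+1) W_p(eta).
# The tail eta(t, +oo) = min(1, t^-3) is an illustrative choice.
p, k = 2.0, 5
tail = lambda t: np.minimum(1.0, np.asarray(t, dtype=float) ** -3.0)      # eta(t, +oo)

ts = np.linspace(0.01, 200.0, 20_000)
tail_k = np.minimum(1.0, k * tail(ts / k))                                 # eta^{up k}(t, +oo)
W_eta, W_eta_k = np.max(ts ** p * tail(ts)), np.max(ts ** p * tail_k)
print(f"W_p(eta^k) ~ {W_eta_k:.1f}  <=  k^(p+1) W_p(eta) ~ {k ** (p + 1) * W_eta:.1f}")

# Monte Carlo tail of a sum of k i.i.d. variables with tail eta, against eta^{up k}
x = rng.pareto(3.0, size=(200_000, k)) + 1.0        # P(x_i > t) = t^-3 for t >= 1
s = x.sum(axis=1)
for t in (10.0, 20.0, 40.0):
    print(f"P(sum > {t:4.0f}) ~ {np.mean(s > t):.2e}   bound {min(1.0, k * (t / k) ** -3.0):.2e}")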
http://arxiv.org/abs/2408.11452v1
20240821091439
A topological contribution to Bogoliubov coefficient for cosmological particle production
[ "Daniel J. H. Chung", "Nidhi Sudhir" ]
hep-th
[ "hep-th", "gr-qc", "hep-ph" ]
danielchung@wisc.edu Department of Physics, University of Wisconsin-Madison, Madison, WI 53706, USA kandathpatin@wisc.edu Department of Physics, University of Wisconsin-Madison, Madison, WI 53706, USA § ABSTRACT Particle production in cosmology is often efficiently computed in terms of Bogoliubov transforms. Restricting to a particular class of dispersion relationships, we identify a map between the number of particles produced in a special kinematic limit and a Stokes phenomena related topology of analytic continuation of the Bogoliubov coefficient functions. Intuitively, this kinematic limit corresponds to the long wavelength limit although a more precise description depends on the nature of the curved spacetime. To identify the topology, we reformulate the usual Bogoliubov computations as a type of SU(1,1) gauged differential equation and utilize a special gauge together with a discrete symmetry that naturally characterizes the dispersion relationship. Using a dark matter model and a nonzero constant spatial curvature model, we estimate how such topological contributions will arise in physical applications. A topological contribution to Bogoliubov coefficient for cosmological particle production Nidhi Sudhir ========================================================================================= § INTRODUCTION Stokes phenomena (see e.g. <cit.>) is mathematically striking because it leads to abrupt changes in the coefficients of analytic continuations of asymptotic expansions, such as those used in computing particle production through Bogoliubov transformations. Relatively recently, there has been some interest in applying Stokes phenomena in particle production in cosmology <cit.>.[For earlier applications to dS space see for example <cit.>. Also many other usage of Stokes phenomena to particle production exists (for a sample of recent work, see e.g. <cit.>).] These previous works focused mostly on providing a way to approximate the large momentum k limit of the Bogoliubov coefficients β_k characterizing the spectrum of particles produced. In this article, we point out that in situations where the produced particle dispersion relationship squared has the form ω^2(η)=C+Aη^n as a function of time η for positive even integer n, we can relate the C→0 limit (intuitively the small momentum limit) of the magnitude of the Bogoliubov coefficient |β_k| to the number of Stokes sectors each of which supports approximately constant asymptotic WKB expansion coefficients.[We restrict ourselves to the even n because that corresponds to adiabatic propagating particle vacua at η=±∞ in contrast with odd n cases where there is an exponentially decaying particle wave function in one of the end regions.] Since each of these Stokes sectors represent a region of functional continuity (i.e. approximately constant asymptotic expansion coefficients) and their number is insensitive to variations in A and C, the number of Stokes sectors can be viewed as a topological index characterizing the analytic continuation of the Bogoliubov coefficients which are functions of time. In order to identify this topology, we reformulate the usual Bogoliubov computation in terms of an SU(1,1) gauged differential equation. This will help us to use a technique from the math literature <cit.> and utilize an apparent discrete symmetry to compute the topological index. In physical applications, there will often be piecewise time regions where the topological contribution will be relevant. 
We present couple of such cosmological scenarios, where in one of the scenarios, C→0 is achieved by taking k→0 and in the other C→0 is achieved by canceling k^2 against constant spatial curvature in an Friedmann–Lemaître–Robertson–Walker (FLRW) spacetime. We can interpret the first scenario as a dark matter model embedded in an inflationary cosmology, but the topological contribution will be shown to be suppressed due to the phase space vanishing as k→0. In the second scenario, the topological contribution can be significant because of the fact that k^2 can be as large as the tunable background potential generated spatial curvature. It is interesting to note that there is an analogy of the Stokes sector number to the Chern-Simons number carried by anomalous currents <cit.>. Both the gauge field description and the Bogoliubov analytic continuation description employed in this paper are fictitious. The Chern-Simons number change can be related to a physical charge production induced by an instanton induced vacuum to vacuum transition. The Bogoliubov coefficient related number of particles produced can also be viewed as arising from a vacuum to vacuum transition. The Chern-Simons number describes a topological characterization of the gauge field while the Stokes sector number describes a topological character of the analytically continued field mode function. It is also interesting that unlike in typical steepest descent computations, the topological number (partly owing to their integer nature) produces a large Bogoliubov coefficient magnitude |β_k|. From a mathematical side, one of what we have identified can be viewed as a novel identity of a path ordered matrix integral which we will make explicit. The topological nature can also be viewed as a special conformal property of the Bessel functions together with their being solutions to the mode equation in the special kinematic limit. The order of presentation is as follows. In Sec. <ref>, we define the class of models, standard particle production through Bogoliubov transforms, and then define Stokes lines in this context. In Sec. <ref>, we show that the SU(1,1) based complexification of the particle production in a first order formalism naturally has a gauge symmetry. We will define the F^2-gauge and the 0-gauge in this section. In Sec. <ref>, we show that a certain soft limit of the Bogoliubov coefficient in this class of models considered in this paper corresponds to measuring the Stokes sector topology (defined explicitly in this section). In Sec. <ref>, we show how the topology can also be viewed as a particular property of Bessel function mode functions. The physical embedding of the topological contributions is investigated in Sec. <ref>. We then conclude with a summary. Appendix <ref> gives the details of the 1-loop correction to the tree-level potential and the dark matter abundance computation used in Sec. <ref>. Appendix <ref> gives the details of the discrete symmetry representation used in the paper. § PARTICLE PRODUCTION SCENARIO Consider a non-minimally coupled scalar field χ on flat FLRW spacetime S_χ= 1/2∫dη d^3x √(-g)[∂_μχ∂^μχ-m_ϕ^2(η)χ^2+ξ Rχ^2] ds^2=a^2(η)(dη^2-|dx⃗|^2). Expanding the χ field as χ(η)=∫d^3k/(2π)^3a(η)(a_kχ_k(η)e^ik⃗·x⃗+h.c.) the Heisenberg equation of motion (E.O.M) yields the mode equation χ_k”(η)+ω^2(η)χ_k(η)=0 ω^2(η)=k^2+a^2m_χ^2(η)+(6ξ-1)a”/a where ξ=1/6 corresponds to the conformal coupling and m_χ^2(η) is an effective time-dependent mass that can arise from couplings to other fields. 
For example, if there exists another scalar field ϕ which couples through the interaction ℒ_I=g/2Λ^2ϕ^4χ^2 then if the ϕ has a time-dependent background ϕ(η), the effective mass term contribution to m_χ^2(η) would be gϕ^4(η)/Λ^2. Whenever this mode frequency becomes time-dependent, some particle production occurs because time-translation symmetry is broken. That usually leads to an ambiguity in the choice of the vacuum. One popular formalism to construct a vacuum is the adiabatic vacuum <cit.> relying on the WKB formalism. An adiabaticity parameter δ_k can be defined as δ_k(η)≡ω'(η)/4ω^2(η) which can be seen as counting the adiabatic order defined according to the formal multiplication of every time derivative on ω with 1/T and the convention T→∞. For example δ_k(η) defined above is an adiabatic order 1 quantity while ω”/ω^3 is an adiabatic order 2 quantity. Adiabatic time region is defined as when this formal adiabatic limit is an approximate description of the time dependences of the frequencies. Let χ_k,1(η) and χ_k,2(η) be two solutions of the mode equation satisfying the following adiabatic quantization boundary conditions χ_k,1(η)≈exp(-i∫_η_-∞^ηdη'ω(η'))/√(2ω(η))η→η_-∞ and χ_k,2(η)≈exp(-i∫_η_-∞^ηdη'ω(η'))/√(2ω(η))η→η_∞ where η_±∞ correspond to time values in the past and future adiabatic regions.[In practice, we define η_±∞ to be times when the nonadiabaticities are sufficiently small for the desired accuracy of the computation.] Because of the completeness of the basis of two independent solutions of a second order differential equations, we can define two constant coefficients {α_k(η_∞),β_k(η_∞)} and write χ_k,1(η)=α_k(η_∞)χ_k,2(η)+β_k(η_∞)χ_k,2^*(η). Later, when we define a time-dependent α_k(η) and β_k(η), these constants will turn into boundary values of the time-dependent functions. As is well known, these time-dependent coefficients asymptote to constants as ω'/ω^2→0. At all such adiabatic time periods, canonical quantization implies |α_k(η_±∞)|^2-|β_k(η_±∞)|^2=1 and later we will define α_k(η) and β_k(η) functions such that this SU(1,1) normalization is always maintained. Particle production between the time η_-∞ and η_∞ can be written in terms of β_k=(χ_k,1,χ_k,2^*)=-i(χ_k_1∂_ηχ_k,2-χ_k,2∂_ηχ_k,1) where the number density in the comoving volume is n=1/a^3∫d^3k/(2π)^3|β_k|^2. These are thus far completely standard and well known. In this paper, we focus on physical situations in which ω^2 can be approximated as ω^2≈ k^2+A(η-η_0)^n+B in the interval (η_-∞,η_∞) containing η_0, with n a positive even integer and constants A and B. For example, with conformal coupling, the dispersion relationship Eq. (<ref>) becomes ω^2(η)=k^2+a^2(η)m_χ^2(η) and in situations where m_χ^2(η)∝ϕ^q(η) dominates the time-dependence (i.e. a(η) time-dependence being subdominant) and goes through a zero, the approximate dispersion relationship of Eq. (<ref>) can be achieved with an appropriate choice of the potential governing the homogeneous ϕ dynamics. We will explicitly apply our general formalism to such a scenario in Sec. <ref>. We can also find situations where the dispersion relationship is approximately Eq. (<ref>) in a much larger time range in very special cosmological periods. For example, with minimal gravitational coupling and an engineered potential for the background cosmology driving scalar field ϕ coupled to χ, one can obtain dispersion relationships of the form ω^2(η)=k^2+a^2f(ϕ)-a”/a for which k^2 term cancels a constant a”/a term to yield Eq. (<ref>). This is studied in Sec. 
<ref> and demonstrates that this special kinematic point corresponding to the topological description that we define need not correspond to the ultra-IR. For these scenarios, we will find that with k^2+B=0 the β_k coefficient has a simple relationship with the number of asymptotic expansion sectors. By asymptotic expansion sectors, we mean the number of contiguous regions in the analytically continued η plane coordinated by z where the WKB basis functions have either a uniform exponential suppression or a divergence in the asymptotic radial limit. The boundaries of these regions can be defined as anti-Stokes lines coming from the study of Stokes phenomena. Stokes phenomena occurs when the basis of an asymptotic expansion has an analytic property that is mismatched with the analytic property of the function that it resolves. For example suppose the analytic continuation of χ_k(η)→χ_k(z) with z∈ℂ is an entire function. Because of the approximate Lorentz group representation properties reflected in the adiabatic boundary conditions above, the mode function χ_k is decomposed in terms of WKB basis functions as χ_k(z)=α_k(z)exp(-iθ(z,z_0))/√(2ω(z))+β_k(z)exp(iθ(z,z_0))/√(2ω(z)) θ(z,z_0)≡∫_z_0^zdz'ω(z') where ω^2(z) is an analytic function of z causing a branch cut to appear in the WKB basis functions exp(± iθ(z,z_0))/√(2ω(z)). The curve z(s) where iθ(z(s),z_0) is real (imaginary) is called (an)a (anti-)Stokes line.[Here s is parameterizes the curve. We will later discuss the subtleties of the branch cut resolution when k^2+B→0 limit is taken.] The Stokes lines are further classified as “+”(“-”) type depending upon whether iθ(z,z_0) increases (decreases) as |z(s)-z_0| is increasing. Note that on the anti-Stokes lines, the magnitudes of each basis functions are equal while on the Stokes lines, the ratio of the basis functions can have a large hierarchy. This means when the function χ_k(z) is evaluated across a Stokes line, the coefficient of the suppressed basis function can shift by a large number without violating the smooth behavior of χ_k. In fact, with an exponential suppression, the suppressed basis function has a representation of exactly zero in the asymptotic expansion. This shifted coefficient of the (exponentially) suppressed basis function can then become important in the asymptotic expansion once an anti-Stokes line is crossed which exchanges the roles of suppressed and unsupressed basis functions before crossing. This then leads to a shift in the asymptotic expansion representation. Such a shift in the coefficients {α_k,β_k} is called a Stokes phenomena. Our aim in this paper is to show that β_k in this special kinematic limit (k^2+B→0) in a class of special models may be dominated by topological information associated with the counting of the Stokes sectors in the dispersion relationship. To demonstrate this, we utilize the F-matrix formalism developed by <cit.>. Furthermore, we show how this F-matrix formalism is related to the formalism used in the conventional cosmological literature by constructing a unified matrix formalism which connects different formalisms through a gauge transformation. It is important to note that the unified formalism that we develop below is independent of Eq. (<ref>). However, to our current knowledge, the greatest utility of the unified formalism will be to elucidate the topological nature of the Bogoliubov transform in the special kinematic limit of interest in this paper. 
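As a concrete point of reference for the formalism developed below, the special kinematic limit can also be probed by brute-force integration of the mode equation on the real line, i.e. the standard approach of the cosmology literature rather than the F-matrix method. The following Python sketch assumes the illustrative values A = 1, k = 10^-3 and a finite window η∈[-6,6]; it evolves positive-frequency WKB data through ω^2 = k^2 + Aη^4 and projects back onto the WKB basis at late time. As k/A^1/6→0 the result should approach the topological value |β|^2 = 3 (i.e. cot^2(π/6)) obtained for n = 4 later in the paper.

import numpy as np
from scipy.integrate import solve_ivp

# Direct numerical estimate of |beta_k|^2 for omega^2 = k^2 + A eta^4 (n = 4).
# Parameter values and the integration window are illustrative assumptions.
A, k = 1.0, 1e-3
omega = lambda eta: np.sqrt(k ** 2 + A * eta ** 4)

def rhs(eta, y):                      # y = (Re chi, Im chi, Re chi', Im chi')
    chi = y[0] + 1j * y[1]
    dchi = y[2] + 1j * y[3]
    ddchi = -omega(eta) ** 2 * chi
    return [dchi.real, dchi.imag, ddchi.real, ddchi.imag]

eta_i, eta_f = -6.0, 6.0
w_i = omega(eta_i)
chi0 = 1.0 / np.sqrt(2.0 * w_i)       # positive-frequency WKB data in the past
dchi0 = -1j * w_i * chi0
sol = solve_ivp(rhs, (eta_i, eta_f), [chi0, 0.0, dchi0.real, dchi0.imag],
                rtol=1e-10, atol=1e-12, max_step=0.01)

chi_f = sol.y[0, -1] + 1j * sol.y[1, -1]
dchi_f = sol.y[2, -1] + 1j * sol.y[3, -1]
w_f = omega(eta_f)
alpha2 = abs(w_f * chi_f + 1j * dchi_f) ** 2 / (2.0 * w_f)   # |alpha_k|^2
beta2 = abs(w_f * chi_f - 1j * dchi_f) ** 2 / (2.0 * w_f)    # |beta_k|^2
print(f"|beta|^2 ~ {beta2:.3f} (expected ~ 3),   |alpha|^2 - |beta|^2 ~ {alpha2 - beta2:.3f}")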
One other byproduct of the gauge transformation formalism will be a derivation of a novel integral identity Eq. (<ref>). § GAUGE PICTURE Because the mode equations are second order ordinary differential equations (ODEs), the equations can be rewritten as a first order vector ODE as is typical in Hamiltonian dynamics. This will help us to formulate a gauged set of ODEs that will allow us to elucidate the relationship between the usual parameterization found in typical physics literature (see e.g. <cit.>) and a parameterization that is useful to obtain bounds on the propagator matrices as we will explain below. As a first step, we write the first order formulation using the ansatz ∂_ηV_k(η)=M(η)V_k(η) where V_k specifies the mode functions χ_k through a projection onto a WKB basis F_k(η), i.e. χ_k(η)=α_k(η)f_-(k,η)+β_k(η)f_+(k,η)=F_k(η)· V_k(η) where f_±(k,η)=exp(± i∫_η_(*)^ηdη'ω(η'))/√(2ω(η))F_k(η)=(f_-(k,η),f_+(k,η))V_k(η)=(α_k(η),β_k(η)). Here, η_(*) is the origin with respect to which the WKB modes are defined and is taken as a general point on the real line. The Bogoliubov coefficients defined in Eqs.(<ref>) and (<ref>) correspond to taking η_(*)=η_-∞. For dispersion relations which are positive definite on the real line, as the ones we will be interested in here, a general choice of η_(*) results in a phase shift in the functions α_k(η)and β_k(η). We keep this general since taking η_(*)=0 will become convenient in sections <ref> for symmetry reasons. To take advantage of the WKB solutions f_±(k,η) being an approximate solution to the mode equations in the adiabatic region, we impose the condition M(η)=O(δ_k) in the adiabatic region. Next, to specify M(η), note that it has 8 real functional degrees of freedom. Since we only require 2 real functional degrees of freedom to match a general mode function, we have freedom to restrict M(η). We choose M(η)∈Lie Algebra of su(1,1) parameterized as M(η)=[ iM_1(η) M_2^*(η); M_2(η) -iM_1(η) ] where M_1(η) and M_2(η) are real and complex valued functions, respectively. This ensures |α_k(η)|^2-|β_k(η)|^2=1 for all values of η, including Eq.(<ref>) at the boundaries η_±∞. Furthermore, this choice implies the existence of one real functional gauge degree of freedom in specifying M(η) (as made explicit below). The dynamical information governing M(η) is provided by the mode equation χ_k”(η)+ω^2(η)χ_k(η)=0 where ω^2(η)=k^2+a^2m^2+(6ξ-1)a”/a written in conformal time coordinates ds^2=a^2(η)(dη^2-|dx⃗|^2). The mode equation in terms of M(η) is then F_k(η){ M'+(3ω'^2/4ω^2-ω”/2ω)𝕀_2×2+2[ -iω-ω'/2ω 0; 0 +iω-ω'/2ω ]M+M^2} V_k(η)=0. Since the above should be satisfied for arbitrary values of V_k(η), corresponding to arbitrary boundary conditions, this equation implies F_k(η){ M'+(3ω'^2/4ω^2-ω”/2ω)I_2×2+2[ -iω-ω'/2ω 0; 0 +iω-ω'/2ω ]M+M^2} =0. Since one of the two complex equations here is the conjugate of the other, this equation provides two real constraints on the 3 real components of M(η). Eq. (<ref>) implies that for physical applications, we are interested in the solutions of Eq. (<ref>) with vanishing boundary conditions in the adiabatic region. At this point, M has been specified up to the gauge transformations that we discussed below Eq. (<ref>). Let's construct the gauge transform explicitly. 
Let V̅(η)=(α̅(η),β̅(η)) and Ṽ(η)=(α̃(η),β̃(η)) be related by a gauge transform (where we suppress the k subscript for brevity) Ṽ(η)=T(η)V̅(η) where the matrix T(η) belongs to a group element of the vector representation of SU(1,1) T=[ T_1 T_2; T_2^* T_1^* ] with |T_1|^2-|T_2|^2=1. Since T(η) should preserve Eq. (<ref>) and leave the vectors V̅(η) and Ṽ(η) unchanged in adiabatic regions, the matrix components should be of the form T_1=1+g_1(δ) andg_2(δ) where g_i(δ) are functions of adiabatic order at least 1. Now, since χ_k remains invariant under such a gauge transformation χ(η)=F(η)V̅(η)=F(η)T(η)V̅(η) this implies, for arbitrary values of V̅(η) F(η)[𝕀_2×2-T(η)]=0. The above constraint along with (T)=1 implies g_1(δ)=ig(δ), g_2(δ)=ig(δ)exp(+2i∫_η_(*)^ηω) with g(δ) real valued function of adiabatic order of at least unity, allowing us to conclude T(η)=[ 1+ig(δ) ig(δ)exp(2i∫_η_(*)^ηdη'ω(η')); -ig(δ)exp(-2i∫_η_(*)^ηdη'ω(η')) 1-ig(δ) ]. Given our real functional gauge degree of freedom, we can choose a gauge M̅_1(η)=0 to solve Eq. (<ref>) f_-(k,η)(3ω'^2/4ω^2-ω”/2ω+|M̅_2|^2)+f_+(k,η)((2iω-ω'/ω)M̅_2+M̅'_2)=0 and c.c. for M̅_2. Combining these two equations give d/dt(f_+^2M̅_2)=d/dt(f_-^2M̅_2^*) whose solution is f_+^2M̅_2=f_-^2M̅_2^*+C and the unit not unit under absolute value modular nature of √(2ω)f_± on the real line allows us to set C=0. Substituting this into Eq.(<ref>) and solving for M̅_2(η) gives M_0(η)≡M̅(η)=ω'/2ω[ 0 exp(2i∫_η_(*)^ηdη'ω(η')); exp(-2i∫_η_(*)^ηdη'ω(η')) 0 ] which is what we will call the 0-gauge. The explicit gauge transformation of M can then be obtained from the invariance of Eq.(<ref>) as M→ TMT^-1-T∂_ηT^-1. Given the above solution, the most general M_g(η) which satisfies Eq.(<ref>) can be obtained as a general gauge transform of M̅(η) M_g(η) =[-T(η)∂_ηT^-1(η)+T(η)M̅(η)T^-1(η)] =[ iγ_1(η) e^+2i∫_η_(*)^ηdη' ω(η')(γ_2(η)+iγ_1(η)); e^-2i∫_η_(*)^ηdη' ω(η')(γ_2(η)-iγ_1(η)) -iγ_1(η) ] where γ_1(η)=(-g(δ)+∂_ηg(δ)/2g(δ)ω+2δ)2g(δ)ω,γ_2(η)=(-g(δ)+δ)2ω and δ(η)≡ω'(η)/4ω^2(η). This together with Eq. (<ref>) represents the first order formulation of any second order ODE of the mode function type with a SU(1,1) basis choice and adiabatic boundary conditions. A particularly convenient gauge is something motivated by a mathematical formalism called the F-matrix formalism <cit.> which in our present gauge theory language corresponds to choosing g(δ)=δ leading to M_F(η)=iϵ_r(η)ω(η)/2[ -1 -exp(+2i∫_η_(*)^ηdη' ω(η')); exp(-2i∫_η_(*)^ηdη' ω(η')) 1 ] where ϵ_r(η)≡3ω'(η)^2/4ω^4(η)-ω”(η)/2ω^3(η). We will call this the Fröman-Fröman gauge or the F^2-gauge. It is interesting to note that M_F=0. To summarize this section, we have constructed a gauged first order ODE formalism (Eqs. (<ref>) and (<ref>) where M=M_g for a gauge choice g) of computing Bogoliubov transformations where the usual formalism found in the literature <cit.> is the 0-gauge obtained with Eq. (<ref>) with g(δ)=0 and the F-matrix formalism found in <cit.> is obtained with g(δ)=δ . § TOPOLOGICAL CONTRIBUTION TO PARTICLE PRODUCTION In the following two subsections, we review the mathematical ideas of <cit.> to first make an asymptotic expansion constraint for propagator matrices between two adjacent anti-Stokes lines- and across a single Stokes line. In section <ref>, we will see that these propagator matrices may be expressed in terms of a perturbative parameter μ and one of the cross diagonal elements. 
This constraint on the asymptotic properties are based on analyticity, matrix integral bounds, and properties of multiplying low dimensional matrices. While this indicates an underlying structure in μ, it is insufficient to determine any matrix element without bounds on the relevant cross diagonal matrix element. This is mitigated in section <ref> for a class of models with dispersion relations ω^2∼ Az^n, for which a ℤ_n+2 phase rotation symmetry in z of the differential equation satisfied by χ(z) bounds and fixes the relevant cross diagonal element. We then use the derived propagator matrices to compute β_k in the kinematic limit described at the end of Sec. <ref> and show that it has a simple relationship to the topological index counting the number of asymptotic expansion sectors. §.§ Reduction of d.o.f in F^2-gauge In this section, we use the logic of <cit.> to restrict the form of the propagator matrix in the F^2-gauge. As we will see, gauges (such as the 0-gauge) in which M_g is insufficiently suppressed will not allow us to make similar restrictions. After analytic continuation of the conformal time η to the complex plane parameterized by z, the integrated version of the propagation Eq. (<ref>) is V_k(z_1)=U(z_1,z_0)V_k(z_0) where U(z_1,z_0)≡ℙ[𝕀_2×2 exp(∫_Γ(z_0,z_1)dz' M(z'))] which relates the values taken by V_k(z) at two different points z_0 and z_1. Even though U is a functional of the curve Γ, because we will mostly be interested in circular arcs, we will suppress the Γ dependence in the notation. Whenever we evaluate this in a particular gauge, we will put a subscript such as U_F (Eq. (<ref>)) or U_0 (Eq. (<ref>)). Since M(z) is proportional to δ(z) or it's derivatives (Eq.(<ref>)) in all gauges, we may expect a simplification in the form of U_g(z_1,z_0) in the adiabatic limit. The propagator matrix U_g(z_1,z_0) =ℙ(𝕀_2×2exp(∫_z_0,Γ^z_1dz' M_g(z'))) =𝕀_2×2+∫_z_0,Γ^z_1dz_1̅ M_g(z_1̅) +∫_z_0,Γ^z_1dz_1̅ M_g(z_1̅)∫_z_0,Γ^z_1̅dz_2̅ M_g(z_2̅) +∫_z_0,Γ^z_1dz_1̅ M_g(z_1̅)∫_z_0,Γ^z_1̅dz_2̅ M_g(z_2̅)∫_z_0,Γ^z_2̅dz_3̅ M_g(z_3̅)+... involves nested, oscillatory integrals which are difficult to estimate on the complex plane. Moreover in a general gauge choice, all terms in the above series make comparable contributions to the sum, adding to the difficulty in estimating U_g(z_1,z_0). The F^2-gauge Eq.(<ref>) gives a better power series expansion in Eq. (<ref>) through its proportionality to a second order adiabatic parameter M_F(z)∝ϵ_r(z): i.e. ∫_z_0,Γ^z_1dz_a M_F(z_a) is still δ^1 suppressed unlike ∫_z_0,Γ^z_1dz_a M_0(z_a) which is δ^0 suppressed. For propagator matrices between points connected by a path Γ(z_1,z_0) along which |exp(i∫_z_(*)^zdz' ω(z'))| increases montonically (here z_(*) is the origin with respect to which the WKB basis functions are defined in the complex plane), the infinite sum in Eq.(<ref>) may be understood as a perturbative series in terms of μ(z,z_0)=∫_z_0,Γ^z_1| dz'ϵ_r(z')ω(z')|≪1. For propagator matrices between two adjacent anti-Stokes lines (bounding a region containing at least one Stokes line) a path connecting the two end points can be constructed out of two monotonic paths, where the two paths share one point on the Stokes line. The estimates for propagator matrices on monotonic paths can then be used to make estimates of the matrix elements of the propagator matrix relative to each other. In the following we describe this in more detail. Consider a path of integration Γ along which |exp(i∫_z_(*)^zdz' ω(z'))| increases monotonically from z_0 to z_1. 
Also, let the contour be such that for z∈Γ the function μ(z,z_0)≪1. In the F^2-gauge, a typical term in Eq. (<ref>) can be written as _F(z_1̅)_F(z_2̅).._F(z_n̅) =(1/2i)^nϵ_r(z_1̅)ω(z_1̅)ϵ_r(z_2̅)q(z_2̅)...ϵ_r(z_n̅)ω(z_n̅) ×(1-exp[-2i∫_z_2̅^z_1̅dz_a_1ω(z_a_1)])(1-exp[-2i∫_z_3̅^z_2̅dz_a_2ω(z_a_2)])... ×(1-exp[-2i∫_z_n̅^z_n̅-1̅dz_a_n-1ω(z_a_n-1)]) ×[ -exp[2i∫_z_n̅^z_1̅dz_a_nω(z_a_n)] -exp[2i∫_z_(*)^z_1̅dz_a_nω(z_a_n)]; exp[-2i∫_z_(*)^z_n̅dz_a_nω(z_a_n)] 1 ]. Due to monotonicity in |exp(i∫_z_(*)^zdz' ω(z'))| exponential factors which contribute to the integrals in Eq.(<ref>) may be bounded as 1/2|1-exp[-2i∫_z_i̅+1̅^z_i̅dz' ω(z')]|≤1 where z_i̅ lies in between z_i̅+1̅ and the end point z_1 (as seen from the integration limits in Eq.(<ref>)). This implies that the absolute value of the (n+1)th contribution in Eq.(<ref>) is bounded by ≤μ^n(z_1,z_0)A_ij(z_1,z_0)/(2n!), where A_ij(z_1,z_0) depends on the matrix component considered:[Without going through a general gauged framework, <cit.> uses the vector (β,α) ((a_+,a_-) in their notation) to define the propagator matrix (which they call the F-matrix) instead of the vector V_k=(α,β) used here. Hence the expressions derived in this subsection are the same as those in <cit.> up to a 1↔2 switch in the matrix element indices.] |U_F22(z_1,z_0)-1|≤μ/2+O(μ^2) |U_F21(z_1,z_0)|≤|exp[-2i∫_z_(*)^z_0dz'ω(z')]|(μ/2+O(μ^2)) |U_F12(z_1,z_0)|≤|exp[2i∫_z_(*)^z_1dz'ω(z')]|(μ/2+O(μ^2)) |U_F11(z_1,z_0)-1|≤μ/2+|exp[2i∫_z_0^z_1dz'ω(z')]|(μ^2/4+O(μ^3)) which gives asymptotic expansion constraints for propagator matrices across monotonic paths. For example, if one can give an upper bound on |exp[2i∫_z_0^z_1dz'ω(z')]| then one can conclude μ^2 times this will vanish as fast as μ^2 in the adiabatic region. The above can now be used to constrain the propagator matrices between two adjacent anti-Stokes lines (bounding a region that contains at least one Stokes line). Let, z_0(anti-Stokes), z_1(Stokes), and z_2(anti-Stokes) be points on the anti-Stokes and Stokes lines as in Fig. <ref>. Now, the path connecting these points can be divided into two monotonic paths Γ_01(z_0,z_1) and Γ_12(z_1,z_2). Depending on whether the Stokes line is the “-” or “+” kind, the point z_1 is a minimum or a maximum of the two monotonic paths. Consider first the case of a “-” Stokes line corresponding to a minimum at z_1. The 22- and 12-components of the composition property U_F(z_2,z_1)=U_F(z_2,z_0)U_F(z_0,z_1) and su(1,1) property of M det[U_F(z_2,z_0)]=1 can be used to get the following relations U_F_22(2,0) =U_F_22(2,1)/U_F_22(0,1)-U_F_12(0,1)/U_F_22(0,1)U_F_21(2,0) U_F_11(2,0) =U_F_22(0,1)/U_F_22(2,1)+U_F_12(2,1)/U_F_22(2,1)U_F_21(2,0) U_F_12(2,0) =U_F_12(2,1)/U_F_22(0,1)-U_F_12(0,1)/U_F_22(2,1) -U_F_12(0,1)U_F_12(2,1)/U_F_22(0,1)U_F_22(2,1)U_F_21(2,0) where U_F_ab(i,j)≡ U_F_ab(z_i,z_j). Eqs. (<ref>)-(<ref>) then imply U_F_11(2,0) =1+O(μ)+O(μ)U_F_21(2,0) U_F_22(2,0) =1+O(μ)+O(μ)U_F_21(2,0) U_F_12(2,0) =O(μ)+O(μ^2)U_F_21(2,0). A similar set of equations and estimates can be found for the second case of a maximum at z_1, using the 22- and 21-components of U_F(1,0)=U_F(1,2)U_F(2,0) and the determinant condition. These are U_F_22(2,0) =U_F_22(1,0)/U_F_22(1,2)-U_F_21(1,2)/U_F_22(1,2)U_F_12(2,0) U_F_11(2,0) =U_F_22(1,2)/U_F_22(1,0)+U_F_21(1,0)/U_F_22(1,0)U_F_12(2,0) U_F_21(2,0) =U_F_21(1,0)/U_F_22(1,2)-U_F_21(1,2)/U_F_22(1,0) -U_F_21(1,0)U_F_21(1,2)/U_F_22(1,0)U_F_22(1,2)U_F_12(2,0). Eqs. 
(<ref>)-(<ref>) in this case then imply U_F_11(2,0) =1+O(μ)+O(μ)U_F_12(2,0) U_F_22(2,0) =1+O(μ)+O(μ)U_F_12(2,0) U_F_21(2,0) =O(μ)+O(μ^2)U_F_12(2,0). Here, we see that the perturbative structure in Eqs. (<ref>)-(<ref>) helps us make estimates of the relative magnitudes of the elements of the F-matrix. Note that because the above discussion leaves the off-diagonal elements such as U_F21 and U_F12 unconstrained, without further analysis, Eqs. (<ref>) and (<ref>) do not actually state that the diagonal terms have a leading magnitude of unity and one of the off-diagonal elements is suppressed. On the other hand, if one can give an additional condition that these off-diagonal elements are at most order unity, then the number of U_F matrix elements that need to be determined to leading order in μ become one. This is the main advantage of using the F^2-gauge. The additional constraint on the propagator elements U_F21 and U_F12 (across a - and a + Stokes lines respectively), fixing them to be O(μ^0) to leading order, is obtained from the leading order WKB approximation of solutions to the mode equation. In the annulus and away from the Stokes lines, WKB approximation fixes V(z) to be approximately constant (up to higher orders in μ). This in turn bounds the propagator elements. A stepwise derivation of this result, i.e. μ U_F21→0, and μ U_F12→0 in the limit μ→0 is detailed in appendix <ref>. Now, taking the limit μ→0 of Eq. (<ref>) and Eq. (<ref>), the propagator matrices reduce to U_Fsgn(s)=[ 1 (1+s/2)lim_μ→0U_F12; (1-s/2)lim_μ→0U_F21 1 ] where s=±1 corresponds to the positive and negative Stokes lines respectively. Interestingly, U_F+(U_F-) is an element of an additive unipotent group. To compute the Bogoliubov coefficient β the cross diagonal elements of Eq.(<ref>) need to be determined. We will see in Sec.(<ref>) that in the case of dispersion relations of the form ω^2=Az^n these coefficients are fixed by symmetries of the mode equation. §.§ Stokes constants from symmetries In this section, we will work in the F^2-gauge to compute the leading adiabatic order propagator matrices U(z_1,z_2) across two adjacent anti-Stokes lines in systems where ω^2∼ z^n. We will see that in such systems the mode equation is symmetric under a discrete set of coordinate rotations. This in turn induces a symmerty representation in the propagators. Combining this with the reduced form of the propagator matrices described in the previous section and analyticity properties of solutions to the mode equation will allow us to determine the unknown cross diagonal propagators. Analytically continuing η to the complex plane, the operator governing the mode equation in Eq. (<ref>) becomes 𝒪_z≡∂_z^2+ω^2(z). For dispersion relationships which may be approximated as ω^2(z)=Az^nwhere n∈ℤ near the zero of ω^2 coordinatized here to be at z=0, the mode equation operator 𝒪_z is symmetric under the ℤ_n+2 discrete symmetry z→ R^qz≡exp(2π iq/n+2)z. This can be used to find a ℤ_n+2 symmetry representation of the propagator as discussed in Appendix <ref>. The propagator connecting anti-Stokes boundaries take the form V(z_1)=U_F(z_1,z_0)V(z_0) where V(z_0)=(α(z_0),β(z_0)) and V(z_1)=(α(z_1),β(z_1)). Note that U_F(z_1,z_0) and F_k(z) now depend on n because of their ω dependences. To see how the 𝒪_z symmetry representation in U_F can be used to compute the Bogoliubov coefficient, consider the following identity obtained from single valuedness of the mode functions, i.e. χ(z)=χ(e^2π iz). 
Single valuedness implies that propagating the vector V(z) in a full circle should return the same value i.e V(z)=V(e^2π iz) for any z∈ℂ. In this paper, we will focus on |Az^n|≫ k^2 which is satisfied for |(zA^1/n+2)^n/2|≫(n/2)^n/2+n≫k/A^1/n+2 where the (n/2)^n/(2+n) related condition comes from adiabaticity condition ω'/ω^2≪1. To stay away from the essential singularities at ∞, it is useful to view the z region of interest to be bounded by |z|<|z_max| where |z_max| can be chosen to be arbitrarily large. Hence, we are interested in U_F propagation in an annulus. This annulus is shown in green in Fig. <ref> for n=4 case. Within this annulus, propagating around a closed path implies the single valuedness condition XU_F(n+2)...U_F(2)U_F(1)=𝕀 where X is a transformation associated with a branch cut that we define more precisely below (corresponding to the analytic structure displayed in the right Fig. <ref>). The U_F multiplication for n=4 is illustrated in the right Fig. <ref>. Note that for k/A^1/(n+2)=0 we choose an analytic continuation that is distinct from the analytic continuation with k/A^1/(n+2)>0 as illustrated in Fig. <ref>. With k/A^1/(n+2)>0, we put the branch cuts along the anti-Stokes lines going away from the real axis (Fig. <ref>a). As shown in Fig. <ref>b-d, the branch cuts are deformed such that for the z>max z_ region, the branch cuts can merge and sometimes disappear in the k/A^1/(n+2)=0 limit as the branch points z_m(k)=e^iπ(2m-1)/n(k^2/A)^1/n vanish. Whether or not the merging branch cuts disappear or not depend on n mod 4 since each branch cut for the k/A^1/(n+2)≠0 case (leftmost Fig. <ref>) contributes a transformation V_k(z)=i^-1BV_k(ze^i2π) where B≡([ 0 1; 1 0 ]) as V_k(z) jumps infinitesimally counterclockwise across a branch cut. This means that X in Eq. (<ref>) for k/A^1/(n+2)=0 is V_k=0(ze^2π i)=X^-1V_k=0(z) where X^-1=(-iB)^-n where X is obviously unity for n being multiples of 4 in contrast with Eq. (<ref>). However, the branch cuts that are physical do not correspond to Fig. <ref> c) and d), but instead they correspond to Fig. <ref> b) in the limit k/A^1/(n+2)→0. This will be accounted for with n/2 branch cuts accounted for in the computation of the β_k coefficient later. The point of this explanation is not the final answer associated with the branch cut which is obvious if one considers F_k=0(z) expression in Eq. (<ref>).[That will show that the X representation is the same as the representation of multiplying by -i coming from the denominator of the WKB basis function choice.] The point of this explanation is how the analytic continuations with different branch cut choices (say with n branch cuts and zero branch cuts) are distinct even though they are related. We are taking the limit k/A^1/6→0 with the branch cut choice shown in Fig. <ref> b). In Appendix <ref>, we derive U_F(z̅,z̅_0)=B^-1U_F(z,z_0)B where here the bar is not a complex conjugation but a discrete rotation defined as (z̅,z̅_0)≡(exp(2π i/n+2)z,exp(2π i/n+2)z_0). To understand the U_F representation, it is useful to note that each region divided by the anti-Stokes lines has a Stokes line on which the absolute value of the WKB phase |exp(i∫_z_(*)^zdz' ω(z'))| increases or decreases, labeled by "+" and "-" Stokes lines, respectively (introduced in Eq. (<ref>)). If we label each sector containing a Stokes line as j and write the propagator as U_F,j, then Eq. (<ref>) can be used to write U_F,j+1=B^-1U_F,jB U_F,j+2=B^-2U_F,jB^2=U_F,j. 
Therefore, along a circular contour, the propagator matrices between any two adjacent anti-Stokes lines where the propagation crosses a “+(-)” Stokes line are the same. Now defining the propagation matrix U_F across a “±” Stokes lines as U_F±, we can write U_F±=B^-1U_F∓B. More specifically, if there are n+2 anti-Stokes lines then the number of U_F matrix multiplications required to return to identity is n+2. The explicit form of the U_F± matrices is given by Eqs. (<ref>). Defining lim_μ→0U_F12=S this can be written as U_F+=([ 1 S; 0 1 ]) and an analogous expression for U_F-determined by Eq.(<ref>). In the appendix <ref>, we show that μ^n>0U_F21→0 (and similarly for μ^n>0U_F12) in the annulus. To obtain S, we use single valuedness of the mode function rewritten as a propagation around a closed contour as written in Eq. (<ref>). Now, traversing a closed loop in the counter-clockwise direction, Eq. (<ref>) implies The order of U_F+,U_F-depends on the choice of the branch cut. In the following we have assumed a branch cut choice. The calculation below is independent of this choice. (U_F+U_F-)^n/2+1(-iB)^n=𝕀. Using Eq. (<ref>), we have the condition (U_F+B)^n+2=(-iB)^-n=i^n for n=2m, m∈ℤ_>0. To solve this equation for S (valid for μ=0), we diagonalize Λ^-1(U_F+B)Λ=([ S-√(4+S^2)/2 0; 0 S+√(4+S^2)/2 ]) where . This and Eq.(<ref>) can be used to obtain the following condition: (S±√(4+S^2)/2)^2m+2=(-1)^m which can be solved to obtain S=2icos(π/n+2). Eq. (<ref>) is derived in Eq. (7.11) of <cit.> using mostly the same ideas except that reference constructs and uses the explicit power series general solution information whereas we here have not made any reference to an explicit solution construction. Note that the appearance of n+2 which counts the number of regions bounded by two anti-Stokes lines with at least one Stokes line in between. We emphasize that this n+2 counting identification is manifest in S^n+2 that arises from the n+2 factors of U_F in Eq. (<ref>). Eq. (<ref>) thus gives U_F+ to O(μ^0) and U_F- as U_F-=B^-1U_F+B=([ 1 0; S 1 ]) from Eq. (<ref>). Note that even though we derived this result using the F^2-gauge, this propagator result is also valid in the 0-gauge due to the form of the gauge transformation. One way to view this result is as a mathematical identity when comparing the U_0 and U_F to zeroth order in the non-adiabaticity: μ→0. In this limit, we can write U_0+ as U_0+=lim_R→∞ℙ[𝕀_2×2 exp(n/4∫_2π/n+2^4π/n+2dθ[ 0 e^i4/2+nR^1+n/2e^i(1+n/2)θ; e^-i4/2+nR^1+n/2e^i(1+n/2)θ 0 ])] where we have taken the integral from an anti-Stokes to anti-Stokes line with at least one “+” Stokes line in between. Hence, the mathematical identity is lim_R→∞ℙ[𝕀_2×2 exp(n/4∫_2π/n+2^4π/n+2dθ[ 0 e^i4/2+nR^1+n/2e^i(1+n/2)θ; e^-i4/2+nR^1+n/2e^i(1+n/2)θ 0 ])]=([ 1 2icos(π/n+2); 0 1 ]). This is one of the key nontrivialities that the F^2-gauge affords us compared to the 0 gauge of Eq. (<ref>). As we will see, this will play a role in the computation of particle production in cosmology in a particular kinematic limit. Assuming n∈even, we identify the discrete group ℤ_q that U_F belongs to by imposing the condition that the phase ((U_F+U_F-)^n/2+1)^q=B^nqexp(-inπ q/2)n returns to unity. This means q=min_p∈ℕ(4p/n)∈ℤ. Hence, we can summarize the representation as U_F belonging to ℤ_q and X belonging to ℤ_4. 
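The value of S can be checked directly: with U_F+ and B as above, the closure condition (U_F+B)^n+2 = i^n 𝕀 should hold identically. A few lines of numerical linear algebra confirm this (a consistency check only, not an independent derivation):

import numpy as np

# Check that S = 2i cos(pi/(n+2)) makes (U_{F+} B)^(n+2) = i^n * Identity,
# which is the single-valuedness condition used above.
B = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
for n in (2, 4, 6, 8):
    S = 2j * np.cos(np.pi / (n + 2))
    UFp = np.array([[1.0, S], [0.0, 1.0]], dtype=complex)
    M = np.linalg.matrix_power(UFp @ B, n + 2)
    print(n, np.allclose(M, (1j) ** n * np.eye(2)))     # -> True for each even n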
In summary, we have computed the propagator matrix U in the F^2-gauge for general z^n using the methods of <cit.> and used our gauge formalism to attribute it also to the 0-gauge to leading order in nonadiabaticity in the ω^2∼ z^n model. The computation the reduced form of the propagators Eq.(<ref>) derived in the F^2-gauge and the discrete symmetries explained in this section and Appendix <ref>. One way to view the nontriviality of having a gauge picture which allows us to compare the propagator expressions in different gauges is that it allows us to derive a mathematical identity of Eq. (<ref>). §.§ Computing particle production with k→0 We will use Eqs. (<ref>), (<ref>), and (<ref>) to compute the Bogoliubov coefficients for a special class of kinematic context introduced below Eq. (<ref>). Although the V_k(η) in Eq. (<ref>) can be approximately interpreted directly as Bogoliubov coefficients in the adiabatic region, their map to particle production acquires additional permutation structure for cases when n/2 in Eq. (<ref>) is odd. This is because on the real axis, the positive frequency modes are defined with respect to ∼exp(-i∫^ηdη' ω(η')) where ω(η)>0 (positivity of energy) whereas the analytic continuation of the frequency by itself does not contain the positivity constraint. For example, when we write ∫_0^ηdη' ω(η')=∫_0^ηdη' √(A)(η')^n/2 for odd n/2 (which is implicit in the analytic continuation), we are treating ω to be negative on the negative real axis, thereby effectively flipping the definition of negative and positive frequencies. Hence, the map to the physical particle production requires an additional permutation that can be written as (α(η),β(η))=P(η)V(η) P(η)≡Θ(-η)B^n/2+Θ(η)𝕀 where B is the matrix defined in Eq. (<ref>). This gives β(z_+∞)=-i[π/n+2] In deriving this result, we used the branch cut information described in Fig. <ref> using an effective branch matrix X_1≡(-iB)^-n/2. Eq. (<ref>) shows that the particle production at the special kinematic point depends only on (π/(n+2)) and not on √(A). The factor n+2 counts the number of Stokes sectors on the complex plane for Eq. (<ref>) where we define a Stokes sector to be the region in the annulus bounded by two anti-Stokes lines with at least one Stokes line in the region. For example, one can see that there are six Stokes sectors in Fig. <ref> which illustrates the case of n=4. We also see in Fig. <ref> that k≠0 case sometimes has two Stokes lines in a single Stokes sector. That is why the it more convenient to define the topology as counting the Stokes sectors which is invariant under the k deformations away from zero, unlike the number of Stokes lines in the annulus. Note that since the number of Stokes sectors depend on the basis choice of Eq. (<ref>) (weakly since the k deformations do not change this quantity) which is left invariant under gauge transformations by construction, this definition of number of Stokes sectors is also gauge invariant. Furthermore, since these sectors partition the annulus into regions where each region has a common characteristic (of having an approximately fixed asymptotic expansion), one can view each sector as a connected region with respect to the asymptotic expansions, and thus a natural notion of topology characterized by Stokes sectors exist.[Although related, this is distinct from the notion of change in the topology of steepest descent paths discussed in <cit.>.] Moreover, we have already emphasized below Eq. 
(<ref>) that n+2 comes from the number of U_F matrices in the single valuedness condition. Hence, the count is not directly about the number of branch points (or equivalently the zeroes of ω^2) but directly about the number of Stokes sectors. In fact, the branch cut factor X_1 of Eq. (<ref>) does not change the results for |β_k|. To intuitively understand the notion of connectedness provided by the Stokes sector (and Stokes phenomena in general), consider what happens to the smooth function χ_k(z)=F_k(z)· V_k(z) as any one given Stokes line is crossed. There is a jump in the coefficient component in V_k(z) of the exponentially suppressed component of the mode function in WKB basis F_k(z). However, that component does not contribute to the asymptotic expansion in any given Stokes sector (since as an asymptotic expansion, e^-1/|δ| is exactly zero as δ→0) but manifests itself later (as one continues towards the next anti-Stokes boundary) as the boundary anti-Stokes lines are crossed. In that sense, the asymptotic expansion has a connected character in any given Stokes sector, and the number of Stokes sectors correspond to a topological characterization of the asymptotic expansion. Furthermore, each sector is insensitive to the continuous deformation of the A parameter. Finally, as one can see in the left Fig. <ref>, the number of Stokes sectors is insensitive to the changes in k. One can also see from the last Fig. <ref> that there is a pinched singularity of the original integration of Eq. (<ref>) along the real axis for V_k(η). Unfortunately, that only indicates some type of derivative singularity because of the pinching comes from branch points. In reality, at least in the F^2-gauge and 0-gauge, M is singular at the origin with k=0 while it is not singular for any k>0. This means our computation of U_F effectively evaluated the Cauchy principle value of the integral but indirectly through the analytic continuation and symmetries. The branch point information (from which the Stokes lines emanate) is still felt by the original contour integral from the pinched singularity. That is why intuitively the k/A^1/(n+2)→0 limit is the topological limit (as the influence on the integral is maximal). § COMPARISON OF THE F-MATRIX METHOD WITH THE EXACT SOLUTION Since the differential equation of the form χ”(z)+z^nχ(z)=0 is exactly solvable in terms of Bessel functions, we can compare the F-matrix results with the exact solutions. What this means is that the F-matrix methods are not necessary to compute the k→0 limit of β_k variables in this paper, but it does elucidate the topological structure hidden in the Bessel function solutions. For specificity and intuitive clarity, we focus on n=4. Exact solutions to the mode Eq. (<ref>) take the form χ(z)=C_1/6z^1/2J_1/6(z^3/3)+C_-1/6z^1/2J_-1/6(z^3/3) where C_±1/6 is determined by the boundary conditions. The Bogoliubov coefficient β may be computed using Eq.(<ref>) and the exact solutions χ_1(z) and χ_2(z) satisfying the asymptotic boundary conditions on the real axis χ_1(z)∼ f_-(z)=1/√(2z^2)exp(-iz^3/3) asz→ z_-∞ χ_2(z)∼ f_-(z)=1/√(2z^2)exp(-iz^3/3) asz→ z_+∞ respectively which matches Eqs. (<ref>) up to a phase. Here z_-∞<0 and z_+∞>0 correspond to times when the non-adiabaticity becomes negligible. To obtain χ_1(z) and χ_2(z), we match the lowest order terms of the asymptotic series expansion of Eq.(<ref>) to Eq.(<ref>) and Eq.(<ref>) respectively. 
Being dependent on the J_±1/6(κ) Bessel functions, the lowest order terms of the asymptotic series of the exact solutions are determined by (3κ)^1/6J_ν(κ) ∼ b_ν,+(24/π^3κ^2)^1/6exp(i(κ-νπ/2-π/4)) +b_ν,-(24/π^3κ^2)^1/6exp(-i(κ-νπ/2-π/4)) where κ=z^3/3 and ν=±1/6. The coefficients b_ν,± in different sectors of the complex κ plane are b_ν,+=1/2exp(2p(ν+1/2)π i) b_ν,-=1/2exp(2p(ν+1/2)π i) for (2p-1)π<(κ)<(2p+1)π b_ν,+=1/2exp(2(p+1)(ν+1/2)π i) b_ν,-=1/2exp(2p(ν+1/2)π i) for 2pπ<(κ)<(2p+2)π where p∈ℤ. For χ_1(z), the relevant approximation corresponds to the sector to which z_-∞ belongs. The fact that (z_-∞)=π implies (κ_-∞)=3π∈(2π,4π). Thus, the correct asymptotic series approximation is obtained from Eq.(<ref>) with p=1. Matching with the boundary condition gives χ_1(z)=C_1,1/6z^1/2J_1/6(z^3/3)+C_1,-1/6z^1/2J_-1/6(z^3/3) where C_1,1/6=D_1exp(-7π i(1/12+1/4)), -D_1exp(-7π i(-1/12+1/4)) and D_1=(exp(-π i(1/6+1/2))-exp(-π i(-1/6+1/2))/2)^-1(12/π)^-1/2. Similarly for χ_2(z), the sector to which z_+∞ belongs is (κ_+∞)∈(-π,π) because (z_+∞)=0=(κ_+∞). Matching the asymptotic series obtained from Eq.(<ref>) with p=0 to Eq.(<ref>) gives χ_2(z)=C_2,1/6z^1/2J_1/6(z^3/3)+C_2,-1/6z^1/2J_-1/6(z^3/3) where C_2,1/6=D_2exp(iπ(1/12+1/4)), -D_2exp(-iπ(1/12-1/4)) and D_2=(12/π)^-1/2(exp(iπ(1/6+1/2))-exp(-iπ(1/6-1/2))/2)^-1. Therefore, Eq.(<ref>) with the use of the Wronskian identity W≡ J_1/6(Az^ξ)∂_zJ_-1/6(Az^ξ)=-ξ/π z where A is a constant gives the β coefficient as β_k̅=0=3i(-C_2,1/6C_1,-1/6+C_2,-1/6C_1,1/6)/π=i√(3). Because of the physical branch cut resolution discussed in Subsec. <ref>, we need to map this to β_k̅→0 using Eq. (<ref>). The result is β_k̅→0=i^n/2β_k̅=0 with n=4. Here we obtain the topological result of Eq.(<ref>) in the k̅→0 limit in terms of the Wronskian identities satisfied by the Bessel functions. In other words, in the k̅→0 limit, the Wronskian of the Bessel function solutions to the mode equation count the number of Stokes sectors defined below Eq. (<ref>). One way to understand this link between the Bessel solution and the topology is that the Wronskian of the the Bessel function satisfies a differential equation dW/dz=-1/zW which is invariant under scaling z. In other words, conformal invariance naturally erases geometrical information, which happens to leave the topological information left in Eq. (<ref>). § ILLUSTRATIVE MODEL In this section we present couple of physical models for which the topological limit of the particle production computation is manifestly relevant. The first model corresponds to the mass of the dark matter χ conformally coupled to gravity being controlled by a dimension 6 coupling to a spectator field ϕ whose time evolution causes the frequency squared ω^2 of χ to go through a zero (approximately) analytically. This first model is easily embeddable in the context of inflationary cosmology during the reheating phase. Our second model is presented as a purer mathematical match of the cosmology and the topological production scenario. It involves three scalar fields and a background FLRW spacetime with constant spatial curvature. The second model will have a large fraction of the particle production coming from the topological contribution unlike the first model. §.§ Tanh model In this subsection, we will consider a scenario in which the dark matter field χ obtains its mass through a dimension 6 coupling to a spectator scalar ϕ which is rolling down a tanh potential in a post-quasi-dS phase of inflation. 
When the mass of the χ field goes through a zero, there will be an approximately topological contribution to the χ particle production because of the nonadiabaticity of the dispersion relations. The boundary conditions and parameters are chosen to separate other sources of nonadiabaticities such that Eq. (<ref>) approximately applies. Given that the vacua change is responsible for the particle production here, and given that the production amplitude has a topological character, there is a semblance to the usual anomalous current equation ∂_μj_A^μ=-g^2/8π^2TrFF̃ where j_A^μ is the current anomalous with respect to the gauge group whose field strength is F. Consider a spectator scalar field ϕ governed by the following potential V(ϕ)=ρ_0[1-tanh(ϕ/M)] and the χ field coupling ℒ⊃g/2Λ^2ϕ^4χ^2 where χ particles will be produced through the classical field motion of ϕ. The classical equation of motion for ϕ is ϕ̈+3Hϕ̇-ρ_0/Msech^2(ϕ/M)=0 which depends on 3 scales H, M, and ρ_0/M. We can use the freedom to scale time and the field to define a single dynamical parameter ρ_0/M=10^-6MH_I^2 where we will explain later that 10^-6 ultimately comes from ensuring that there is at least an order of magnitude separation between when k/(a_eH_I)∼ O(1) becomes nonadiabatic and k/(a_eH_I)≪ O(1) becomes nonadiabatic owing to the fact that ϕ∝η^6 during the tanh(ϕ/M)∼ϕ/M phase. For example, if we had chosen 10^-3 here, the corresponding number (10^-3)^1/6∼ O(1) will not lead to a hierarchy. Note that Eq. (<ref>) automatically ensures that ϕ is a spectator field since ρ_0/3M_P^2H_I^2=10^-6M^2/3M_P^2≪1 (where M<M_P owing to the EFT validity condition). The field value at the beginning and the end of inflation are ϕ_p<-0.50016M ϕ_e=-0.500M respectively. The first condition ensures at least 60 e-folds of inflation. The expansion rate is parameterized as H(t) = H_I t<t_e H_I/1+3H_I/2(t-t_e) t≥ t_e where t_e is the time at the end of inflation. During inflation, the field trajectory of Δϕ obeys slow roll 3H_Iϕ̇-ρ_0/Msech^2(ϕ/M)=0 whose solution is ϕ/2+Msinh(2ϕ/M)/4-(ϕ_p/2+Msinh(2ϕ_p/M)/4)=ρ_0/3MH_I(t-t_p). If inflation ends in the linear section of the potential, as in Eq.(<ref>) then the field solution for t>t_e to Eq. (<ref>) is ϕ/M=1/9(2ρ_0/3M^2H_I^2[1+3/2H_I(t-t_e)]^2-3c_1/2/1+3/2H_I(t-t_e))+c_2 where c_1≡8/9ρ_0/M^2H_I^2(2-cosh(2ϕ_e/M)/1+cosh(2ϕ_e/M)) c_2≡ϕ_e/M-2/9ρ_0/M^2H_I^2(cosh(2ϕ_e/M)-1/1+cosh(2ϕ_e/M)). Note that c_2 should be interpreted as the field displacement needed to reach the nonadiabatic point where ϕ=0. The conformal time is related to the comoving observer's proper time as 1+3/2(H_It-H_It_e)=(H_Ia_eη/2+1)^3 where η=0 corresponds to the end of inflation. The effective mode frequency of Eq. (<ref>) in conformal time is ω^2(η)=k^2+g/Λ^2a^2(η)ϕ^4(η). For long wavelengths characterized by k≪g/Λ^2a^2(η)ϕ^4(η), for generic values of η, nonadiabaticity occurs near ϕ=0 in this model corresponding to the time η=η_0. This nonadiabaticity is taken to be well separated from that at the end of inflation by choosing the parameter ρ̅≡ρ_0/M^2H_I^2≪1, and maximizing |ϕ_e| while still lying within the linear approximation range of the potential V(ϕ). The nonadiabaticity for this frequency can be defined through Eq. (<ref>). 
Parameterizing g/Λ^2a^2(η)ϕ^4(η)=A(η-η_0)^4 near η=η_0 where A now contains all the scales we can write ω'/ω^2=1/ω1/k^2/Aa^2(η)(η-η_0)^n-1+(η-η_0)(n/2+a'/a(η-η_0)) with n=4 and the corresponding time region for large nonadiabaticity might be defined to have the width Δη satisfying [ω'/ω^2]_η_max+Δη=0.1×[ω'/ω^2]_η_max In the k→0, this expression would formally give Δη→0. One of the main points of this paper is the topological nature of the Bogoliubov coefficient in this parametric limit. Such situations generically cannot be characterized by a scale a(η_0)Δη unlike for k≳ O(aH) for which this width does capture the qualitative aspects of the nonadiabatic physics.[The cases with modes with k≳ aH will not be well approximated by the topological production computation. This constraint will play role in our discussion in Sec. <ref>.] Fig. <ref> explicitly illustrates the qualitative behavior of the frequency time dependence we are trying to model with the current physical scenario. Note that |ω'/ω^2| has a double peak structure surrounding the zero-crossing time η_0. For the k→0 case, let us define a time region during which the system be considered nonadiabatic differently. One criteria that can be chosen is to define η_c where (ω'/ω^2)_η_c=±1 which has a solution a(η_0)η_c=a(η_0)η_0+[2Λ/√(g)[1/a(η_0)∂_ηϕ(η_0)]^2]^1/3. Hence, one of the remarkable simplification that occurs in the Bogoliubov coefficient computations at the complex frequency threshold is that the time scale associated with the nonadiabaticity in Eq. (<ref>) disappears. It is this topological character that the F-matrix formalism allows us to make precise. The topological index will be associated with the counting the number of Stokes regions in this limit. §.§.§ Bogoliubov coefficient from U_F propagators in F^2-gauge The non-adiabaticity δ=ω'/(4ω^2) of the dispersion relation ω^2(η)=k^2+g/Λ^2a^2(η)ϕ^4(η) peaks in the neighborhood of the critical point η_0 in the long wavelength limit. The width of this non-adiabatic region, defined as the interval outside which |δ(η)|≪|δ|_max, can be seen to be directly proportional to k^ν where ν is a positive power. Therefore, for small values of k, we may approximate the dispersion relation as ω^2(η)≈ k^2+g/Λ^2a^2(η_0)(ϕ'(η_0))^4(η-η_0)^4 if the non-adiabatic width lies within the width of the linear approximation. More explicitly, the time region Δη_l of the linear approximation satisfies 1/4×2f'(η_0)/f”(η_0)≈1/7(27|c_2|/2ρ̅)^1/6≫ a_eH_IΔη_l where f(η)≡ a^1/2(η)ϕ(η)/M and the factor of 1/4 is related to the power of f(η) in the dispersion relation. In deriving Eq.(<ref>), we have used Eq.(<ref>) in conformal time η and 2ρ_0/3M^2H_I^2[H_Ia_eη/2+1]^6≫3c_1/2/(H_Ia_eη/2+1)^3 . The above is justified for cases where the ϕ energy is sufficiently suppressed (see Eq. (<ref>)), ϕ_e≈-0.5M, c_1∼ρ̅, and H_Ia_eη/2≫1. In terms of ϕ(η)-ϕ(η_0) around η_0, Eq.(<ref>) implies ϕ(η)-ϕ(η_0)/M≪3/7|c_2|≈3ϕ_e/7M. Within the approximation Eq.(<ref>), the width of the non-adiabatic region for a particular value of k may be estimated as Δη_w∼3(k^2Λ^2/ga^2(η_0)(ϕ'(η_0))^4)^1/4 by considering when ω^2 that controls the denominator of δ(η) is dominated by η-η_0.[The factor of 3 here is an ansatz that works well near the fiducial parametric point. In other words, with the fiducial parametric choices, the non-adiabaticity function has become δ≲ O(0.1) at a time η=η_0+Δη_w.] 
Requiring Δη_w≪Δη_l, we find k/a_eH_I≲4×10^-4(gM^4/Λ^2H_I^2/10^-2)^1/2(|ϕ_e|/M/0.5)^7/3(𝒜/10^-1)^2 where 𝒜 is the desired accuracy and ϕ_e measures the field distance from the end of inflation to zero of ω^2. The fiducial value of 10^-2 for gM^4/(Λ^2H_I^2) can be interpreted as the square of time ratio measuring the isolation of the nonadiabatic time region compared to the linear ϕ approximation time region. This is a model-dependent limitation to the topological contribution. Within this range of k values, Eq.(<ref>) estimates the Bogoliubov coefficient as |β_k|^2≈3. Numerical computation of |β_k|^2 for k=10^-6a_eH_I, shown in Fig. <ref> shows agreement with this estimate. Although the modes satisfying Eq. (<ref>) are superhorizon at the end of inflation, they are subhorizon at a time 8 efolds before the end of inflation. This establishes that these topological contributions can be physical. There is a constraint on the M parameter from the resolution of the classical field ϕ being Δϕ∼ H/(2π). Since Eqs. (<ref>) and (<ref>) require a resolution of Δϕ/M∼20ρ_0/M^2H_I^2(N/60) we require H_I/M≪ O(100)ρ_0/M^2H_I^2(N/60)∼10^-4(N/60). Although this condition can be violated without drastically upsetting the phenomenology, we impose this here for illustrative convenience. Next, let's compute the total cosmological dark matter density produced from the tanh model. We will see that the topological production contribution makes up a negligible part of the total production density. Before we discuss the detailed computation, let's see what the main nontriviality of the analysis will be. We generically expect that the number density contribution to the dark matter density will take the form of an integral that is cutoff at Λ_2: ∫_0^Λ_2dk k^2/(2π)^3|β_k|^2∼ fΛ_2^3 where f represents the strength of non-adiabaticity. For the non-topological contribution, the cutoff Λ_2 is expected to be distinguishable from k=0, unlike the k values for which ω^2=k^2+m^2∼ m^2 (i.e. k values for which the topological approximation will be valid). We will compute Λ_2 numerically and find that it is much larger than the bound given by Eq. (<ref>). The cosmological dark matter energy density ρ_χ is obtained through the k integral ρ_χ=1/2π^21/a^3∫_0^∞dkk^2ω/a|β_k|^2 where the effective cutoff in Eq. (<ref>) occurs at k/a_e∼ H_I. The observable relic abundance depends on the details of the ϕ evolution and cosmology after the particle production. As worked out in Appendix <ref>, the final relic abundance is Ω_χh^2=0.27(T_rh/10^7GeV)(H_I/10^2GeV)(M/10^9GeV)(1+0.21log(ρ_0/(M^2H_I^2)/10^-6)) and the mass of this dark matter is m=1.6×10^11(M/10^9GeV) GeV where the mass M can be easily increased from this fiducial value without upsetting the assumptions of the computation. Note that the chosen fiducial parametric values of H_I=10^2 GeV and M=10^9 GeV, the cutoff scale is of the order Λ/√(g)∼10^17GeV as explained more in Appendix <ref>. To understand this result qualitatively (and from comparison with numerical exploration), the dominant contribution to ρ_χ in Eq. (<ref>) comes from the upper part of the integration with f∼ O(1) in the notation of Eq. (<ref>) and Λ_2 that is determined by approximately the exponential cutoff controlled by exp(-C∫^Δη̃dη k)∼exp(-CkΔη̃) where C is presumably an order unity coefficient and Δη̃ corresponds to the k-dependent time period when the given k mode is most nonadiabatic. For the fiducial parameters shown in Eq. 
(<ref>), we expect Δη̃ ∼(k/a_e)^{1/2}(gM^4/Λ^2)^{-1/4}(1/(c_2a_e))(1/H_I). Hence, the non-topological contribution to the |β_k|^2 integral is expected to be ∫d^3k/(2π)^3|β_k|^2∼ O(c_2^2a_e^3H_I^2/2π^2(gM^4/Λ^2)^1/2). Let's now compare this explicitly with the topological contribution which we parameterize as ∫d^3k/(2π)^3|β_k|^2∼ O(1)fΛ_3^3 where Λ_3 corresponds to the cutoff of the integral associated with the topological contribution. There are two length scales which determine this cut-off: i) the length scale Λ_3^lin within which the linear approximation is good, determined by Eq.(<ref>), and ii) the length scale Λ_3^corr within which the small k̅ corrections to β_k̅ are negligible, determined by Fig. <ref>. These length scales are given by Λ_3^lin/a_eH_I≲4×10^-4(gM^4/Λ^2H_I^2/10^-2)^1/2(|ϕ_e|/M/0.5)^7/3(𝒜/10^-1)^2 and given that the effective A parameter in ω^2≈ k^2+Aη^4 is ga^2(η_0)(ϕ'(η_0))^4/Λ^2, we can evaluate Λ_3^corr/a_eH_I≲0.6× O(10^-1)(𝒜/10^-1)(gM^4/H_I^2Λ^2/10^-2)^1/6 using steps similar to those leading to Eq. (<ref>). From the above we see that Λ_3^corr≫Λ_3^lin, and therefore the topological scale determined by the minimum of the above two is set by Λ_3^lin: Λ_3=min(Λ_3^corr,Λ_3^lin)=Λ_3^lin∼4×10^-5(gM^4/Λ^2H_I^2/10^-2)^1/2(|ϕ_e|/M/0.5)^7/3(𝒜/10^-1)^2. Hence, the topological contribution for this model is estimated to be less than an O(10^-10) fraction of the total particles produced. On the other hand, it is interesting that Λ_3^corr is larger, which indicates that in a different scenario in which the linear approximation can be extended, the topological contribution can be more significant. We present an extreme version of this in the next model. §.§ Curvature model The tanh model discussed above has the topological production as an approximation for k satisfying Eq. (<ref>), coming partially from the equation governing the time range for which tanh(ϕ/M)∼ϕ/M, during which ϕ-ϕ_e∼(t-t_e)^2 with ϕ_e<0. One can instead also set up a nonlinear potential and cosmology for which ϕ=c_1(t-t_0)^2 nearly exactly. Furthermore, in this scenario the exact topological limit will nearly correspond to an arbitrarily large k that is matched to the spatial curvature scale of the cosmology. Consider the Lagrangian of 3 real scalar fields ϕ,χ, and ψ minimally coupled to gravity: ℒ=1/2(∂ϕ)^2-V(ϕ)+1/2(∂χ)^2-1/2f(ϕ)χ^2+1/2(∂ψ)^2-2h^2M_P^2e^-√(2)ψ/M_P V(ϕ)=-8c_1ϕ f(ϕ)=c_1(E/a_0^{2+n_2})/[ϕ(K/a_0)^{2+n_2}](ln[K√(ϕ/c_1)/a_0])^{n_2} where both χ and ϕ are spectators in a universe driven by ψ. The free parameters in this model are {c_1,K/a_0,h,E/a_0^{2+n_2},n_2} where, when ϕ=ϕ_0≡ c_1(a_0/K)^2, the mode frequency ω^2=k^2+a^2(η)f(ϕ)-a''(η)/a(η) vanishes for the one particular k-mode k=K. This is the main attractive feature of this model since one can produce physical non-vanishing k-mode particles with arbitrarily large momentum matched to the parameter K, which we will see below is the spatial curvature of this cosmology. Note that c_1 in Eq. (<ref>) has units of mass cubed and [E]=[k]^{2+n_2} such that the units of f(ϕ) are determined by c_1/ϕ. We will later see that h is a parameter that determines the origin of ψ. The background equations are ϕ̈+3Hϕ̇-8c_1+1/2f'(ϕ)χ^2=0, χ̈+3Hχ̇+gf(ϕ)χ=0, ψ̈+3Hψ̇-2√(2)h^2M_Pe^-√(2)ψ/M_P=0, and 3M_P^2(ȧ(t)/a(t))^2≈1/2ψ̇^2+2h^2M_P^2e^-√(2)ψ/M_P, to which there exists an explicit solution χ=0, ϕ=c_1(t-t_1)^2, ψ=√(2)M_Pln[h(t-t_1)], a(t)=K(t-t_1). The approximation made in Eq. (<ref>) is that the χ and ϕ fields do not contribute significantly to the background energy density.
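As a consistency check, the explicit solution above and the reduction of the mode frequency to k^2-K^2+Eη^{n_2} (spelled out in the next paragraph, where a=a_0e^{Kη}) can be spot-checked with a short sympy script. In the sketch below the flattened expression for f(ϕ) is read as f=[c_1E/a_0^{2+n_2}]/[ϕ(K/a_0)^{2+n_2}](ln[K√(ϕ/c_1)/a_0])^{n_2}; this reading and the numerical test values are our own choices, not quotations from the paper:

```python
# Spot-check of the quoted background solution (chi = 0) and of omega^2 -> k^2 - K^2 + E eta^n2.
import sympy as sp

tau, c1, h, MP, K = sp.symbols("tau c_1 h M_P K", positive=True)     # tau = t - t_1 > 0
phi, psi, a = c1 * tau**2, sp.sqrt(2) * MP * sp.log(h * tau), K * tau
H = sp.diff(a, tau) / a

residuals = {
    "phi EOM (chi = 0)": sp.diff(phi, tau, 2) + 3 * H * sp.diff(phi, tau) - 8 * c1,
    "psi EOM": sp.diff(psi, tau, 2) + 3 * H * sp.diff(psi, tau)
               - 2 * sp.sqrt(2) * h**2 * MP * sp.exp(-sp.sqrt(2) * psi / MP),
    "Friedmann": 3 * MP**2 * H**2 - sp.Rational(1, 2) * sp.diff(psi, tau)**2
                 - 2 * h**2 * MP**2 * sp.exp(-sp.sqrt(2) * psi / MP),
}

# mode frequency omega^2 = k^2 + a^2 f(phi) - a''/a in conformal time, with a = a0 exp(K eta)
eta, k, E, a0, n2 = sp.symbols("eta k E a_0 n_2", positive=True)
a_c = a0 * sp.exp(K * eta)
phi_c = c1 * (a_c / K)**2                              # from a(t) = K (t - t_1)
f = (c1 * E / a0**(2 + n2)) / (phi_c * (K / a0)**(2 + n2)) \
    * sp.log(K * sp.sqrt(phi_c / c1) / a0)**n2
residuals["omega^2 - (k^2 - K^2 + E eta^n2)"] = (
    k**2 + a_c**2 * f - sp.diff(a_c, eta, 2) / a_c - (k**2 - K**2 + E * eta**n2))

vals = {tau: 1.7, c1: 0.4, h: 0.9, MP: 1.0, K: 0.7, eta: 1.1, k: 0.9, E: 1.3, a0: 2.0, n2: 6}
for name, r in residuals.items():
    print(name, sp.N(r.subs(vals)))                    # each residual should print ~0
```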
In conformal coordinates this solution gives Kη+C_2=ln(K(t-t_1)), i.e., a=a_0e^{Kη}, corresponding to an open universe dominated by the spatial curvature: H^2=(K/a)^2. Hence, if one were to embed this model into a realistic cosmology some work needs to be done, but we will not pursue that here since our point in this section is to illustrate how the topological particle production is not limited to the k=0 approximation. This spatial curvature will cancel the k^2 in Eq. (<ref>) to yield an analytic dispersion relationship ω^2=Eη^{n_2} going through a zero at time η=0.[It is clear from Eq. (<ref>) that the spacetime is regular at η=0 in this coordinate system.] In light of our interest in topological particle production, we will assume n_2 is an even positive integer. The nonadiabaticity corresponding to this dispersion relationship is ω'/ω^2=(n_2/(2√(E)))η^{-1-n_2/2}, indicating that a smaller E and larger n_2 generates a steeper approach to the non-adiabatic singularity. If we define the width of this region to be where the absolute value of this nonadiabaticity reaches 1/2, we find η_{1/2}=(n_2^2/E)^{1/(2+n_2)} which shows that the width decreases only modestly with larger E and is largely insensitive to n_2. Let's estimate the topological contribution to the total dark matter production. The dispersion relation of the χ_k modes for general k values is ω^2=k^2-K^2+Eη^{n_2}. The corresponding Bogoliubov coefficient β_k takes the topological value |β_K|=[π/n_2+2] at k=K. For k>K, the dispersion relation is positive definite and β_k may be estimated as |β_k|∼|exp(O(1)i∫_0^η_*dη ω)|∼exp(-O(1)((k^2-K^2)/E^{2/(2+n_2)})^{(n_2+2)/2n_2}) whereas for k<K, the dispersion relation is tachyonic for a finite interval of η, and we expect exponentially enhanced particle production β_k∼exp(∫_-η_*^η_*dη |ω|)∼exp(2((K^2-k^2)/E^{2/(2+n_2)})^{(n_2+2)/2n_2}). Now including the phase space contribution, the integrand in Eq.(<ref>) defining n_χ, i.e., k^2|β_k|^2, has a peak around k_p given by (2(2+n_2)/n_2)^{2n_2/(n_2-2)} (k_p/E^{1/(2+n_2)})^{4n_2/(n_2-2)}+(k_p/E^{1/(2+n_2)})^2=(K/E^{1/(2+n_2)})^2. Hence, for n_2>2 and k_p/E^{1/(2+n_2)}<1, this implies k_p≈ K, i.e. the integrand peaks at its topological value K^2|β_K|^2. The width of the integrand around this peak may be estimated to be Δ k_w∼ O(1)E^{1/(2+n_2)}. We then expect significant contributions to the total number density n_χ from the topological quantity β_K. Testing this numerically for n_2=6, we see that K̅=K/E^{1/(2+n_2)}=0.6 corresponds to a peak k̅_p=k_p/E^{1/(2+n_2)}≈K̅. The k width within which the topological contribution becomes significant can then be estimated from Eq. (<ref>) as Δk̅_topo∼ O(0.1). With this, the fraction of the topological contribution to the total particle production (the latter found numerically) is n_χ,topo/n_χ≈0.2, which indicates that cosmological models where the zero of the dispersion relationship can occur at large k can have a large topological contribution. Even though this model has not been fully embedded into a realistic cosmological setting, it is encouraging that the topological production can give a physically significant contribution. We defer the embedding of this type of model into a realistic cosmology to a future work. § SUMMARY Previous literature using Stokes phenomena to compute particle production in the cosmological context focused mostly on the large k/(ma(t_c)) limit at the time t_c of particle production.
In the present work, we considered the k/(ma(t_c))→0 region using a Stokes-phenomena-inspired method of computation and showed that one can relate the topology in the form of Stokes sectors of the analytic continuation of the (α_k(η),β_k(η)) to the non-perturbatively large |β(z_+∞)| as given in Eq. (<ref>). Since the WKB asymptotic expansion in each of the Stokes sectors can be viewed as a choice of vacuum, this is analogous to the Chern-Simons number separating different gauge vacua. From the perspective of a topological quantity being rigid in the presence of continuous deformations, the n+2 count of the number of Stokes sectors (defined below Eq. (<ref>)) is insensitive to continuous variations of the strength of the time dependence characterized by the parameters A and C in ω^2=C+Az^n. The key mathematical ingredients that determine the topology in the C→0 limit are the single-valuedness and the nature of Stokes phenomena (reviewed above Eq. (<ref>)). One of the key technical tools used to derive the concrete results was a mathematical technique of <cit.> which constrains the form of the propagator matrix (reviewed in subsection <ref>). To use that technique and relate it to the standard complexification basis <cit.>, we developed a gauge formulation of the equations governing (α_k(η),β_k(η)). From a purely mathematical perspective, our result can be viewed as a novel identity, Eq. (<ref>). An intuitive understanding of why a topological limit exists in the C→0 limit is the special conformal property of the Bessel function Wronskian Eq. (<ref>). We presented two cosmological scenarios illustrating the topological contribution to β_k. One scenario involves the β_k amplitude describing a dark matter χ number spectrum where the time dependence of the dark matter mass is controlled by a scalar field ϕ rolling in a tanh potential during the inflationary coherent oscillations period. During a time interval surrounding η_0 when ϕ is in the linear part of the tanh potential, the dispersion relationship of χ takes the form of ω^2≈ k^2+A(η-η_0)^4. The 1-loop correction to the potential generates a global minimum of the potential at a finite ϕ value, determining the final post-inflationary heavy mass of the χ particle in this scenario. In this scenario, the topological contribution is naturally suppressed since the phase space is proportional to k^3 whereas the topological contribution is in the kinematic region k/(ma(η_0))→0. In a second illustrative scenario, we constructed an FLRW solution with a nonzero constant spatial curvature which enters the dispersion relationship of χ. In such cases, the kinematic point corresponding to the topological contribution can be at a large value of the momentum k. We have shown that the fractional contribution to the particle production in such scenarios can be O(0.2). There are many future directions to consider in extending the present work. One can extend the discrete symmetry representation to nonvanishing k values in this class of models, leading to constraints on the Bogoliubov coefficients. Given that the S-matrices in the background field driven vacuum transitions can be expressed in terms of (α_k,β_k), and given that the background fields can be resolved in terms of quantum fields, it would be interesting to identify the full quantum S-matrix interpretation of the Stokes sector topological charges. An interesting direction would be to embed the finite constant curvature FLRW scenario into a phenomenologically viable cosmological scenario.
Yet another interesting direction to explore is to understand what constraints can be imposed on the (α_k,β_k) in the intermediate k ranges based on the fact that the functional dependence on k is constrained in the k→0 limit by our present work and in the k→∞ limit by the exactly solvable model of <cit.>. § 1-LOOP CORRECTIONS AND DARK MATTER RELIC ABUNDANCE The 1-loop effective potential seen by ϕ around the background fields (ϕ_cl,χ_cl=0) is V_eff(ϕ,χ) =ρ_0(1-tanh(ϕ/M))+g^2/4(4π)^2(ln(g/Λ̅^2Λ^2ϕ^4)-3/2)ϕ^8/Λ^4 +1/(4π)^2ρ_0/M^4(ln(2ρ_0/Λ̅^2M^2sech^2(ϕ/M)tanh(ϕ/M))-3/2)ρ_0(sech^2(ϕ/M)tanh(ϕ/M))^2 where Λ̅ is the scale at which the coupling constants are defined. Since evolution in the tree level potential implies ϕ(a_p)∼100M, we will set Λ̅=100M. The above potential may be understood as the loop corrections generating ϕ^8/Λ^4 and ρ_0(sech^2(ϕ/M)tanh(ϕ/M))^2 non-renormalizable interaction terms. To ensure perturbativity, their respective (running) couplings should be ≲1: g^2/64π^2(ln(g/Λ̅^2Λ^2ϕ^4)-3/2) ≲1 1/64π^2(4ρ_0/M^4)(ln(2ρ_0/Λ̅^2M^2sech^2(ϕ/M)tanh(ϕ/M))-3/2) ≲1. For our choice of parameters ρ̅=10^-6 and g̅=10^-2, the above are satisfied for g≲1 and H_I/M≪1 (which is also required by the resolution condition Eq.(<ref>)), for the range of values of ϕ during its evolution in the 1-loop corrected potential Eq.(<ref>). As we will see below, evolution in this potential implies ϕ(a_p)∼10^4M. To compute the evolution of ϕ on the corrected potential, note that the third term in Eq.(<ref>) has a negligible effect and may be ignored. This is because for ϕ/M≫1, the third term is exponentially suppressed relative to the other two, whereas for ϕ/M≲1, the third term is suppressed by ρ̅(H_I/M)^2≪1 and ρ̅^2/g̅^2≪1 w.r.t. the first two terms respectively. Hence the potential seen by ϕ is V_eff(ϕ)≈ρ_0(1-tanh(ϕ/M))+g^2/64π^2(ln(g/Λ̅^2Λ^2ϕ^4)-3/2)ϕ^8/Λ^4. For small ϕ values, the coupling of the ϕ^8 interaction term is negative and gradually grows positive, giving the potential the shape shown in figure (<ref>.a). The logarithmic dependence of the coupling therefore introduces a minimum in the potential seen by ϕ, around which the field is trapped and oscillates as seen in the numerical computation of figure (<ref>.b). Since the first term in Eq.(<ref>) becomes exponentially suppressed for ϕ/M≫1, the minimum is determined by the second term as ϕ_min/M =e^{1/4}((gM^2/Λ^2)/10^4)^{-1/4}. For H_I/M=10^-5(N/60) satisfying the resolution condition in Eq.(<ref>) and the coupling choice gM^4/(H_I^2Λ^2)=10^-2 (from the isolation of the nonadiabaticity explained in Eq. (<ref>)), the above implies ϕ_min/M∼1.3×10^4(N/60)^{-1/2}. If the ϕ oscillations die down and the field takes values ϕ(a_p)≈ϕ_min today, then the estimated density of the χ dark matter is given by Ω_χh^2 =0.27(T_rh/10^7GeV)(H_I/10^2GeV)(M/10^9GeV)(1+0.21log(ρ_0/(M^2H_I^2)/10^-6)) where the mass of the dark matter is √(∂_χ^2V(ϕ_min,χ=0))=1.6×10^11(M/10^9GeV) GeV. The fiducial choice M=10^9 GeV is an arbitrary association with the intermediate scale. It is easy to slide this number higher without upsetting the conditions required for the validity of Eq. (<ref>). For every choice of M and H_I, there exists a range of the g/Λ^2 scale bounded by the isolation of the nonadiabaticity explained in Eq. (<ref>), and for the fiducial values of M∼10^9GeV and H_I∼10^2GeV, we have Λ/√(g)∼10^17GeV, which implies that the designer coupling of ϕ to the dark matter may come from a UV model construction near the GUT scale.
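The location of the minimum can be cross-checked numerically. The following sketch (fiducial values H_I/M=10^-5, ρ̅=10^-6, gM^4/(H_I^2Λ^2)=10^-2, Λ̅=100M; not the code used for the figures) minimizes the approximate V_eff(ϕ) above and compares with the analytic ϕ_min:

```python
# Hedged numerical check of phi_min against the analytic formula quoted above.
import numpy as np
from scipy.optimize import minimize_scalar

M, H_I = 1.0, 1e-5                   # units of M; H_I/M = 1e-5 as in the text
rho0 = 1e-6 * M**2 * H_I**2          # rho_bar = 1e-6
gM2_over_L2 = 1e-2 * H_I**2 / M**2   # from g M^4/(H_I^2 Lambda^2) = 1e-2
Lbar = 100.0 * M                     # renormalization point Lambda_bar = 100 M

def V_eff(phi):
    # rewrite g^2 phi^8/Lambda^4 = (g M^2/Lambda^2)^2 (phi/M)^8 M^4 and
    # g phi^4/(Lbar^2 Lambda^2) = (g M^2/Lambda^2) (phi/M)^4 (M/Lbar)^2
    x = phi / M
    log_arg = gM2_over_L2 * x**4 * (M / Lbar)**2
    loop = gM2_over_L2**2 * x**8 * M**4 / (64.0 * np.pi**2) * (np.log(log_arg) - 1.5)
    return rho0 * (1.0 - np.tanh(x)) + loop

res = minimize_scalar(V_eff, bounds=(1e3 * M, 1e5 * M), method="bounded")
phi_min_analytic = np.exp(0.25) * (gM2_over_L2 / 1e4)**(-0.25) * M
print(f"numerical phi_min/M = {res.x / M:.3e}, analytic = {phi_min_analytic / M:.3e}")
```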
Note, if the oscillations in the field ϕ do not die down sufficiently, then it could make a significant contribution to the dark matter density today. To estimate a bound on Ω_ϕh^2, we numerically solve for the evolution of the averaged energy density of the field ⟨ρ_ϕ⟩, where the average is taken over the time duration τ of a few oscillations but such that τ≪ H^-1. For systems like the example here, where the time period of coherent oscillations T≪ H^-1, averaging over the duration of several oscillations defines an approximate equation of state for ρ_ϕ - dependent on the potential V_eff(ϕ) and given by w(⟨ρ_ϕ⟩)=⟨ϕ̇^2/2-(V_eff(ϕ)-V_eff(ϕ_min))⟩/⟨ϕ̇^2/2+(V_eff(ϕ)-V_eff(ϕ_min))⟩ where the average in the above is also taken w.r.t τ and the potential has been shifted by a constant such that the shifted potential is positive definite.[For our scenario with parameters, gM^4=10^-2H_I^2Λ^2 , ρ_o=10^-6M^2H_I^2 and H_I/M=10^-5, the ratio of the oscillation frequency to the Hubble constant may be estimated as (V_eff”(ϕ_min))^1/2/H(t_m)≈10^9≫1 where the Hubble constant has been evaluated using a numerical solution at the earliest times when the oscillations begin.] The equation of state w(⟨ρ_ϕ⟩) as defined above is a function of the energy density ⟨ρ_ϕ⟩ and is only approximately constant for Δ t≪ H^-1. The expression for w(⟨ρ_ϕ⟩) may be further simplified using the equation of motion (EOM) of ϕ. For the duration τ for which the Hubble friction can be neglected, the EOM implies ϕ̈+V'(ϕ)≈0. In our situation, it is easy to estimate that the number of oscillations in 1/H time period is large. In such situations, we can derive the following virial theorem ⟨ϕ̇^2⟩≈⟨ϕ V'(ϕ)⟩ where the average is over several oscillations. This implies w(⟨ρ_ϕ⟩) ≈⟨ϕ V'(ϕ)⟩/2-⟨ V_eff(ϕ)-V_eff(ϕ_min)⟩/⟨ϕ V'(ϕ)⟩/2+⟨ V_eff(ϕ)+V_eff(ϕ_min)⟩ and the energy conservation equation d⟨ρ_ϕ⟩/dt+3H⟨ρ_ϕ⟩(1+w(⟨ρ_ϕ⟩)) =0 . Numerically estimating w(⟨ρ_ϕ⟩) followed by numerically solving the above differential equation gives Fig. <ref>. Here we see that as Hubble friction removes energy from the field oscillations, the potential seen by ϕ reduces to a quadratic around ϕ_min and the equation of state approaches that of a matter dominated universe, i.e. w→0. For the parameters used here, w≈0 for t_m∼10^6H_I^-1. Using the matter-like evolution of the field density, the energy density today may be estimated from the numerically found energy density around t_m as Ω_ϕh^2∼1.3×10^14(10^7GeV/T_rh)^-1(H_I/10^2GeV)^2(M/10^9GeV)^2≫1. To ensure Ω_ϕh^2≲1, one can introduce the following coupling to photons ℒ_ϕ⊃ϕ/2M_2F_μνF^μν. When the oscillations of the scalar field become matter-like, the field decays dominantly through the ϕ→γγ channel into radiation. The corresponding decay rate is given by Γ=1/32πm^3/M_2^2 where m=(V_eff”(ϕ_min))^1/2is the mass of the non-relativistic dark matter particle. To ensure that the field does not contribute significantly to the energy density during BBN, it should ideally decay away before this time: Γ≳100MeV^2/M_p≫ H(t_BBN). Including the effects of this decay, the energy density in the coherently oscillating field now dilutes as ρ_ϕ(t)≈[a(t_m)/a(t)]^3ρ_ϕ(t_m)exp[-Γ_ϕ(t-t_m)]. Comparing this to the energy density in radiation at the time of BBN, we find ρ_ϕ(t_BBN)/ρ_rad(t_BBN)∼exp[-Γ_ϕ(t_BBN-t_m)+42.3](T_rh/10^7GeV)(M/10^9GeV)^2≪1 since for Γ_ϕ∼100MeV^2/M_p, the exponent of the decay factor is Γ_ϕ(t_BBN-t_m)∼3.8×10^3(H_I/10^2GeV)^7/3(T_rh/10^7GeV)^-8/3. 
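The virial argument above can be illustrated with a short numerical average over oscillations in a monomial stand-in potential V∝|ϕ|^n (friction neglected over the averaging window τ≪ H^-1, as in the text); the standard result ⟨w⟩=(n-2)/(n+2) makes explicit why the late-time oscillations about the quadratic minimum behave as matter, w→0:

```python
# Illustrative sketch (monomial potential as a stand-in for V_eff about its minimum):
# cycle-average w = <KE - V>/<KE + V> for phi'' = -V'(phi) and compare with (n-2)/(n+2).
import numpy as np
from scipy.integrate import solve_ivp

def averaged_w(n, t_max=200.0, samples=20001):
    V = lambda phi: np.abs(phi)**n
    dV = lambda phi: n * np.sign(phi) * np.abs(phi)**(n - 1)
    t = np.linspace(0.0, t_max, samples)
    sol = solve_ivp(lambda _, y: [y[1], -dV(y[0])], (0.0, t_max), [1.0, 0.0],
                    t_eval=t, rtol=1e-9, atol=1e-12)
    ke, pe = 0.5 * sol.y[1]**2, V(sol.y[0])
    return (ke - pe).mean() / (ke + pe).mean()

for n in (2, 4, 8):
    print(f"n = {n}: <w> = {averaged_w(n):+.3f}   (n-2)/(n+2) = {(n - 2) / (n + 2):+.3f}")
```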
§ COVARIANCE OF LINEAR DIFFERENTIAL OPERATOR INDUCING A SYMMETRY REPRESENTATION In this section, we will show how the covariance of a non-first-order linear differential operator 𝒪_x under coordinate change x̅=Lx will induce a symmetry representation of the propagator governing a homogeneous differential equation rewritten in a first order formalism. We will do this in two steps. In the first step, we show how such covariance leads to a generation of new solutions. In the second step, we use this solution generation technique together with a judicious basis choice to find a first order formalism propagator symmetry. Finally, we apply this to our differential equation of interest. The key nontriviality will be the transformation property of a judiciously chosen set of basis functions under the L transform. §.§ Generating new solutions In this subsection, we show how the covariance of a linear differential operator 𝒪_x under a coordinate transform can generate new solutions to the homogeneous differential equation governed by 𝒪_x. Let 𝒪_x be a linear differential operator and let 𝒪_xχ(x)=0 be a homogeneous linear differential equation. Define a linear transformation x̅=Lx which when substituted into Eq. (<ref>) gives 𝒪_x=𝒪_L^-1x̅. We then see Eq. (<ref>) by algebraic substitution of the transformations that 𝒪_L^-1x̅χ(L^-1x̅)=0. We define there to be a homogeneous differential equation covariance representation if the operator satisfies 𝒪_L^-1x̅=D(L)𝒪_x̅. where D(L) is invertible and commutes with 𝒪_x̅. Eq. (<ref>) turns Eq. (<ref>) into D(L)𝒪_x̅χ(L^-1x̅)=0. Note that if we now drop the bar on x̅ in Eq. (<ref>), we are considering a different solution: the new solution χ(L^-1x) to 𝒪_xχ(L^-1x)=0 satisfies the boundary condition χ(L^-1x_P)=χ_P∂_xχ(x)|_x=L^-1x_P=(∂_xχ)_P where the right hand side contains the same values that would have been imposed in the original solution at x_P (and not L^-1x_P). Note that by simply dropping the bar, we have assumed χ is an object whose definition does not correspond to components of a coordinate dependent basis. In contrast, we could have had D(L)𝒪_x̅χ^μ(L^-1x̅)=0 coming from D(L)𝒪_x̅(χ^μ(L^-1x̅)e_μ)=0 which is equivalent to D(L)𝒪_x̅(χ^μ(L^-1x̅)e̅_ν(R^-1(L))_νμ^ν)=0 giving rise to D(L)𝒪_x̅(χ̅^ν(x̅)e̅_ν)=0χ̅^ν(x̅)≡(R^-1(L))_νμ^νχ^μ(L^-1x̅) where e_μ is a coordinate dependent basis and R is a matrix that accounts for its coordinate dependence. In the latter case, we would have the new solution being(R^-1(L))_νμ^νχ^μ(L^-1x) instead of χ^ν(L^-1x) assuming x̅ and x cover the same set of points. Summarizing, according to Eq. (<ref>), because D^-1(L) leaves the 0 on the right hand side of a homogeneous linear differential equation invariant, χ(L^-1x̅) also satisfies the differential equation (Eq. (<ref>)). Since χ_P and (∂_xχ)_P represents an arbitrary boundary condition data, this solution generating mechanism may be used to generate a propagator symmetry. We will turn to this next. §.§ 1st order formalism propagator symmetry Here we will apply Eq. (<ref>) to find a propagator symmetry in a first order differential equation rewriting of a second order complex differential equation. Suppose we rewrite the second order differential equation [∂_z^2+ω^2(z)]χ(z)=0 where z is a complex number as a first order differential equation ∂_zV(z)=M(z)V(z) where χ(z)=F(z)V(z) and F is a fixed basis of functions defined to be F(z)=(ℱ_+(z),ℱ_-(z)). 
For the purposes of our induced representation construction, we choose F to satisfy a particular representation F(L^-1z)=D_F(L^-1)F(z)E_F(L^-1) where D_F(L^-1) is a complex number and E_F(L^-1) is a matrix.[Note that different choices of D_F(L^-1) can lead to different choices of E_F(L^-1). We will see that the end result depends on E_F^-1(L^-1)...E_F(L^-1) which is invariant under the choice made for D_F(L^-1).] Whether or not this choice can be made for a nontrivial matrix E_F(L^-1) is the key nontriviality of the construction. We will see that in our application of this formalism to a particular class of ω^2, the WKB basis for F will generate a nontrivial E_F belonging to a nontrivial S_2 representation. With a different boundary condition as discussed in Eqs. (<ref>) and (<ref>), we generate a new solution by identifying χ at L^-1z with χ_2 at z: χ_2(z)=χ(L^-1z). In the F basis, this becomes F(z)V_2(z)=F(L^-1z)V(L^-1z) where χ_2(z)=F(z)V_2(z) sharing the same basis function as χ(z). Using Eq. (<ref>), Eq. (<ref>) becomes F(z)V_2(z)=D_F(L^-1)F(z)E_F(L^-1)V(L^-1z). Suppose V_2(z) corresponds to data propagated from z_0 denoted as V_2(z_0): V_2(z)=U(z,z_0)V_2(z_0) where U is the propagator solution to Eq. (<ref>): U(z,z_0)≡ P[e^∫_C(z_0,z)dz'M(z')] with the path ordering symbol P along the path C starting at z_0 and ending on z. Putting this into Eq. (<ref>) gives F(z)U(z,z_0)V_2(z_0)=D_F(L^-1)F(z)E_F(L^-1)V(L^-1z). Similarly, let V(L^-1z) correspond to data V(L^-1z_0) propagated from L^-1z_0: F(z)U(z,z_0)V_2(z_0)=D_F(L^-1)F(z)E_F(L^-1)U(L^-1z,L^-1z_0)V(L^-1z_0). Setting z=z_0 in this equation, we find F(z_0)V_2(z_0)=D_F(L^-1)F(z_0)E_F(L^-1)V(L^-1z_0) where we used U(z_0,z_0)=1. The general solution to this equation is V_2(z_0)=Z+D_F(L^-1)E_F(L^-1)V(L^-1z_0) where Z solves the zero mode equation F(z_0)Z=0. Writing more explicitly F(z_0)=(ℱ_+,ℱ_-) we can parameterize the general solution to Eq. (<ref>) as Z=f_s(z_0)F_⊥(z_0) where f_s is an arbitrary scaling function of z_0 and F_⊥(z_0)≡(ℱ_-(z_0),-ℱ_+(z_0)). Putting Eq. (<ref>) into Eq. (<ref>) therefore becomes D_F(L^-1)F(z)[E_F(L^-1)U(L^-1z,L^-1z_0)-U(z,z_0)E_F(L^-1)]V(L^-1z_0) =F(z)U(z,z_0)F_⊥(z_0)f_s(z_0) Choosing f_s(z_0) to vary independently of V(L^-1z_0), we find D_F(L^-1)F(z)[E_F(L^-1)U(L^-1z,L^-1z_0)-U(z,z_0)E_F(L^-1)]V(L^-1z_0)=0 Since V(L^-1z_0) is arbitrary, we find F(z)[E_F(L^-1)U(L^-1z,L^-1z_0)-U(z,z_0)E_F(L^-1)]=0. Hence, up to ambiguities of the projection, we are motivated to define a symmetry transformation U(z,z_0)=E_F(L^-1)U(L^-1z,L^-1z_0)E_F^-1(L^-1) which as anticipated before does not depend on different choices of the phases D_F(L^-1) since E_F...E_F^-1 cancels any such factors. Expanding U to linear order in M, this also implies a differential relationship of dzM(z)=E_F(L^-1)L^-1dzM(L^-1z)E_F^-1(L^-1). In summary, we considered situations where the second order ordinary differential equation governed by the differential operator 𝒪_x has a symmetry representation D(L) under the coordinate transform x̅=Lx. This can be written in terms of the first order formalism of Eq. (<ref>) with a judicious basis choice of F, and one can compute the representation of L acting on F as Eq. (<ref>) which involves the matrix E_F(L^-1). If E_F(L^-1) is nontrivial, it induces a useful symmetry of the propagator through Eq. (<ref>). §.§ Our model Consider 𝒪_(z-z_0)=∂_(z-z_0)^2+A(z-z_0)^n. 
Under the rotation z̅-z̅_0=L(z-z_0) where L≡ e^iθ, the operator transforms as 𝒪_(z-z_0)=e^i2θ[∂_z̅-z̅_0^2+Ae^-i(n+2)θ(z̅-z̅_0)^n]. Hence, we see that if θ=2π/(n+2), the L in Eq. (<ref>) has been constructed. With a first order formalism written in terms of the WKB basis functions, we choose ℱ_±(z)=f_±(z) in Eq. (<ref>) where f_±(z) are WKB basis functions of Eq. (<ref>) with (η complexified and) the origin taken as (z_*)=0. Under L, the basis vector F transforms as F(L^-1z)=iL^-1/2F(z)B where B=([ 0 1; 1 0 ]) which allows us to choose E_F(L^-1)=B in Eq. (<ref>). Hence, Eq. (<ref>) becomes U̅_g(z̅,z̅_0)=BU_g(z,z_0)B^-1, where U_g is the propagator matrix of Eq. (<ref>) in any gauge g. It is important to recognize that the representation given by Eq. (<ref>) is not generated by a coordinate transformation Eq. (<ref>) acting on Eq. (<ref>) in every gauge. This stems from the ambiguities of the projection effects in going from Eq. (<ref>) to (<ref>). The representation given by Eq. (<ref>) is generated by the coordinate transformation in the 0-gauge and the F^2-gauge. The linearized version corresponding to Eq. (<ref>) becomes dz̅M(z̅)=BdzM(z)B^-1. § AN ASYMPTOTIC PROPERTY OF OFF DIAGONAL PROPAGATOR In this section, we present an argument for the vanishing of μ^p>0U_21 in the limit μ→0 (also applicable to μ^p>0U_12) where U_21 and U_12 are off-diagonal propagators connecting adjacent anti-Stokes lines. Start with χ(z)=F(z)V(z)=F(z)U(z,z_0)V(z_0) which is postulated to be a solution to the mode equation (where we have suppressed the wave vector k to reduce notational clutter). Now, choose V(z_0) to be V(z_0)=([ 1; 0 ]) such that the right hand side of Eq. (<ref>) becomes F(z)U(z,z_0)V(z_0)=F(z)([ U(z,z_0)_11; U(z,z_0)_21 ]). The left hand side of Eq. (<ref>) on the annulus is known to have an asymptotic expansion of χ(z)∼ F(z)V_r where V_r=O(μ^0) as long as z is either in a single Stokes sector (defined to be a region on the annulus bounded by two anti-Stokes lines with at least one Stokes line in between) or is on an (anti-)Stokes line as the asymptotic expansion is taken. In other words, all solutions including Eq. (<ref>) can be matched with an asymptotic expansion satisfying Eq. (<ref>) F(z)O(μ^0)∼ F(z)([ U(z,z_0)_11; U(z,z_0)_21 ]) as long as z is restricted to a particular region in the complex plane. Let z_0 be on an anti-Stokes line and let z=z_0γ where γ=exp(2π i/(n+2)): F(z_0γ)O(μ^0)∼ F(z_0γ)([ U(z_0γ,z_0)_11; U(z_0γ,z_0)_21 ]) which makes U(z_0γ,z_0)_21=O(μ^0). This implies lim_μ→0μ^p>0U_21=0. Note that although this conclusion is also implied in <cit.>, our line of reasoning is distinct from what is presented there in that <cit.> uses the properties of the exact power series solution and in our language utilizes a particular gauge.
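The representation property F(L^-1z)=iL^-1/2F(z)B can be spot-checked numerically for the k=0 WKB basis of ω^2=Az^n. The sketch below (illustrative choices n=4, A=1, and a test point chosen away from branch cuts; these are our assumptions, not values from the text) verifies the identity that underlies U̅_g=BU_gB^-1:

```python
# Numerical spot-check of F(L^{-1} z) = i L^{-1/2} F(z) B for the k = 0 WKB pair of omega^2 = A z^n.
import numpy as np

n, A = 4, 1.0
L = np.exp(2j * np.pi / (n + 2))

def F(z):
    # WKB pair f_pm(z) = exp(mp i W(z)) / sqrt(2 omega), with W = sqrt(A) z^{n/2+1}/(n/2+1)
    omega = np.sqrt(A) * z**(n / 2)
    W = np.sqrt(A) * z**(n / 2 + 1) / (n / 2 + 1)
    return np.array([np.exp(-1j * W), np.exp(+1j * W)]) / np.sqrt(2 * omega)

B = np.array([[0, 1], [1, 0]])
z = 1.7 + 0.4j                               # arbitrary test point away from z = 0
lhs = F(z / L)
rhs = 1j * L**(-0.5) * F(z) @ B
print(np.allclose(lhs, rhs))                 # -> True
```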
http://arxiv.org/abs/2408.12080v1
20240822024021
Exploring the Feasibility of Automated Data Standardization using Large Language Models for Seamless Positioning
[ "Max J. L. Lee", "Ju Lin", "Li-Ta Hsu" ]
eess.SP
[ "eess.SP", "cs.AI", "cs.NI" ]
Exploring the Feasibility of Automated Data Standardization using Large Language Models for Seamless Positioning Max J. L. Lee Department of Aeronautical and Aviation Engineering The Hong Kong Polytechnic University Hong Kong maxjl.lee@connect.polyu.hk Ju Lin Department of Aeronautical and Aviation Engineering The Hong Kong Polytechnic University Hong Kong ju.lin@connect.polyu.hk Li-Ta Hsu Department of Aeronautical and Aviation Engineering The Hong Kong Polytechnic University Hong Kong lt.hsu@polyu.edu.hk August 26, 2024 ============================================================================================================================================================================================================================================================================================================================================================================================================================================ § ABSTRACT We propose a feasibility study for real-time automated data standardization leveraging Large Language Models (LLMs) to enhance seamless positioning systems in IoT environments. By integrating and standardizing heterogeneous sensor data from smartphones, IoT devices, and dedicated systems such as Ultra-Wideband (UWB), our study ensures data compatibility and improves positioning accuracy using the Extended Kalman Filter (EKF). The core components include the Intelligent Data Standardization Module (IDSM), which employs a fine-tuned LLM to convert varied sensor data into a standardized format, and the Transformation Rule Generation Module (TRGM), which automates the creation of transformation rules and scripts for ongoing data standardization. Evaluated in real-time environments, our study demonstrates adaptability and scalability, enhancing operational efficiency and accuracy in seamless navigation. This study underscores the potential of advanced LLMs in overcoming sensor data integration complexities, paving the way for more scalable and precise IoT navigation solutions. Data compatibility, data standardization, Extended Kalman Filter (EKF), heterogeneous sensor integration, indoor navigation, Internet of Things (IoT), Large Language Models (LLMs), positioning systems, sensor data fusion, UWB. § INTRODUCTION Accurate seamless positioning is paramount in the era of ubiquitous computing and the Internet of Things (IoT), enabling critical applications such as navigation, asset tracking, and location-based services <cit.>. The proliferation of IoT devices has exponentially increased the demand for precise and reliable positioning systems. Researchers have explored various indoor positioning techniques, including Bluetooth Low Energy (BLE) beacons <cit.>, Ultra-Wideband (UWB) ranging <cit.>, and inertial sensor-based dead reckoning <cit.>. Each method presents unique strengths and limitations regarding accuracy, coverage, scalability, and cost. BLE beacons provide a cost-effective solution with reasonable accuracy but require dense deployment for high precision <cit.>. UWB offers high accuracy and low latency, suitable for applications needing precise location information, such as industrial asset tracking <cit.>, but is generally more expensive and has a limited range compared to BLE-based systems. Inertial sensor-based dead reckoning relies on accelerometers, gyroscopes, and magnetometers to estimate position changes but suffers from cumulative errors over time, necessitating periodic calibration with other positioning methods <cit.>. 
Fusing multiple positioning technologies has become a promising approach to enhance performance and robustness, especially in complex urban environments <cit.>. Sensor fusion leverages the complementary strengths of different technologies to provide a more accurate and reliable positioning solution. For instance, combining BLE beacon data with inertial sensor data can compensate for the weaknesses of each method, resulting in improved accuracy and robustness. However, integrating diverse positioning sensors poses significant challenges due to heterogeneous data formats and the need for sophisticated algorithms to handle uncertainties, noise, and interdependencies among different data sources <cit.>. Traditional sensor fusion systems often rely on manual feature engineering and domain expertise, limiting their scalability and adaptability to new sensor types and environments <cit.>. Recent advancements in artificial intelligence (AI) and natural language processing (NLP), particularly Large Language Models (LLMs) like GPT-4-0613, offer promising solutions for the standardization of heterogeneous sensor data. These models have demonstrated remarkable capabilities in understanding and generating human-like text, inspiring researchers to explore their potential in other domains. Leveraging LLMs for automated data standardization can significantly reduce manual intervention and enhance scalability. §.§ Contributions This work makes several significant contributions to the field of seamless positioning and IoT applications: * Innovative Application of LLMs: Using LLMs for automating the standardization of heterogeneous sensor data is a new application extending their capabilities beyond traditional natural language processing tasks <cit.>. * Enhanced Scalability and Adaptability: Automating the data standardization process reduces the need for manual feature engineering and domain expertise, making the study more scalable and adaptable to new sensor types and environments. * Improved Accuracy: Integrating standardized data with the Extended Kalman Filter (EKF) enhances the accuracy of the positioning system, providing more accurate and reliable positioning estimates. § FEASIBILITY STUDY OVERVIEW The flowchart of the proposed feasibility study, depicted in Fig. <ref>, illustrates the iterative standardization and validation process essential for enhancing seamless positioning systems. The study begins with collecting unstandardized sensor data from sources like smartphones, IoT devices, and UWB tags. This heterogeneous data is processed by the Intelligent Data Standardization Module (IDSM), leveraging a fine-tuned Large Language Model (LLM) to automate standardization. The IDSM segments incoming data by sensor type and normalizes complex elements (e.g., timestamps to UNIX nanoseconds). The standardized data is formatted according to predefined specifications, as shown in Table <ref>. Unit tests ensure the accuracy and integrity of the standardized data before progressing. Following standardization, the Transformation Rule Generation Module (TRGM) creates transformation rules, generating scripts for new sensor data standardization, enhancing scalability. The scripts undergo unit testing for functionality and reliability. Finally, the Extended Kalman Filter (EKF) integrates standardized data from multiple sensors, improving the system's precision and robustness. Covariance matrices represent uncertainties, which are manually specified but may require adaptive approaches for real-world scenarios. 
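For concreteness, the deterministic part of this step can be pictured as follows. The sketch below is purely illustrative: the actual standardized schema is the one in Table <ref> and the IDSM performs the conversion with a fine-tuned LLM, whereas here hypothetical field names are used and only the timestamp normalization to UNIX nanoseconds and a schema-based unit test are mimicked:

```python
# Illustrative sketch (hypothetical field names; not the paper's schema or pipeline).
from datetime import datetime
from jsonschema import Draft7Validator

def to_unix_ns(ts):
    """Accept ISO-8601 strings or epoch seconds/milliseconds; return UNIX nanoseconds."""
    if isinstance(ts, str):
        return int(datetime.fromisoformat(ts.replace("Z", "+00:00")).timestamp() * 1e9)
    ts = float(ts)
    return int(ts * 1e9) if ts < 1e12 else int(ts * 1e6)   # crude seconds-vs-milliseconds guess

def standardize(record):
    # hypothetical standardized layout: {sensor_type, timestamp_ns, device_id, payload}
    return {"sensor_type": str(record.get("type", "unknown")).lower(),
            "timestamp_ns": to_unix_ns(record["time"]),
            "device_id": str(record.get("id", "n/a")),
            "payload": {k: v for k, v in record.items() if k not in ("type", "time", "id")}}

SCHEMA = {  # illustrative stand-in for the standardized format of Table <ref>
    "type": "object",
    "required": ["sensor_type", "timestamp_ns", "device_id", "payload"],
    "properties": {"sensor_type": {"type": "string"},
                   "timestamp_ns": {"type": "integer", "minimum": 0},
                   "device_id": {"type": "string"},
                   "payload": {"type": "object"}}}

def unit_test(record):
    errors = [e.message for e in Draft7Validator(SCHEMA).iter_errors(record)]
    return len(errors) == 0, errors          # plays the role of the validation pair (nu, e)

raw = [{"type": "UWB", "time": "2024-05-21T03:14:15.926Z", "id": 7, "range_m": 3.21},
       {"type": "imu", "time": 1716261255.926, "id": "phone-1", "acc": [0.1, 0.0, 9.8]}]
for rec in raw:
    std = standardize(rec)
    print(std, unit_test(std)[0])
```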
§ INTELLIGENT DATA STANDARDIZATION MODULE (IDSM) The IDSM's primary objective is to transform heterogeneous sensor data 𝒟 into a standardized format 𝒮. The raw data collected from various sensors is represented as: 𝒟 = {d_1, d_2, …, d_n} where d_i denotes the data from the i-th sensor. The standardization process is expressed as: 𝒮 = ℱ_IDSM(𝒟) The IDSM was fine-tuned using a curated dataset with 100 complex examples, addressing specific challenges in sensor data standardization. The standardized data schema structure is detailed in Table <ref>. Table <ref> highlights various edge cases and scenarios. The IDSM was developed and fine-tuned using the Azure platform and the GPT-4-0613 model. Azure's robust infrastructure provided the computational resources necessary for handling large datasets and performing iterative validations efficiently. This integration allowed for seamless scaling and deployment of the IDSM, ensuring it could handle real-time data processing requirements. §.§ Training and Validation Performance The IDSM's performance was assessed through training and validation metrics. Initially, the training loss was 0.3484 (Fig. <ref>), decreasing steadily to zero by step 27. Training accuracy started at 93.38%, reaching 100% by step 14 and maintaining this level. Validation metrics showed similar trends. The validation loss started at 0.1724, briefly increased to 0.6984 at step 2, then declined to nearly zero by step 27. Validation accuracy began at 96.14%, achieving near 100% by step 27. These results highlight the model's robust learning and generalization capabilities, effectively standardizing diverse sensor data in real-world scenarios. § STANDARDIZED DATASET UNIT TESTS The unit tests validate the standardized dataset 𝒮 against predefined JSON schemas for various sensor types. The validation function is expressed as: (ν, e) = 𝒱_IDSM(𝒮, σ) where ν indicates whether 𝒮 conforms to schema σ, and e contains details of validation errors. The iterative process ensures 𝒮 conforms to the schema through multiple cycles if necessary, up to a maximum of five iterations, based on empirical evidence and practical considerations. Most datasets (24 out of 30) required only one iteration for successful validation. Two datasets required up to four iterations, while four datasets did not achieve successful validation within the iteration limit of five. This rapid initial convergence supports five iterations as an optimal limit. These results underscore the robustness and effectiveness of the IDSM in standardizing diverse sensor data in most cases, ensuring high data quality and consistency. However, the datasets that did not validate within the iteration limit highlight areas for potential improvement in handling more complex data entries. § TRANSFORMATION RULES GENERATION MODULE (TRGM) The TRGM automates deriving transformation rules as detailed in Table <ref> for JSON structures using the GPT-4-0613 model. It converts input JSON files into a specified output format, reducing manual intervention. The process is represented as: ℛ = ℱ_TRGM(𝒮, ℐ) where 𝒮 is the standardized data and ℐ is the input JSON structure. The transformation rules are then used to create a "Transformation Script for Standardization" 𝒯, ensuring consistency in data transformation tasks. Table <ref> shows the schema structure for the transformation rules used by the TRGM. § TRANSFORMATION SCRIPT UNIT TESTS The unit tests for the TRGM validate the accuracy of the transformation scripts. 
The validation function is described as: (ν, e) = 𝒱_TRGM(𝒯, 𝒮) where ν indicates whether the output from 𝒯 matches the standardized data 𝒮, and e contains error details. The iterative validation process refines the transformation rules through successive refinements until the correct output is achieved. § SENSOR FUSION The Sensor Fusion module employs an Extended Kalman Filter (EKF) to integrate data from multiple sensors, providing accurate, real-time estimates of 3D positions and velocities <cit.>. §.§ State Vector and Transition Matrix The state vector 𝐱 includes 3D positions and velocities: 𝐱 = [ x; y; z; v_x; v_y; v_z ] The state transition matrix 𝐅_k governs the evolution of this state vector: 𝐅_k = [ 𝐈_3 × 3 Δ t 𝐈_3 × 3; 0_3 × 3 𝐈_3 × 3 ] where Δ t is the time interval between updates. §.§ Measurement Model The EKF utilizes measurements from multiple sensors. Table <ref> summarizes the measurement vectors and covariance matrices. §.§.§ Measurement Variance Calculation Measurement variances are derived from experimental results. For the GNSS receiver: σ_GNSS^2 = 25.02^2 + 5.47^2 = 655.00 m^2 For the UWB sensor: σ_UWB^2 = 0.79^2 + 0.62^2 = 1.00 m^2 For the camera: σ_cam^2 = 0.32^2 + 0.23^2 = 0.15 m^2 §.§ Control Input The control input vector 𝐮 includes accelerations, angular velocities, and magnetometer readings from the IMU. The orientation matrix 𝐂 transforms the accelerations from the IMU's frame to the NED frame. The control input matrix 𝐁_IMU integrates these transformed accelerations into the state vector, accounting for the time step Δ t. § EXPERIMENT §.§ Experiment Setup The experiment was conducted in a dynamic environment transitioning from outdoors to indoors, as depicted in Fig. <ref>. This location, characterized by tall buildings, represents urbanized areas where enhanced positioning accuracy through sensor fusion is essential. The total length of the ground truth path was approximately 60 meters. Data collection was performed using four different sensors, each on dedicated devices to simulate sensor fusion from various sources. Table <ref> lists the sensors and their fused counterparts. §.§ Experiment Results The proposed feasibility study was rigorously tested using sensor data collected from multiple sources, including smartphones, IoT devices, and UWB tags. The initial dataset comprised 10,000 unstandardized data entries, each containing various elements such as timestamps, sensor readings, and device identifiers. §.§.§ Positioning System Accuracy Positioning accuracy was evaluated by comparing the results obtained from different methods against the ground truth data. The error was measured by calculating the shortest distance between our solution and the ground truth path. The results are presented in Table <ref> and visualized in Fig. <ref>. Table <ref> and Fig. <ref> demonstrate the benefits of integrating multiple positioning technologies. The standalone GNSS system shows a mean error of 25.02 meters, revealing significant deviations due to urban multipath effects. Integrating GNSS with an IMU reduces the mean error to 23.24 meters, but errors remain substantial. The UWB system shows improved performance indoors, with a mean error of 0.79 meters, further reduced to 0.69 meters when combined with an IMU. The Visual Positioning System (VPS) shows exceptional accuracy in outdoor settings, with a mean error of 0.32 meters. When integrated with an IMU, VPS maintains high accuracy with a mean error of 0.33 meters. 
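Before turning to the accuracy results, the fusion step described in the Sensor Fusion section above can be summarized in a minimal sketch. Since the GNSS/UWB/VPS measurements enter as positions, the measurement model is linear and the EKF update reduces to a standard Kalman step; the process-noise model and IMU handling below are simplified placeholders rather than the system's actual implementation:

```python
# Minimal position/velocity filter sketch (assumed process noise; position-only updates).
import numpy as np

class PositionVelocityEKF:
    def __init__(self, q=0.1):
        self.x = np.zeros(6)                       # [x, y, z, vx, vy, vz]
        self.P = np.eye(6) * 100.0
        self.q = q                                 # assumed acceleration noise density

    def predict(self, dt, accel_ned=np.zeros(3)):
        F = np.eye(6)
        F[:3, 3:] = dt * np.eye(3)                 # F_k = [[I, dt*I], [0, I]]
        B = np.vstack([0.5 * dt**2 * np.eye(3), dt * np.eye(3)])
        self.x = F @ self.x + B @ accel_ned        # IMU accelerations as control input
        self.P = F @ self.P @ F.T + self.q * B @ B.T

    def update_position(self, z_pos, var):
        H = np.hstack([np.eye(3), np.zeros((3, 3))])
        R = var * np.eye(3)                        # e.g. 655.0 (GNSS), 1.0 (UWB), 0.15 (VPS) m^2
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z_pos - H @ self.x)
        self.P = (np.eye(6) - K @ H) @ self.P

ekf = PositionVelocityEKF()
ekf.predict(dt=0.1)
ekf.update_position(np.array([1.0, 2.0, 0.5]), var=0.15)   # a VPS fix
print(ekf.x[:3])
```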
Combining GNSS, VPS, UWB, and IMU leverages the strengths of each technology, providing reliable and accurate real-time positioning in urban settings. The GNSS + VPS + UWB + IMU combination achieves a mean error of 0.33 meters, demonstrating the effectiveness of this integrated approach. Fig. <ref> illustrates the dynamic performance of each method. The UWB error fluctuates initially but stabilizes to a lower range. VPS and VPS + IMU methods exhibit consistently low errors, highlighting their robustness. The UWB + IMU combination shows a notable reduction in error compared to standalone UWB. The integrated GNSS + UWB + VPS + IMU approach maintains a consistently low error rate, validating the effectiveness of combining these technologies for optimal positioning accuracy. Note that GNSS and GNSS + IMU data are omitted from Fig. <ref> due to their high error values, which would obscure the visualization of other data. § CONCLUSION The study highlights the efficacy of the proposed feasibility study for real-time automated data standardization using large language models, particularly in enhancing seamless positioning. The Intelligent Data Standardization Module (IDSM) achieved near-zero loss and full accuracy in both training and validation phases, demonstrating robust learning capabilities and reliable data standardization across diverse sensor inputs. The Transformation Rules Generation Module (TRGM) significantly reduced the manual effort required for script generation, enhancing productivity and minimizing human intervention. In terms of positioning accuracy, the data fusion approach outperformed traditional standalone methods, achieving a Root Mean Square Error (RMSE) of 0.35 meters and a Mean Absolute Error (MAE) of 0.25 meters. This improvement is critical for applications necessitating precise and reliable location data. The synergy of IDSM's high accuracy and TRGM's automation capabilities presents a powerful solution for sensor data processing, leading to more reliable data and improved decision-making. Despite promising results, the reliance on predefined schemas and manually specified covariance matrices may limit system adaptability in dynamic environments. Additionally, the controlled evaluation setting may not fully capture real-world complexities. Future research should refine this approach for dynamic and complex environmental adaptation and integrate emerging technologies for robustness. Investigate models beyond GPT-4-0613 to understand performance variations, study computational costs and assess real-time versus offline operations. § ACKNOWLEDGMENT This research is supported by the University Grants Committee of Hong Kong under the scheme Research Impact Fund on the project R5009-21 “Reliable Multiagent Collaborative Global Navigation Satellite System Positioning for Intelligent Transportation Systems”. 20 IEEEtran ref1 H. Liu, H. Darabi, P. Banerjee and J. Liu, "Survey of Wireless Indoor Positioning Techniques and Systems," in IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 37, no. 6, pp. 1067-1080, Nov. 2007, doi: 10.1109/TSMCC.2007.905750. ref2 L. Atzori, A. Iera, and G. Morabito, “The Internet of Things: A survey,” Comput. Netw., vol. 54, no. 15, pp. 2787–2805, 2010. ref3 J. Hightower and G. Borriello, "Location systems for ubiquitous computing," in Computer, vol. 34, no. 8, pp. 57-66, Aug. 2001, doi: 10.1109/2.940014. ref4 C. 
Luo et al., “Pallas: Self-bootstrapping fine-grained passive indoor localization using WiFi monitors,” IEEE Trans. Mobile Comput., vol. 16, no. 2, pp. 466–481, Feb. 2017. ref5 M. Siekkinen, M. Hiienkari, J. K. Nurminen and J. Nieminen, "How low energy is bluetooth low energy? Comparative measurements with ZigBee/802.15. 4", Proc. IEEE Wireless Commun. Netw. Conf. Workshops (WCNCW’12), pp. 232-237, 2012. ref6 L. Ojeda and J. Borenstein, "Personal Dead-reckoning System for GPS-denied Environments," 2007 IEEE International Workshop on Safety, Security and Rescue Robotics, Rome, Italy, 2007, pp. 1-6, doi: 10.1109/SSRR.2007.4381271. ref7 P. D. Groves, Principles of GNSS, inertial, and multisensor integrated navigation systems, Second edition. Boston: Artech House, 2013. ref9 S. Knauth and A. Koukofikis, "Smartphone positioning in large environments by sensor data fusion, particle filter and FCWC," 2016 International Conference on Indoor Positioning and Indoor Navigation (IPIN), Alcala de Henares, Spain, 2016, pp. 1-5, doi: 10.1109/IPIN.2016.7743706. ref11 Y. Zhuang et al., “Multi-sensor integrated navigation/positioning systems using data fusion: From analytics-based to learning-based approaches,” Information Fusion, vol. 95,pp. 62–90, Jul. 2023, doi: 10.1016/j.inffus.2023.01.025. ref14 Y.Chang et al., “A Survey on Evaluation of Large Language Models,” ACM transactions on intelligent systems and technology, 2024, doi: 10.1145/3641289. ref16 “ZED-F9P module,” U-blox. https://www.u-blox.com/en/product/zed-f9p-module (accessed May 21, 2024). ref17 “UWB High-Precision Positioning: LinkTrack P-A Series,” Nooploop. https://www.nooploop.com/en/linktrack/ (accessed May 21, 2024). ref18 P. -E. Sarlin, C. Cadena, R. Siegwart and M. Dymczyk, "From Coarse to Fine: Robust Hierarchical Localization at Large Scale," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 12708-12717, doi: 10.1109/CVPR.2019.01300. ref19 P. -E. Sarlin, D. DeTone, T. Malisiewicz and A. Rabinovich, "SuperGlue: Learning Feature Matching With Graph Neural Networks," 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 2020, pp. 4937-4946, doi: 10.1109/CVPR42600.2020.00499. ref20 K. T. H. Choi, “tszheichoi / awesome-sensor-logger,” GitHub. https://github.com/tszheichoi/awesome-sensor-logger/.
http://arxiv.org/abs/2408.12162v1
20240822071046
Empowering Over-the-Air Personalized Federated Learning via RIS
[ "Wei Shi", "Jiacheng Yao", "Jindan Xu", "Wei Xu", "Lexi Xu", "Chunming Zhao" ]
cs.IT
[ "cs.IT", "eess.SP", "math.IT" ]
Empowering Over-the-Air Personalized Federated Learning via RIS Wei Shi, Jiacheng Yao, Jindan Xu, Member, IEEE, Wei Xu, Senior Member, IEEE, Lexi Xu, and Chunming Zhao, Member, IEEE W. Shi, J. Yao, W. Xu, and C. Zhao are with the National Mobile Communications Research Laboratory, Southeast University, Nanjing 210096, China, and are also with the Purple Mountain Laboratories, Nanjing 211111, China (e-mail: {wshi, jcyao, wxu, cmzhao}@seu.edu.cn). J. Xu is with the School of Electrical and Electronics Engineering, Nanyang Technological University, Singapore 639798, Singapore (e-mail: jindan.xu@ntu.edu.sg). L. Xu is with the Research Institute, China United Network Communications Corporation, Beijing 100048, China (e-mail: davidlexi@hotmail.com). August 26, 2024 ====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT Over-the-air computation (AirComp) integrates analog communication with task-oriented computation, serving as a key enabling technique for communication-efficient federated learning (FL) over wireless networks. However, AirComp-enabled FL (AirFL) with a single global consensus model fails to address the data heterogeneity in real-life FL scenarios with non-independent and identically distributed local datasets. In this paper, we introduce reconfigurable intelligent surface (RIS) technology to enable efficient personalized AirFL, mitigating the data heterogeneity issue. First, we achieve statistical interference elimination across different clusters in the personalized AirFL framework via RIS phase shift configuration. Then, we propose two personalized aggregation schemes involving power control and denoising factor design from the perspectives of first- and second-order moments, respectively, to enhance the FL convergence. Numerical results validate the superior performance of our proposed schemes over existing baselines. Federated learning (FL), over-the-air computation (AirComp), personalized FL (PFL), reconfigurable intelligent surface (RIS), statistical interference elimination. § INTRODUCTION To build ubiquitous intelligence at the edge of wireless networks, federated learning (FL) stands out as a promising distributed learning approach due to its privacy-enhancing characteristic <cit.>. In a wireless FL system, multiple distributed devices communicate with a parameter server (PS) via wireless links for collaborative model training <cit.>. To enhance communication efficiency of wireless FL, over-the-air computation (AirComp) has emerged as a key technique by exploiting the waveform superposition property of multiple access channels. Specifically, AirComp enables fast aggregation of gradients from distributed devices through non-orthogonal multiple access, aligning with FL's requirement of averaging local gradients without necessitating access to individual values <cit.>. 
Although AirComp-enabled FL (AirFL) offers significant performance gains, it does not address the data heterogeneity in most real-life FL scenarios with non-independent and identically distributed local datasets. Such data heterogeneity hinders the generalization of a single global consensus model. To this end, preliminary works have been made to develop a personalized AirFL framework via clustering algorithms, where different models are trained for different clusters under the orchestration of the PS <cit.>. However, this personalized framework requires large-scale receiving antennas to combat interference, leading to a significant escalation in hardware cost. As a cost-effective physical-layer technology, reconfigurable intelligent surface (RIS) has been extensively studied to support various communication applications due to its capability for smart channel reconstruction <cit.>. In this paper, we introduce low-cost RIS to achieve statistical interference elimination across different clusters and facilitate simultaneous multi-cluster computation over-the-air, thereby enhancing the efficiency of personalized AirFL. § SYSTEM MODEL We consider a personalized AirFL system consisting of K distributed devices, which are partitioned into M (M<K) disjoint clusters 𝒦_1,…,𝒦_M. A specific clustering method can be found in <cit.>, which is not the focus of this paper. Our goal is to find the optimal personalized model parameters 𝐰_m∈ℝ^D for each cluster m∈[M] to minimize the loss function ℒ_m(𝐰_m)=1/|𝒦_m|∑_k∈𝒦_mF_k(𝐰_m,𝒟_k), where F_k(·,𝒟_k) is the loss function of device k with local dataset 𝒟_k. Distributed stochastic gradient descent (SGD) is adopted to optimize 𝐰_m in an iterative manner. First, at each training round t, the PS broadcasts the latest personalized models {𝐰_m,t}_m∈[M] to each device. Then, based on the clustering mechanism, each device k∈𝒦_m computes its local gradient 𝐠_m,t,k∈ℝ^D based on 𝐰_m and its local dataset 𝒟_k, and reports it to the PS. Finally, after receiving all the local gradients, the PS calculates the global gradient of cluster m as 𝐠_m,t = 1/|𝒦_m|∑_k∈𝒦_m𝐠_m,t,k, and updates the personalized model for cluster m through 𝐰_m,t+1=𝐰_m,t-η_m,t𝐠_m,t, where η_m,t is a chosen learning rate at the t-th training round. The above steps iterate until a convergence condition is met. Note that the operation in (<ref>) requires the PS to sum the local gradients of devices in each cluster separately. By applying AirComp, all devices simultaneously upload the analog signals of local gradients to the PS, achieving summation over-the-air. However, the analog nature of AirFL makes the PS cannot distinguish between the gradients of different clusters. In the following, we introduce an RIS-enabled personalized AirFL framework to address this challenge. Each cluster is assisted by an RIS with N reflecting elements to help realize the personalized model aggregation. To support simultaneous multi-cluster gradient estimation, at least M receiving antennas are required. Without loss of generality, we consider a PS equipped with M receiving antennas. 
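For reference, the idealized (error-free aggregation) version of the per-cluster update above can be sketched as follows; the least-squares local loss and the parameter values are placeholders, and the AirComp/RIS aggregation analyzed next replaces the exact average with the noisy estimate of Eq. (<ref>):

```python
# Schematic per-cluster distributed SGD round (exact averaging, placeholder local loss).
import numpy as np

def local_gradient(w, data):
    X, y = data                                    # least-squares loss as a stand-in for F_k
    return X.T @ (X @ w - y) / len(y)

def pfl_round(w_clusters, cluster_data, lr):
    new_w = []
    for w_m, devices in zip(w_clusters, cluster_data):
        grads = [local_gradient(w_m, d) for d in devices]        # g_{m,t,k}
        g_m = np.mean(grads, axis=0)                             # g_{m,t}: cluster average
        new_w.append(w_m - lr * g_m)                             # w_{m,t+1}
    return new_w

rng = np.random.default_rng(0)
D, M, K_per = 5, 2, 4
cluster_data = [[(rng.normal(size=(20, D)), rng.normal(size=20)) for _ in range(K_per)]
                for _ in range(M)]
w = [np.zeros(D) for _ in range(M)]
for t in range(100):
    w = pfl_round(w, cluster_data, lr=0.1)
print([np.round(w_m[:3], 2) for w_m in w])
```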
Then, the received signal at the PS in the t-th round, 𝐘_t=[𝐲_1,t,𝐲_2,t,⋯,𝐲_M,t]^H∈ℂ^M× D, is given by 𝐘_t=∑_k=1^K √(p_k)(∑_i=1^Mβ_i,k𝐇_p,i^HΘ_i𝐡_i,k) 𝐠̅_m,t,k^H +𝐙_t, where 𝐠̅_m,t,k≜1/σ_m,t,k(𝐠_m,t,k-u_m,t,k1) represents the normalized gradient, u_m,t,k and σ_m,t,k denote the mean and standard deviation of all entries in 𝐠_m,t,k, p_k is the transmit power of device k, β_i,k is the cascaded large-scale fading coefficient from device k to the PS through the i-th RIS, 𝐇_p,i=[𝐡_p,i,1,𝐡_p,i,2,⋯,𝐡_p,i,M]∼𝒞𝒩(0,𝐈_N⊗𝐈_M) and 𝐡_i,k∼𝒞𝒩(0,𝐈_N) denote the small-scale fading channel from the i-th RIS to the PS and device k to the i-th RIS, respectively, 𝐙_t=[𝐳_1,t,𝐳_2,t,⋯,𝐳_M,t]^H is additive white Gaussian noise whose entries follow 𝒞𝒩(0,σ^2), Θ_i≜ diag{ e^jθ_i,1,…, e^jθ_i,n,…, e^jθ_i,N} is the reflection matrix of the i-th RIS, and θ_i,n∈[0,2π) is the phase shift introduced by the n-th RIS reflecting element. Then, based on the signal 𝐲_m,t at the m-th receiving antenna, the PS computes an estimated global gradient of cluster m as 𝐠̂_m,t={𝐲_m,t}/λ_m+∑_k∈𝒦_m u_m,t,k/|𝒦_m|1, where λ_m>0 is a denoising factor introduced by the PS. It is rewritten as 𝐠̂_m,t=∑_k∈𝒦_mℓ_m,k𝐠̅_m,t,k+ ∑_k∈𝒦_m u_m,t,k/|𝒦_m|1+∑_1≤ m^'≤ M m^'≠ m∑_k^'∈𝒦_m^'ℓ_m,k^'𝐠̅_m^',t,k^'+𝐳̅_m,t, where ℓ_m,k=√(p_k)/λ_m∑_i=1^Mβ_i,k{𝐡_p,i,m^HΘ_i𝐡_i,k}, ∀ m,k, and 𝐳̅_m,t≜{𝐳_m,t}/λ_m is the equivalent noise. Note that the estimated gradient is corrupted by signals from other clusters, and this interference cannot be eliminated since M<K. To this end, we propose an RIS phase shift configuration scheme that eliminates the interference from a statistical perspective, as stated in the following theorem. Statistical interference elimination, i.e., 𝔼[ℓ_m,k]>0, ∀ k∈𝒦_m and 𝔼[ℓ_m,k^']=0, ∀ k^'∉𝒦_m, can be achieved by setting θ_m,n =-∠ h_p,m,m,n^∗+∠∑_k∈𝒦_m h_m,k,n^∗, for m∈[M] and n∈[N], where h_p,m,m,n and h_m,k,n are the n-th elements of channel vectors 𝐡_p,m,m and 𝐡_m,k, respectively. See Appendix A. ▪ According to Theorem <ref>, we conclude that favorable propagation can be achieved through phase matching using low-cost RIS reflecting elements, thereby eliminating the need for expensive large-scale receiving antennas. After the statistical interference elimination, we focus on joint design of power control and denoising factors from the following two perspectives to enhance the FL convergence. 1) Unbiased design: From the perspective of first-order moment, ensuring unbiased gradient estimation is of pivotal significance for guaranteeing FL convergence <cit.>. Hence, we consider the following unbiasedness-oriented method. By setting p_k=σ_m,t,k^2β_m,k^-2ζ_m^2, ∀ k ∈𝒦_m, and λ_m=π N√(|𝒦_m|)ζ_m/4, the gradient estimation in (<ref>) is unbiased, where ζ_m=min_k∈𝒦_m√(P_k)β_m,k/σ_m,t,k√(D) and P_k is the maximum transmit power. See Appendix B. ▪ 2) Minimum mean squared error (MMSE) design: Apart from unbiasedness of the first-order moment, the second-order moment, known as MSE, also plays a decisive role in FL convergence <cit.>. For any given power control of p_k, we derive the optimal denoising factors in closed form in the following proposition. The optimal denoising factor of cluster m for minimizing MSE is equal to λ_m^*=|𝒦_m|∑_i=1^M ∑_k∈𝒦_ip_k h̅_m,k^2σ_i,t,k^2+σ^2/2/∑_k∈𝒦_m√(p_k)h̅_m,kσ_m,t,k^3, where h̅_m,k≜∑_i=1^Mβ_i,k{𝐡_p,i,m^HΘ_i𝐡_i,k}. See Appendix C. ▪
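For concreteness, the closed-form quantities above can be evaluated directly from channel estimates. The sketch below is illustrative and not from the paper: it assumes perfect CSI at the PS, represents complex coefficients as (re, im) pairs, follows the σ^2/2 noise term as stated in the proposition, and uses function and variable names of our own choosing.

// Illustrative sketch: RIS phase shifts of Theorem 1 and the denoising factor of Proposition 2.
type C = (f64, f64); // complex coefficient as (re, im)

fn arg(z: C) -> f64 { z.1.atan2(z.0) }
fn conj(z: C) -> C { (z.0, -z.1) }

// theta_{m,n} = -arg(h*_{p,m,m,n}) + arg( sum_{k in K_m} h*_{m,k,n} ), wrapped into [0, 2*pi).
// h_ps[n] is h_{p,m,m,n}; h_dev[k][n] is h_{m,k,n} for device k of cluster m.
fn ris_phase_shifts(h_ps: &[C], h_dev: &[Vec<C>]) -> Vec<f64> {
    (0..h_ps.len())
        .map(|n| {
            let sum_conj = h_dev.iter().fold((0.0, 0.0), |acc, hk| {
                let c = conj(hk[n]);
                (acc.0 + c.0, acc.1 + c.1)
            });
            let theta = -arg(conj(h_ps[n])) + arg(sum_conj);
            theta.rem_euclid(2.0 * std::f64::consts::PI)
        })
        .collect()
}

// lambda_m^* = |K_m| * ( sum over all devices of p_k * h_bar^2 * sigma_k^2 + sigma^2/2 )
//              / ( sum over K_m of sqrt(p_k) * h_bar * sigma_k^3 ).
// p, h_bar, sigma are indexed over all K devices; cluster_m lists the indices in K_m.
fn optimal_lambda(p: &[f64], h_bar: &[f64], sigma: &[f64], cluster_m: &[usize], sigma_sq: f64) -> f64 {
    let num: f64 = (0..p.len())
        .map(|k| p[k] * h_bar[k] * h_bar[k] * sigma[k] * sigma[k])
        .sum::<f64>()
        + sigma_sq / 2.0;
    let den: f64 = cluster_m.iter().map(|&k| p[k].sqrt() * h_bar[k] * sigma[k].powi(3)).sum();
    cluster_m.len() as f64 * num / den
}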
Substituting the optimal λ_m^*, we formulate a power control optimization problem for minimizing the sum MSE, which can be solved via typical optimization methods; please refer to Appendix C for details. To summarize, we conclude the proposed RIS-enabled personalized AirFL approach in Algorithm <ref>. § NUMERICAL RESULTS We assume that the distance between the PS and each RIS is 200  m, and that all the devices in each cluster m∈[M] are uniformly distributed within a disk of radius 300  m centered at the m-th RIS. The path loss exponent for all the links is 2.2. Fig. <ref> depicts the normalized MSE (NMSE) as a function of N for different values of P_k. It is observed that as P_k increases, the improvement in NMSE performance is marginal. This is due to the fact that an increase in transmit power amplifies not only the useful signals but also the interference. Furthermore, the NMSE of our proposed designs decreases linearly with large N on a log-log scale. This phenomenon becomes more obvious as P_k increases, owing to the diminishing impact of noise error. In addition, the NMSE curves with corrupted RIS phase shifts (i.e., 1-bit phase noise) are presented to validate the robustness of our proposed designs. The baseline utilizing random RIS phase shifts fails to obtain any effective performance enhancements as N and P_k increase, which demonstrates the importance of the RIS phase shift configuration in Theorem <ref>. § PROOF OF THEOREM <REF> By substituting the RIS phase shifts in Theorem <ref>, the mean of ℓ_m,k for k ∈𝒦_m is calculated as 𝔼[ℓ_m,k]=√(p_k)/λ_m∑_i=1^Mβ_i,k𝔼[{𝐡_p,i,m^HΘ_i𝐡_i,k}]. 1) For i=m, we have 𝔼[{𝐡_p,m,m^HΘ_m𝐡_m,k}] = {𝔼[𝐡_p,m,m^HΘ_m𝐡_m,k]} ={𝔼[∑_n=1^N|h_p,m,m,n^∗|h_m,k,n∑_k∈𝒦_m h_m,k,n^∗/|∑_k∈𝒦_m h_m,k,n^∗|]} (a)=√(π)N/2{𝔼[h_m,k,n∑_k∈𝒦_m h_m,k,n^∗/|∑_k∈𝒦_m h_m,k,n^∗|]} (b)=√(π)N/2|𝒦_m|{𝔼[∑_k∈𝒦_m h_m,k,n∑_k∈𝒦_m h_m,k,n^∗/|∑_k∈𝒦_m h_m,k,n^∗|]} =√(π)N/2|𝒦_m|{𝔼[|∑_k∈𝒦_m h_m,k,n|]} (c)=√(π)N/2|𝒦_m|√(|𝒦_m|π)/2=π N/4√(|𝒦_m|), where (a) is due to the independence between different channels and 𝔼[|h_p,m,m,n^*|]=√(π)/2 <cit.>, (b) follows from the identically distributed characteristic of h_m,k,n∑_k∈𝒦_m h_m,k,n^∗/|∑_k∈𝒦_m h_m,k,n^∗| for different k∈𝒦_m, and (c) comes from the fact that ∑_k∈𝒦_m h_m,k,n∼𝒞𝒩(0,|𝒦_m|) <cit.>. 2) For i≠ m, we have 𝔼[{𝐡_p,i,m^HΘ_i𝐡_i,k}] ={𝔼[𝐡_p,i,m^HΘ_i𝐡_i,k]} ={𝔼[∑_n=1^Nh_p,i,m,n^∗ e^-j∠ h_p,i,i,n^∗h_i,k,n∑_k^'∈𝒦_i h_i,k^',n^∗/|∑_k^'∈𝒦_i h_i,k^',n^∗|]} (d)=N·{𝔼[h_p,i,m,n^∗]𝔼[ e^-j∠ h_p,i,i,n^∗]𝔼[h_i,k,n]𝔼[∑_k^'∈𝒦_i h_i,k^',n^∗/|∑_k^'∈𝒦_i h_i,k^',n^∗|]} =0, where (d) exploits the independence of 𝐡_p,i,m, 𝐡_p,i,i, 𝐡_i,k, and 𝐡_i,k^', and the last equality comes from h_p,i,m,n∼𝒞𝒩(0,1). Then, by substituting (<ref>) and (<ref>) into (<ref>), it yields 𝔼[ℓ_m,k]=√(p_k)β_m,k/λ_mπ N/4√(|𝒦_m|)>0. In addition, the mean of ℓ_m,k^' for k^'∈𝒦_m^' and m^'≠ m is calculated as 𝔼[ℓ_m,k^']=√(p_k^')/λ_m∑_i=1^Mβ_i,k^'𝔼[{𝐡_p,i,m^HΘ_i𝐡_i,k^'}]. 1) For i=m, we have 𝔼[{𝐡_p,m,m^HΘ_m𝐡_m,k^'}] ={𝔼[𝐡_p,m,m^HΘ_m𝐡_m,k^']} ={𝔼[∑_n=1^N|h_p,m,m,n^∗|h_m,k^',n∑_k∈𝒦_m h_m,k,n^∗/|∑_k∈𝒦_m h_m,k,n^∗|]} =N·{𝔼[|h_p,m,m,n^∗|]𝔼[h_m,k^',n]𝔼[∑_k∈𝒦_m h_m,k,n^∗/|∑_k∈𝒦_m h_m,k,n^∗|]} =0. 2) For i=m^', we have 𝔼[{𝐡_p,m^',m^HΘ_m^'𝐡_m^',k^'}] ={𝔼[𝐡_p,m^',m^HΘ_m^'𝐡_m^',k^']} ={𝔼[∑_n=1^Nh_p,m^',m,n^∗ e^-j∠ h_p,m^',m^',n^∗h_m^',k^',n∑_k^'∈𝒦_m^' h_m^',k^',n^∗/|∑_k^'∈𝒦_m^' h_m^',k^',n^∗|]} =N·{𝔼[h_p,m^',m,n^∗]𝔼[ e^-j∠ h_p,m^',m^',n^∗]𝔼[h_m^',k^',n∑_k^'∈𝒦_m^' h_m^',k^',n^∗/|∑_k^'∈𝒦_m^' h_m^',k^',n^∗|]} =0.
3) For i≠ m and i≠ m^', we have 𝔼[{𝐡_p,i,m^HΘ_i𝐡_i,k^'}] ={𝔼[𝐡_p,i,m^HΘ_i𝐡_i,k^']} ={𝔼[∑_n=1^Nh_p,i,m,n^∗ e^-j∠ h_p,i,i,n^∗h_i,k^',n∑_k^''∈𝒦_i h_i,k^'',n^∗/|∑_k^''∈𝒦_i h_i,k^'',n^∗|]} =N·{𝔼[h_p,i,m,n^∗]𝔼[ e^-j∠ h_p,i,i,n^∗]𝔼[h_i,k^',n]𝔼[∑_k^''∈𝒦_i h_i,k^'',n^∗/|∑_k^''∈𝒦_i h_i,k^'',n^∗|]} =0. Then, by substituting (<ref>), (<ref>) and (<ref>) into (<ref>), it yields 𝔼[ℓ_m,k^']=0. § PROOF OF PROPOSITION <REF> By directly applying Theorem <ref> and substituting the parameters in Proposition <ref> into the expectation of the estimated global gradient 𝐠̂_m,t, we have 𝔼[𝐠̂_m,t]= ∑_k∈𝒦_m𝔼[ℓ_m,k]𝐠̅_m,t,k+ ∑_k∈𝒦_m u_m,t,k/|𝒦_m|1+∑_1≤ m^'≤ M m^'≠ m∑_k^'∈𝒦_m^'𝔼[ℓ_m,k^']𝐠̅_m^',t,k^'+𝔼[𝐳̅_m,t] = ∑_k∈𝒦_mσ_m,t,k/|𝒦_m|1/σ_m,t,k(𝐠_m,t,k-u_m,t,k1)+∑_k∈𝒦_mu_m,t,k/|𝒦_m|1 = 1/|𝒦_m|∑_k∈𝒦_m𝐠_m,t,k=𝐠_m,t. Therefore, the expectation of the estimated global gradient 𝐠̂_m,t is equal to the ground-truth global gradient 𝐠_m,t for m∈[M], which ensures the unbiasedness of gradient transmission <cit.>. This completes the proof. § PROOF OF PROPOSITION <REF> To begin with, we formulate the MSE of gradient estimation for cluster m as MSE_m =𝔼[‖𝐠_m,t-𝐠̂_m,t‖^2] =𝔼[‖∑_k∈𝒦_mℓ_m,k𝐠̅_m,t,k+∑_k∈𝒦_m u_m,t,k/|𝒦_m|1+∑_1≤ m^'≤ M m^'≠ m∑_k'∈𝒦_m'ℓ_m,k'𝐠̅_m^',t,k'+𝐳̅_m,t-∑_k∈𝒦_m1/|𝒦_m|𝐠_m,t,k‖^2] (a)=𝔼[‖∑_k∈𝒦_m(ℓ_m,k-σ_m,t,k/|𝒦_m|)𝐠̅_m,t,k+ ∑_1≤ m^'≤ M m^'≠ m∑_k'∈𝒦_m'ℓ_m,k'𝐠̅_m^',t,k'+𝐳̅_m,t‖^2] (b)=∑_k∈𝒦_m(ℓ_m,k-σ_m,t,k/|𝒦_m|)^2 σ_m,t,k^2 D + ∑_1≤ m^'≤ M m^'≠ m∑_k'∈𝒦_m'ℓ_m,k^'σ_m^',t,k^'^2D +σ^2 D/λ_m^2 (c)=(∑_i=1^M ∑_k∈𝒦_i p_k h̅_m,k^2σ_i,t,k^2 D+σ^2 D)1/λ_m^2-2(∑_k∈𝒦_m√(p_k)h̅_m,kσ_m,t,k^3D/|𝒦_m|)1/λ_m+∑_k∈𝒦_mσ_m,t,k^4D/|𝒦_m|^2, where (a) is due to the definition of g̅_m,t,k, (b) exploits the statistics of 𝐠̅_m,t,k and 𝐳̅_m,t, and (c) comes from the definition of ℓ_m,k. Note that the optimization of denoising factor is an unconstrained problem. For any given power control, we derive the optimal denoising factor, λ_m, by checking the following equality ∂MSE_m/∂λ_m =-2(∑_i=1^M ∑_k∈𝒦_i p_k h̅_m,k^2σ_i,t,k^2 D + σ^2 D)1/λ_m^3+2(∑_k∈𝒦_m√(p_k)h̅_m,kσ_m,t,k^3D/|𝒦_m|)1/λ_m^2=0 ⇒λ_m^* = |𝒦_m| ∑_i=1^M ∑_k∈𝒦_i p_k h̅_m,k^2σ_i,t,k^2 + σ^2 /∑_k∈𝒦_m√(p_k)h̅_m,kσ_m,t,k^3, and the proof completes. As for the optimization of power control, substituting the optimal λ_m^* into (<ref>), we rewrite MSE_m as MSE_m =∑_k∈𝒦_mσ_m,t,k^4D/|𝒦_m|^2-(∑_k∈𝒦_m√(p_k)h̅_m,kσ_m,t,k^3)^2 D/|𝒦_m|^2(∑_i=1^M ∑_k∈𝒦_i p_k h̅_m,k^2σ_i,t,k^2 + σ^2) =∑_k∈𝒦_mσ_m,t,k^4D/|𝒦_m|^2-(𝐩^T𝐛_m)^2D/𝐩^T 𝐀_m 𝐩+|𝒦_m|^2σ^2 =∑_k∈𝒦_mσ_m,t,k^4D/|𝒦_m|^2-D𝐩^T𝐁_m 𝐩/𝐩^T 𝐀_m 𝐩+|𝒦_m|^2σ^2, where 𝐩≜ [√(p_1),√(p_2),⋯,√(p_K)]^T, 𝐛_m ≜∑_k∈𝒦_mh̅_m,kσ_m,t,k^3𝐞_k, 𝐀_m ≜ |𝒦_m|^2diag{h̅_m,k^2 σ_i,t,k^2 }, 𝐁_m≜𝐛_m 𝐛_m^T, and 𝐞_k is the Kronecker delta vector with [𝐞_k]_k=1. Now, we formulate an equivalent power control optimization problem for minimizing the sum MSE as maximize_𝐩 ∑_m=1^M 𝐩^T𝐁_m 𝐩/𝐩^T 𝐀_m 𝐩+|𝒦_m|^2σ^2 subject to [𝐩]_k≤√(P_k),∀ k. It worth noting that the problem in (<ref>) is known as the sum of quadratic ratios maximization, which has been addressed in existing works via branch and bound <cit.>, harmony search method <cit.> and semidefinite relaxation (SDR) technique <cit.>. 1 0 W. Xu et al., “Toward ubiquitous and intelligent 6G networks: From architecture to technology," Sci China Inf Sci, vol. 66, no. 3, pp. 130300:1–2, Mar. 2023. 1 W. Xu et al., “Edge learning for B5G networks with distributed signal processing: Semantic communication, edge computing, and wireless sensing," IEEE J. Sel. Topics Signal Process., vol. 17, no. 1, pp. 9–39, Jan. 2023. gzhu G. 
Zhu et al., “Pushing AI to wireless network edge: An overview on integrated sensing, communication, and computation towards 6G," Sci. China Inf. Sci., vol. 66, no. pp. 130301:1–19, Mar. 2023. yjc1 J. Yao, W. Xu, Z. Yang, X. You, M. Bennis, and H. V. Poor, “Digital versus analog transmissions for federated learning over wireless networks," in Proc. IEEE Int. Conf. Commun. (ICC), Denver, USA, Jun. 2024, pp. 1047–1052. 2 J. Yao, W. Xu, Z. Yang, X. You, M. Bennis, and H. V. Poor, “Wireless federated learning over resource-constrained networks: digital versus analog transmissions," IEEE Trans. Wireless Commun., early access. doi: 10.1109/TWC.2024.3407822. 3 J. Yao, Z. Yang, W. Xu, D. Niyato, and X. You, “Imperfect CSI: A key factor of uncertainty to over-the-air federated learning," IEEE Wireless Commun. Lett., vol. 12, no. 12, pp. 2273–2277, Dec. 2023. 4 A. Ghosh, J. Chung, D. Yin, and K. Ramchandran, “An efficient framework for clustered federated learning," IEEE Trans. Inf. Theory, vol. 68, no. 12, pp. 8076–8091, Dec. 2022. 5 H. Sami and B. Güler, “Over-the-air personalized federated learning," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), Singapore, May 2022, pp. 8777–8781. 6 W. Shi, W. Xu, X. You, C. Zhao, and K. Wei, “Intelligent reflection enabling technologies for integrated and green Internet-of-Everything beyond 5G: Communication, sensing, and security," IEEE Wireless Commun., vol. 30, no. 2, pp. 147–154, Apr. 2023. sw3 W. Shi, J. Xu, W. Xu, M. Di Renzo, and C. Zhao, “Secure outage analysis of RIS-assisted communications with discrete phase control," IEEE Trans. Veh. Technol., vol. 72, no. 4, pp. 5435–5440, Apr. 2023. 7 N. Zhang and M. Tao, “Gradient statistics aware power control for over-the-air federated learning," IEEE Trans. Wireless Commun., vol. 20, no. 8, pp. 5115–5128, Aug. 2021. xjd J. Xu et al., “Reconfiguring wireless environment via intelligent surfaces for 6G: Reflection, modulation, and security," Sci China Inf Sci, vol. 66, no. 3, pp. 130304:1–20, Mar. 2023. sw1 W. Shi et al., “Secure outage analysis for RIS-aided MISO systems with randomly located eavesdroppers," in Proc. IEEE Globecom Workshops (GC Wkshps), Kuala Lumpur, Malaysia, Dec. 2023, pp. 1445–1450. sw2 W. Shi, J. Xu, W. Xu, C. Yuen, A. L. Swindlehurst, and C. Zhao, “On secrecy performance of RIS-assisted MISO systems over Rician channels with spatially random eavesdroppers," IEEE Trans. Wireless Commun., vol. 23, no. 8, pp. 8357–8371, Aug. 2024. unbia J. Yao, Z. Yang, W. Xu, M. Chen, and D. Niyato, “GoMORE: Global model reuse for rescource-constrained wireless federated learning," IEEE Wireless Commun. Lett., vol. 12, no. 9, pp. 1543–1547, Sept. 2023. opt1 P. Shen, Y. Chen, and Y. Ma, “Solving sum of quadratic ratios fractional programs via monotonic function," Appl. Math. Comput., vol. 212, no. 1, pp. 234–244, Jun. 2009. opt2 M. Jaberipour and E. Khorram, “Solving the sum-of-ratios problems by a harmony search algorithm," J. Comput. Appl. Math., vol. 234, no. 3, pp. 733–742, Jun. 2010. opt3 J. Yao, J. Xu, W. Xu, C. Yuen, and X. You, “Superimposed RIS-phase modulation for MIMO communications: A novel paradigm of information transfer," IEEE Trans. Wireless Commun., vol. 23, no. 4, pp. 2978–2993, Apr. 2024. hzy1 Z. He, W. Xu, H. Shen, D. W. K. Ng, Y. C. Eldar, and X. You, “Full-duplex communication for ISAC: Joint beamforming and power optimization," IEEE J. Sel. Areas Commun., vol. 41, no. 9, pp. 2920–2936, Sept. 2023. hzy2 Z. He, H. Shen, W. Xu, Y. C. Eldar, and X. 
You, “MSE-based training and transmission optimization for MIMO ISAC systems," IEEE Trans. Signal Process., vol. 72, pp. 3104–3121, 2024.
http://arxiv.org/abs/2408.12154v1
20240822064406
Binary codes from subset inclusion matrices
[ "Alexey D. Marin", "Ivan Yu. Mogilnykh" ]
math.CO
[ "math.CO", "cs.IT", "math.IT" ]
http://arxiv.org/abs/2408.11173v1
20240820201133
Delegation with Trust<T>: A Scalable, Type- and Memory-Safe Alternative to Locks
[ "Noaman Ahmad", "Ben Baenen", "Chen Chen", "Jakob Eriksson" ]
cs.PF
[ "cs.PF", "cs.OS" ]
Delegation with Trust<T>: A Scalable, Type- and Memory-Safe Alternative to Locks Noaman Ahmad <nahmad30@uic.edu>, Ben Baenen <bbaene2@uic.edu>, Chen Chen <cchen262@uic.edu>, Jakob Eriksson <jakob@uic.edu> ================================================================================================================================= § ABSTRACT We present Trust<T>, a general, type- and memory-safe alternative to locking in concurrent programs. Instead of synchronizing multi-threaded access to an object of type T with a lock, the programmer may place the object in a Trust<T>. The object is then no longer directly accessible. Instead, a designated thread, the object's trustee, is responsible for applying any requested operations to the object, as requested via the Trust<T> API. Locking is often said to offer a limited throughput per lock. Trust<T> is based on delegation, a message-passing technique which does not suffer this per-lock limitation. Instead, per-object throughput is limited by the capacity of the object's trustee, which is typically considerably higher. Our evaluation shows Trust<T> consistently and considerably outperforming locking where lock contention exists, with up to 22× higher throughput in microbenchmarks, and 5–9× for a home-grown key-value store, as well as memcached, in situations with high lock contention. Moreover, Trust<T> is competitive with locks even in the absence of lock contention. § INTRODUCTION Safe access to shared objects is fundamental to many multi-threaded programs. Conventionally, this is achieved through locking, or in some cases through carefully designed lock-free data structures, both of which are implemented using atomic compare-and-swap (CAS) operations. By their nature, atomic instructions do not scale well: atomic instructions must not be reordered with other instructions, often starving part of today's highly parallel CPU pipelines of work until the instruction has retired. This effect is exacerbated when multiple cores are accessing the same object, resulting in the combined effect of frequent cache misses and cores waiting for each other to release the cache line in question, while the atomic instructions prevent them from doing other work. Delegation <cit.>, also known as message-passing or light-weight remote procedure calls (LRPC), offers a highly scalable alternative to locking. Here, each shared object[Here, we use object to mean a data structure that would be protected by a single lock.] is placed in the care of a single core (trustee below). Using a shared-memory message passing protocol, other cores (clients) issue requests to the trustee, specifying operations to be performed on the object. Compared to locking, where threads typically contend for access, and may even suspend execution to wait for access, delegation requests from different clients are submitted to the trustee in parallel and without contention. This dramatically reduces the cost of coordination for congested objects. The operations/critical sections are applied sequentially in both designs: by each thread using locks, or by the trustee using delegation; here delegation may benefit from improved locality at the trustee. Together, this translates to much higher maximum per-object throughput with delegation vs. locking. Moreover, a single client thread may have multiple outstanding requests to one or more trustees, providing both parallelism and transparent batching benefits.
We propose Trust<T> (pronounced trust-tee), a programming abstraction and runtime system which provides safe, high-performance access to a shared object (or property) of type T. Briefly, a Trust<T> provides a family of functions of the form: apply(c:FnOnce(&mut T)→ U)→ U, which causes the closure c to be safely applied to the property (of type T), and returns the return value (of type U) of the closure to the caller. Here, FnOnce denotes a category of Rust closure types, and &mut denotes a mutable reference. (A matching set of non-blocking functions is also provided, which instead executes a callback closure with the return value.) Critically, access to the property is only available through the Trust<T> API, which taken together with the Rust ownership model and borrow checker eliminates any potential for race conditions, given a correct implementation of apply. Our implementation of Trust<T> uses pure delegation. However, the design of the API also permits lock-based implementations, as well as hybrids. Beyond the API, Trust<T> provides a runtime for scheduling request transmission and processing, as well as lightweight user threads (fibers below). This allows each OS thread to serve both as a trustee, processing incoming requests, and as a client. Multiple outstanding requests can be issued either by concurrent synchronous fibers or an asynchronous programming style. The primary contributions of this paper are as follows: * Trust<T>: a model for efficient, multi-threaded, delegation-based programming with shared objects leveraging the Rust type system. * A new delegation channel design, for delegating a variable number of arbitrary-sized and extremely flexible requests per message. * Two efficient mechanisms for supporting nested delegation requests, a key missing ingredient in previous work on delegation. * Performance improvements up to 22× vs. the best locks on congested micro-benchmarks. * Delegation performance consistently matching uncongested locks, given sufficient available parallelism. * Memcached performance improvements of up to 9× on benchmarking workloads vs. stock memcached. § BACKGROUND AND MOTIVATION Locking suffers from a well-known scalability problem: as the number of contending cores grows, cores spend more and more of their time in contention, and less doing useful work. Consider a classical, but idealized lock, in which there are no efficiency losses due to contention. Here, the sequential cost of each critical section is the sum of (a) any wait for the lock to be released, (b) the cost of acquiring the lock, (c) executing the critical section, and (d) releasing the lock. Not counting any re-acquisitions on the same core, this must be at minimum one cache miss per critical section, in sequential cost. To make matters worse, this cache miss is incurred by an atomic instruction, effectively stalling the CPU until the cache miss is resolved (and in the case of a spinlock, until the lock is acquired). Two main solutions to this problem exist. First, where the data structure permits, fine-grained locking can be used to split the data structure into multiple independently locked objects, thus increasing parallelism and reducing lock contention and wait times. With the data structure split into sufficiently many objects, and accesses distributed uniformly, a fine-grained locking approach tends to offer the best available performance. The second solution is various forms of delegation, where one thread has custody of the object, and applies critical sections on behalf of other threads.
Ideally, this minimizes the sequential cost of each critical section without changing the data structure: there are no sequential cache misses, ideally no atomic instructions, but of course the critical sections themselves still execute sequentially. Combining <cit.> is a flavor of delegation in which threads temporarily take on the role of combiner, performing queued-up critical sections for other threads. Combining can scale better than locking in congested settings, but does not offer the full benefits of delegation as it makes heavy use of atomic operations, and moves data between cores as new threads take on the combiner role. Most recently, TCLocks <cit.> offers a fully transparent combining-based replacement for locks, by capturing and restoring register contents, and automatically pre-fetching parts of the stack. TCLocks claims substantial benefits for extremely congested locks, and the backward compatibility is of course quite attractive. However, a cursory evaluation in <ref> reveals that TCLocks substantially underperforms regular locks beyond extremely high contention settings, and never approaches Trust<T> performance. Beyond combining, delegation has primarily been explored in proof-of-concept or one-off form, with relatively immature programming abstractions. We propose Trust<T>, a full-fledged delegation API for the Rust language, which presents delegation in a type-safe and familiar form, while substantially outperforming the fastest prior work on delegation. While delegation offers much higher throughput for congested shared objects, it does suffer higher latency than locking in uncongested conditions. To hide this latency, and make delegation competitive in uncongested settings, Trust<T> exposes additional concurrency to the application via asynchronous delegation requests and/or light-weight, delegation-aware user threads (fibers). Lacking modularity is another common criticism of delegation: in FFWD <cit.>, an early delegation design, delegated functions must not perform any blocking operations, which includes any further delegation calls. In Trust<T>, this constraint remains for the common case, as this typically offers the highest efficiency. However, Trust<T> offers several options for more modular operation. First, asynchronous/non-blocking delegation requests are not subject to this constraint - these requests may be safely issued in any context. Second, leveraging our light-weight user threads, we offer the option of supporting blocking calls in delegated functions, on an as-needed basis. Finally, prior work on delegation has required one or more cores to be dedicated as delegation servers. While Trust<T> offers dedicated cores as one option, the runtime has every core act as a delegation server, again leveraging light-weight user threads. Beyond easing application development and improving load balancing, having a delegation server on every core allows us to implement Trust<T> without any use of atomic instructions, instead relying on delegation for all inter-thread communication. Beyond potential performance advantages, this also makes Trust<T> applicable to environments where atomic operations are unavailable. § TRUST<T>: THE BASICS The objective of Trust<T> is to provide an intuitive API for safe, efficient access to shared objects. Naturally, our design motivation is to support delegation, but the API can in principle also be implemented using locking, or a combination of locking and delegation.
Below, we first introduce the basic programming model, as well as the key terms trust, property, trustee and fiber in the Trust<T> context, before digging deeper into the design of Trust<T>. §.§ Trust: a reference to an object A Trust<T> is a thread-safe, reference-counting smart pointer, similar to Rust's Arc<T>. To create a Trust<T>, we clone an existing trust or entrust a new object, or property, of type T that is meant to be shared between threads. Once entrusted, the property can only be accessed by applying closures to it, using a trust. Figure <ref> illustrates this through a minimal Rust example. Line 1 entrusts an integer, initialized to 17, to the local trustee - the trustee fiber running on the current kernel thread. Line 2 applies an anonymous closure to the counter, via the trust. The closure expected by apply takes a mutable reference to the property as argument, allowing it unrestricted access to the property, in this case, our integer. The example closure increments the value of the integer. The assertion on line 3 is illustrative only. Here, we apply a second closure to retrieve the value of the entrusted integer[A note on ownership: While the passed-in closure takes only a reference to the property, the Rust syntax *c denotes an explicit dereference, essentially returning a copy of the property to the caller. This will pass compile-time type-checking only for types that implement Copy, such as integers.]. In the example in Figure <ref> the counter is instead incremented by two different threads. Here, the clone() call on ct (line 2) clones the trust, but not the property; instead a reference count is incremented for the shared property, analogous to Arc::clone(). On line 3, a newly spawned thread takes ownership of ct2, in the Rust sense of the word, then uses this to apply a closure (line 4). When the thread exits, ct2 is dropped, decrementing the reference count, by means of a delegation request. When the last trust of a property is dropped, the property is dropped as well. For readers unfamiliar with Rust, Figure <ref> illustrates the rough equivalent of Figure <ref>, but using conventional Rust primitives instead of Trust<T>. Note the similarity in terms of legibility and verbosity. §.§ Trustee - a thread in charge of entrusted properties In our examples above, Trust<T> is implemented using delegation. Here, a property is entrusted to a trustee, a designated thread which executes applied closures on behalf of other threads. In the default runtime environment, every OS thread in use already has a trustee user-thread (fiber) that shares the thread with other fibers. When a fiber applies a closure to a trust, this is sent to the corresponding trustee as a message. Upon receipt, the trustee executes the closure on the property, and responds, including any closure return value. This may sound complex, yet the produced executable code substantially outperforms locking in congested settings. A TrusteeReference API is also provided. Here, the most important function is entrust(), which takes a property of type T as argument (by value), and returns a Trust<T> referencing the property that is now owned by the trustee. This API allows the programmer to manually manage the allocation of properties to trustees, for performance tuning or other purposes. Alternatively, a basic thread pool is provided to manage distribution of fibers and variables across trustees.
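The figures referenced above are not reproduced in this text; the following sketch reconstructs the described counter examples from the prose. The entrust constructor used here is a hypothetical stand-in for however the local trustee's entrust() is reached in the actual API, and the sketch assumes the Trust<T> crate is in scope.

// Sketch reconstructing the two counter examples described above (not the paper's figure code).
use std::thread;

fn main() {
    let ct = entrust(17i64);              // entrust an integer to the local trustee (hypothetical constructor path)

    ct.apply(|c| *c += 1);                // apply a closure that increments the property
    assert_eq!(ct.apply(|c| *c), 18);     // read back a copy of the property

    let ct2 = ct.clone();                 // clone the trust, not the property
    thread::spawn(move || {
        ct2.apply(|c| *c += 1);           // increment from a second thread
    })
    .join()
    .unwrap();

    assert_eq!(ct.apply(|c| *c), 19);     // ct2 was dropped; the property survives via ct
}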
§.§ Fiber - a delegation-aware, light-weight user thread While the Trust<T> abstraction has some utility in isolation, it is most valuable when combined with an efficient message-passing implementation and a user-threading runtime. User-level threads, also known as coroutines or fibers, share a kernel thread, but each executes on its own stack, enabling a thread to do useful work for one fiber while another waits for a response from a trustee. This includes executing the local trustee fiber to service any incoming requests. In this default setting, the synchronous apply() function suspends the current fiber when it issues a request, scheduling the next fiber from the local ready queue to run instead. The local fiber scheduler will periodically poll for responses to outstanding requests, and resume suspended fibers as their blocking requests complete. §.§ Delegated context For the purpose of future discussion, we define the term delegated context to mean the context where a delegated closure executes. Generally speaking, closures execute as part of a trustee fiber, on the trustee's stack. Importantly, blocking delegation calls are not permitted from within delegated context, and will result in a runtime assertion failure. In <ref>, we describe multiple ways around this constraint. § CORE API The Trust<T> API supports a variety of ways to delegate work, some of which we elide due to space constraints. Below, we describe the core functions in detail. For a full API review, see the technical report and API documentation <cit.>. §.§ apply(): synchronous delegation apply(c: FnOnce(&mut T) -> U) -> U apply() is the primary function for blocking, synchronous delegation as described in earlier sections. It takes a closure of the form |&mut T| {}, where T is the type of the property. If the closure has a return value, apply returns this value to the caller. Importantly, apply() is synchronous, suspending the current fiber until the operation has completed. Often, the best performance with apply() is achieved when running multiple application fibers per thread. Then, while one fiber is waiting for its response, another may productively use the CPU. §.§ apply_then(): non-blocking delegation apply_then(c: FnOnce(&mut T) -> U, then: FnOnce(U)) Frequently, asynchronous (or non-blocking) application logic can allow the programmer to express additional concurrency either without running multiple fibers, or in combination with multiple fibers. Here, apply_then() returns to the caller without blocking, and does not produce a return value. Instead, the second closure, then, is called with the return value from the delegated closure, once it has been received. Figure <ref> demonstrates the use of apply_then() following the pattern of Figure <ref>. The then-closure is a very powerful abstraction, as it too is able to capture variables from the local environment, allowing it to perform tasks like adding the return value (once available) to a vector accessible to the caller. Here, Rust's strict lifetime rules automatically catch otherwise easily introduced use-after-free and dangling pointer problems, forcing the programmer to appropriately manage object lifetime either through scoping or reference-counted heap storage. Importantly, as apply_then() does not suspend the caller, it may freely be called from within delegated context.
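The figure demonstrating apply_then() is likewise not reproduced here; the following sketch, based on the description above, shows the intended pattern. The setup (a Trust<i64> named ct and reference-counted storage for results) is an illustrative assumption.

// Sketch of non-blocking delegation in the style described above.
use std::cell::RefCell;
use std::rc::Rc;

fn queue_increment(ct: &Trust<i64>, results: Rc<RefCell<Vec<i64>>>) {
    ct.apply_then(
        |c| { *c += 1; *c },                    // delegated closure: runs on the trustee, returns the new value
        move |v| results.borrow_mut().push(v),  // then-closure: runs locally once the response is received
    );
    // The caller returns immediately; because nothing blocks here,
    // this pattern is also legal from within delegated context.
}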
§.§ launch(): apply in a trustee-side fiber launch(c: FnOnce(&mut T) -> U) -> U launch_then(c: FnOnce(&mut T) -> U, then: FnOnce(U)) The most significant constraint imposed by Trust<T> on the closure passed to apply() and apply_then() is that the closure itself may not block. Blocking in delegated context means putting the trustee itself to sleep, preventing it from serving other requests, potentially resulting in deadlock. In previous work <cit.>, this problem was addressed by maintaining multiple server OS threads, and automatically switching to the next server when one server thread blocks. This avoids blocking the trustee, but imposes high overhead, resulting in considerably lower performance, as demonstrated in <cit.>. In Trust<T>, blocking in delegated context is prohibited: attempted suspensions in delegated context are detected at runtime, resulting in an assertion failure. Closures may still use apply_then(), but not the blocking apply(). [Other forms of blocking, such as I/O waits or scheduler preemption, do not result in assertion failures. However, these can significantly impact performance if common, as blocking the trustee can prevent other threads from making progress.] The lack of nested blocking delegation can be a significant constraint on the developer, and perhaps the most important limitation of Trust<T>. Specifically, it affects modularity, as a library function that blocks internally, even on delegation calls, cannot be used from within delegated context. To address this, without sacrificing the performance of the more common case, we provide a convenience function: launch(), which offers all the same functionality as apply(), but without the blocking restriction. Figure <ref> describes launch() from an implementation standpoint. launch() creates a temporary fiber on the trustee's thread, which runs the closure. If this fiber is suspended, the client is notified, and the trustee continues to serve the next request. Once the temporary fiber resumes and completes execution of the closure, it then delivers the return value and resumes the client fiber via a second delegation call. Thus, if a delegated closure fails the runtime check for blocking calls, the developer can fix this by replacing the apply() call with a launch() call. §.§.§ Atomicity and launch() That said, a complicating factor with blocking closures executed by launch() is that without further protection, property accesses are no longer guaranteed to be atomic: while the newly created fiber is suspended, another delegation request may be applied to the property, resulting in a race condition. To avoid this risk, launch() is implemented only for Trust<Latch<T>>. Latch<T> is a wrapper type which provides mutual exclusion, analogous to Mutex<T> except that it uses no atomic instructions, and thus may only be accessed by the fibers of a single thread.[In Rust terms, Latch<T> does not implement Sync.] §.§.§ Leveraging Rust for safe and efficient delegation Using the Rust type system, we ensure that delegated closures in Trust<T> cannot capture values that contain any references or pointers. In principle, this is far stricter than what is necessary: the existing and pervasive Rust traits Send and Sync already describe the types that may be safely moved and shared between threads, and this continues to hold within Trust<T>. That said, safety does not guarantee performance.
A common performance pitfall when writing delegation-based software is memory stalls on the trustee, which affect trustees disproportionately due to the polling nature of the delegation channel (see <ref>). Frequent cache misses and use of atomic instructions in delegated closures can substantially degrade trustee throughput vs. running closures with good memory locality. Generally speaking, cache line contention and use of atomic instructions are a natural result of sharing memory between threads. By prohibiting the capture of references and pointers, Trust<T> makes accidental shared-memory programming patterns much less likely in delegated code, and encourages pass-by-value practices. §.§.§ Variable-size and other heap-allocated values Rust closures very efficiently and conveniently capture their environment, which apply() sends wholesale to the trustee. However, only types with a size known at compile time may be captured in a Rust closure (or even allocated on the stack). In conventional Rust code, variable-size types, including strings, are stored on the heap, and referenced by a Box<T> smart pointer. For the reasons described above (see <ref>), we do not allow Box<T> or other types that include pointers or references to be captured in a closure: only pure values may pass through the delegation channel. As a result, variable-size objects and other heap-allocated objects must be passed as explicit arguments rather than captured, so that they may be serialized before transmission over the delegation channel. For example, a Box<[u8]> (a reference to a heap-allocated variable-sized array of bytes) cannot traverse the delegation channel. Instead, we encode a copy of the variable number of bytes in question into the channel, and pass this value to the closure when it is executed by the trustee. In practice, this takes the form of a slightly different function signature. apply_with(c: FnOnce(&mut T, V) -> U, w: V) -> U Here, the w argument is any type V: Serialize + Deserialize, using the popular traits from the serde crate. That is, any type that can be serialized and deserialized may pass over the delegation channel in serialized form. If more than one argument is needed, these may be passed as a tuple. Thus, to insert a variable-size key and value into an entrusted table, we might use: table_trust.apply_with(|table, (key, value)| table.insert(key, value), (key, value)) We use the efficient bincode crate internally for serialization. As a result, while passing heap-allocated values does incur some additional syntax, the impact in terms of performance is minimal. § KEY DESIGN AND IMPLEMENTATION DETAILS In this section, we delve deeper into the design and implementation of Trust<T>, from the mechanics of delegating Rust closures and handling requests and responses, to asynchronous versions of apply(). §.§ Delegating Closures The key operation supported by Trust<T> is apply(), which applies a Rust closure to the property referenced by the trust. A Rust closure consists of an anonymous function and a captured environment, which together are represented as a 128-bit fat pointer. Thus, to delegate a closure, a request must at minimum contain this fat pointer, and a reference to the property in question. One or more requests are written to the client's dedicated, fixed-sized request slot for the appropriate trustee. That is, only the client thread may write to the request slot.
For efficiency, if the captured environment of the closure fits in the request slot, we copy the environment directly to the slot, and update the fat pointer to reflect this change. A flag in the request slot indicates that new requests are ready to be processed. See <ref> for details on request and response slot structure. Responses are transmitted in a matching dedicated response slot. Leveraging the Rust type system, we restrict both requests and responses to types that can be serialized. The subtle implication of this is that the return value may not pass any references or pointers to trustee-managed data.[That said, we cannot prevent determined Rust programmers from using unsafe code to circumvent this restriction.] While small closures with simple, known-and-fixed-size return types will generally yield the best performance, there is no limit beyond the serializability requirement on the size or complexity of closures and return types. §.§ Scheduling Delegation Work Generally speaking, a call to apply() appends a request to a pending request queue, local to the requesting thread. In the case of apply(), the calling fiber is then suspended, to be woken up when the response is ready. Pending requests are sent during response polling, and as soon as an appropriate request slot is available. The intervening time is spent running other fibers, including the local trustee fiber, and polling for responses/transmitting requests. There is a throughput/latency trade-off between running application fibers, and polling for requests/responses: poll too often, and few requests/responses will be ready, wasting polling effort. Poll too seldom, and many requests/responses will have been ready for a long time, increasing latency. Automatically tuning this trade-off is an area of ongoing research. That said, the current implementation performs delegation tasks in a fiber that is scheduled in FIFO order just as other fibers. After serving incoming requests, this fiber polls for incoming responses and issues any enqueued outgoing requests as applicable. §.§.§ Local Trustee Shortcut When a Trust has the current thread as its trustee, it is superfluous to use delegation to apply the closure. Instead, it is just as safe, and more efficient, to simply apply the closure directly, since we know that no other closures will run until the provided closure has run to completion. As a reminder, we know this because delegated closures may not suspend the current fiber. §.§ Request and Response Slot Structure Figure <ref> illustrates the internal structure of the basic request and response slot design. A header consisting of a ready bit and a request count is followed by a variable number of variable-sized requests. The value of the ready bit is used to indicate whether a new request or set of requests has been written to the slot: if the bit differs from the ready bit in the corresponding response slot, then a new set of requests is ready to be processed. By default, the slot size is 1152 bytes, and the client may submit as many closures as it can fit within the slot. Here, the minimum size of a request is 24 bytes: a 128-bit fat pointer for the closure, and a regular 64-bit pointer for the property. The captured environment of Rust closures has a known, fixed size, which is found in the vtable of the closure. For typical small captured environments, this is copied into the request slot, and the pointer updated to point at the new location.
Serialized closure arguments are appended next, followed by the next request. Responses are handled in a similar fashion, though there is no minimum response size. Responses are sent simultaneously for all the requests in the request slot. The size of each response is often statically known, in which case it is not encoded in the channel. Any variable-size responses are preceded by their size. The size of each request is always known, either statically or at the time of submission, which means we can restrict the number of requests sent to what can be accommodated by the request slot. The size of the response is not always known at the time the request is sent. In cases where the combined size of return values exceeds the space in the response slot, the trustee dynamically allocates additional memory to fit the full set of responses, at a small performance penalty. §.§.§ Two-part slot optimization In order to accommodate a broad range of application characteristics, including those with a single trustee and many clients, as well as a single client with many trustees, we introduce a small optimization beyond the basic design above. Rather than represent the request and response slots as monolithic blocks of bytes, we represent each as two blocks: a 128-byte primary block, and a 1024-byte overflow block; each request and response is written, in its entirety, to one or the other block. This addresses an otherwise problematic trade-off with respect to the request and response slot sizes: with a monolithic request slot of, say, one kilobyte, the trustee would be periodically scanning flags 1024 bytes apart, a very poor choice from a cache utilization perspective, unless the slots are heavily utilized. A two-part design accommodates a large number of requests (where needed), but improves the efficiency of less heavily utilized request slots by spacing ready flags, and a small number of compact requests, more closely using a smaller primary request block. § EVALUATION Below, we evaluate the performance of Trust<T> in two ways: 1) on microbenchmarks, designed to stress test the core mechanisms behind Trust<T> and locking, and 2) on end-to-end application benchmarks, which measure the performance impact of Trust<T> in the context of a complete system and a more realistic use case. §.§ Fetch and Add: Throughput For our first microbenchmark, we use a basic fetch-and-add application. Here, a number of threads repeatedly increment a counter chosen from a set of one or more, and fetch the value of the counter. In common with prior work on synchronization and delegation <cit.>, we also include a single pause instruction in both the critical section and the delegated closures. The counter is chosen at random, either from a uniform distribution, or a zipfian distribution. Each thread completes 1 million such increments. In this section, each data point is the result of a single run. Below, we primarily evaluate on a two-socket Intel Xeon CPU Max 9462, of the Sapphire Rapids architecture. This machine has a total of 64 cores, 128 hyperthreads, and 384 GB of RAM. Unless otherwise noted, we use 128 OS threads. In testing, several older x86-64 ISA processors have shown similar trends – these results are not shown here. For locking solutions, we use standard Rust Mutex<T> and the spinlock variant provided by the Rust spin-rs-0.9.8 crate, as well as MCSLock<T> provided by the Rust synctools-0.3.2 crate. For Trust<T>, we show results for blocking delegation (Trust) as well as nonblocking delegation (Async). In Fig.
<ref>, we also include TCLocks, a recent combining approach offering a transparent replacement for standard locks, via the Litl lock wrapper <cit.> for pthread_mutex. To be able to evaluate this lock, we wrote a separate C microbenchmark, matching the Rust version. In the interest of an apples-to-apples comparison, we first verified that the reported performance with stock pthread_mutex on the C microbenchmark matched the Rust Mutex<T> performance in our Rust microbenchmark. Below, the Trust results may be seen to represent any application with ample concurrency available in the form of conventional synchronous threads. Async represents applications where a single thread may issue multiple simultaneously outstanding requests, e.g. a key-value store or web application server. Applications with limited concurrency are not well suited to delegation, except where the delegated work is itself substantial, which is not the case for this fetch-and-add benchmark. We further report results both with all cores serving as both clients and trustees (shared), and with an ideal number of cores dedicated to serving only as trustees (dedicated). §.§.§ Uniform Access Pattern Figure <ref> illustrates the performance of several solutions on the uniform distribution version of this benchmark. For a very small number of objects, no data points are reported for some of the lock types - this is because the experiment took far too long to run due to severe congestion collapse. Trust<T> substantially outperforms locking under congested conditions. Between 1–16 objects, the performance advantage is 8–22× the best-performing MCSLock. For larger numbers of objects, the overhead of switching between fibers becomes apparent, as asynchronous delegation is able to reach a higher peak performance. In entirely uncongested settings, with 10× as many objects as there are threads, locking is able to match asynchronous delegation performance. TCLocks <cit.> was the only lock type to complete the single-lock experiment within a reasonable time. It consistently outperforms spinlocks under congestion, and remains competitive with Mutex and MCS on highly congested locks. However, TCLocks appear to trade their transparency for high memory and communication overhead, making them unable to compete performance-wise beyond highly congested settings. [TCLocks performance appears somewhat architecture dependent. In separate runs on our smaller Skylake machines, TCLocks were able to outperform Mutex by ≈50% under the most extreme contention (a single lock).] Moreover, we struggled to apply TCLocks to memcached (which consistently crashed under high load), as well as to Rust programs (as Rust now uses built-in locks rather than libpthreads wrappers). We thus elide TCLocks from the remainder of the evaluation. §.§.§ Skewed Access Pattern: Zipfian distribution Zipf's law <cit.> elegantly captures the distribution of words in written language. In brief, it says that the probability of word occurrence p_w is distributed according to the rank r_w of the word, thus: p_w ∝r_w^-α, where α∼ 1. Similar relationships, often called “power laws”, are common in areas beyond written language <cit.>, sometimes with a greater value for α. The higher the α, the more pronounced is the effect of popular keys, resulting in congestion.
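As an illustration of how such a skewed access pattern can be generated, the sketch below draws ranks from p_r ∝ r^-α by inverse-CDF lookup; it is our own illustration, not the benchmark's actual generator.

// Illustrative Zipf sampler: rank 0 is the most popular object.
fn zipf_cdf(n: usize, alpha: f64) -> Vec<f64> {
    let mut cdf = Vec::with_capacity(n);
    let mut acc = 0.0;
    for r in 1..=n {
        acc += (r as f64).powf(-alpha); // unnormalized p_r = r^(-alpha)
        cdf.push(acc);
    }
    for c in cdf.iter_mut() {
        *c /= acc; // normalize so the last entry is 1.0
    }
    cdf
}

// Map a uniform sample u in [0, 1) to a 0-based rank via binary search on the CDF.
fn zipf_sample(cdf: &[f64], u: f64) -> usize {
    cdf.partition_point(|&c| c < u)
}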
Figure <ref> shows the results of our fetch-and-add experiment, but with objects selected according to a zipfian distribution (α=1) instead of the uniform distribution above, representing a common skewed access distribution. With this skewed access pattern, Trust<T> overwhelmingly outperforms locking across the range of table sizes. This is explained by the relatively low throughput of a single lock. In our experiments, even MCSLocks, known for their scalability, offer at best 2.5 MOPs. When a skewed access pattern concentrates accesses to a smaller number of such locks, low performance is inevitable. By comparison, a single trustee will reliably offer 25 MOPs, for similarly short critical sections. For more highly skewed patterns, where α > 1 (not shown), the curve grows ever closer to the horizontal as performance is bottlenecked by a small handful of popular items. §.§ Fetch and Add: Latency Next we measure mean latency for a scenario with 64 objects (uniform access distribution), and 1,000,000 objects (Zipfian access distribution), while varying the offered load. We show delegation results with 8 dedicated trustee cores, and with 64 shared trustee cores[The evaluation system has 64 cores, 128 hardware threads. In the vast majority of cases, having both hardware threads of each core work as trustees results in reduced performance.]. We also plot the results for a spinlock, a standard Rust mutex, and an MCS lock as above. At low load, low contention results in low latency for locking, an ideal situation for locks. However, as load increases, the locks eventually reach capacity, resulting in a rapid rise in latency. With Trust<T>, even low load incurs significant latency, due to message passing overhead. However, due to the much higher per-object capacity available, latency increases slowly with load until the capacity is reached. Thus, Trust<T> offers stable performance over a wide range of loads, at the cost of increased latency at low load. The higher latency does mean that to take full advantage of delegation, applications need to have ample parallelism available. For both Uniform and Zipfian access distributions, we also measured 99.9th percentile (tail) latency (not shown). Overall, tail latency with locking (all types) tended to be approximately 10× the mean latency, in low-congestion settings. Delegation tail latency with a dedicated trustee, meanwhile, was 2.5× the mean, making delegation tail latency under low load only 2–3× that of locking. It's also worth noting the difference between 8 dedicated trustees, and 64 trustees on threads shared with clients. The latency when sharing the thread with clients is naturally higher than when using trustees dedicated to trustee work. However, as load increases, having more trustees available to share the load results in better performance. Using all the cores for trustees all the time also eliminates an important tuning knob in the system. §.§ Concurrent key-value store For a more complete end-to-end evaluation, we implement a simple TCP-based key-value store, backed by a concurrent dictionary. Here, we run a multi-threaded TCP client on one machine, and our key-value store TCP server on another, identical machine. The two machines are connected by 100 Gbps Ethernet. We compare our Trust<T>-based solution to Dashmap <cit.>, one of the highest-performing concurrent hashmaps available as a public Rust crate, as well as to our own naïvely sharded Hashmap, using Mutex or readers-writer locks and the Rust std::collections::HashMap<K, V>.
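For reference, the naïvely sharded baseline can be sketched as follows; key/value types, the hasher, and the shard count are illustrative assumptions rather than the exact benchmark code.

// Sketch of a naïvely sharded hashmap: a fixed array of Mutex-protected std HashMaps,
// with the shard chosen by hashing the key.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};
use std::sync::Mutex;

struct ShardedMap {
    shards: Vec<Mutex<HashMap<Vec<u8>, Vec<u8>>>>,
}

impl ShardedMap {
    fn new(n_shards: usize) -> Self {
        Self { shards: (0..n_shards).map(|_| Mutex::new(HashMap::new())).collect() }
    }

    fn shard_of(&self, key: &[u8]) -> usize {
        let mut h = DefaultHasher::new();
        key.hash(&mut h);
        (h.finish() as usize) % self.shards.len()
    }

    fn insert(&self, key: Vec<u8>, value: Vec<u8>) {
        let i = self.shard_of(&key);
        self.shards[i].lock().unwrap().insert(key, value);
    }

    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        let i = self.shard_of(key);
        self.shards[i].lock().unwrap().get(key).cloned()
    }
}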
Dashmap is a heavily optimized and well-respected hash table implementation, which is regularly benchmarked against competing designs. We implement the key-value store as a multi-threaded server, where each worker thread receives queries from one or more connections, and applies these to the back-end hashmap. Both reading requests and sending results are done in batches, so as to minimize system call overhead. Moreover, the client accepts responses out of order, to minimize waiting. The TCP client continuously maintains a queue of parallel queries over the socket, such that the server always has new requests to serve. In the experiments, we dedicate one CPU core to each worker thread. For our sharded hashmaps, we create a fixed set of 512 shards, using many more locks than threads to reduce lock contention. Dashmap uses sharding and readers-writer locks internally, but exposes a highly efficient concurrent hashmap API. For our Trust<T>-based key-value store, we use 16 and 24 cores to run trustees (each hosting a shard of the table) exclusively, and the remaining cores for socket workers. They are named Trust16 and Trust24, respectively. Socket workers delegate all hash table accesses to trustees. The key size is 8 bytes and the value size is 16 bytes in the experiments. Prior to each run, we pre-fill the table, and report results from an average of 10 runs. Figures <ref>–<ref> show the results from this small key-value store application, for a varying total number of keys with 5% write requests and 95% read requests, and Uniform as well as Zipfian <cit.> access distributions. For Zipfian access, we use the conventional α=1. Overall, similar to the microbenchmark results, we find that the delegation-based solution performs significantly better when contention for keys is high. However, due to the considerably higher complexity of this application, the absolute numbers are lower than in our microbenchmarks. The relative advantage for delegation is also somewhat smaller, as some parts of the work of a TCP-based key-value store are already naturally parallel. For the Uniform distribution and 5% writes, all the solutions perform similarly above 1,000 keys, a large enough number that there is no significant contention. With 100 keys and fewer, Trust<T> enjoys a large advantage even under uniform access distribution. With a Zipfian access distribution, accesses are concentrated to the higher-ranked keys, leading to congestion. In this setting, Trust<T> trounces the competition, offering substantially higher performance across the full 1–100,000,000 key range. It is interesting to note, also, that the Zipfian access distribution is where the carefully optimized design of Dashmap shines, while it offers a fairly limited advantage over a naïve sharded design with readers-writer locks on uniform access distributions. This speaks to the importance of efficient critical sections in the presence of lock congestion. The throughput of Trust16 is higher than that of Trust24 with 1,000–100,000 keys because a relatively small key space is cheap to manage, while Trust16 can dedicate more resources to handle socket connections. However, the performance of Trust16 starts to degrade with more keys, because the limited number of trustees falls short when managing larger key spaces. With 24 trustees, the performance can be maintained at a high level. The difference between Trust16 and Trust24 suggests an important direction of future research.
For I/O-heavy processes like key-value stores, dedicated trustees will often outperform sharing the core between trustees and clients. However, it is non-trivial to correctly choose the number of trustees. Automatically adjusting the number of cores dedicated to trustee work at runtime would be preferable. In principle, readers-writer locks have a major advantage over Trust<T> in that they allow concurrent reader access, while Trust<T> exclusively allows trustees to access the underlying data structure. To better understand this dynamic, Figures <ref>–<ref> show key-value store throughput over a varying percentage of writes. Here, we use 1,000 keys for the Uniform access distribution, and 10,000,000 keys for the Zipfian access distribution. We note that these are table sizes where lock-based approaches hold an advantage in Figures <ref>–<ref>. For Uniform access patterns, where there is limited contention given the table size of 1,000 keys, the impact of the write percentage is muted. For lock-based designs, the performance does drop somewhat, but remains at a high level even with 100% writes. It is interesting to note that Trust<T> performance increases modestly with the write percentage. One reason behind this is that in our key-value store, the closures issued by reads by necessity have large return values, while the closures issued by writes have no return values at all. This may allow the trustee to use only the first, small part of the return slot, occasionally saving two LLC cache misses per round-trip. With the Zipfian access distribution, even with 10,000,000 keys, contention remains a bigger concern, especially for Mutex. All four designs exhibit reduced performance with increased write percentages, but again, Trust<T> proves more resilient. The efficiency advantage of Dashmap over our naïve lock-based designs is on full display with the Zipfian access distribution and a high write percentage. That said, the fundamental advantage of Trust<T> over locking in this application is clear. § LEGACY APPLICATION: MEMCACHED We also port memcached version 1.6.20 to Trust<T> to demonstrate both the applicability and performance impact on legacy C applications. Memcached is a multi-threaded key-value store application. Its primary purpose is serving PUT and GET requests with string keys and values over standard TCP/IP sockets. Internally, memcached contains a hash-table type data structure with external linkage and fine-grained per-item locking. By default, memcached is configured to use a fixed number of worker threads. Incoming connections are distributed among these worker threads. Each worker thread uses the epoll() system call to listen for activity on all its assigned connections. Each connection to a memcached server traverses a fairly sophisticated state machine, a pipelined design that is aimed at maximizing performance when each thread serves many concurrent connections with diverse behaviors. The state machine will process requests in this sequence: receive available incoming bytes, parse one request, process the request, enqueue the result for transmission, and transmit one or more results. For our port to Trust<T>, we eliminate the use of most locks, and instead divide the internal hash table and supporting data structures into one or more shards, and delegate each shard to one of potentially multiple trustees. Thus, instead of acquiring a lock, we delegate the critical section to the appropriate trustee for the requested operation.
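The porting idea can be sketched as follows, in contrast to the Mutex-sharded baseline shown earlier: each shard is entrusted to a trustee, and a socket worker delegates the former critical section instead of taking a lock. Types and names below are illustrative only, not the actual memcached port.

// Sketch: a GET whose former lock-protected critical section is now a closure
// applied by the trustee that owns the key's shard.
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

type Shard = HashMap<Vec<u8>, Vec<u8>>; // stand-in for the port's per-shard structures

fn shard_index(key: &[u8], n_shards: usize) -> usize {
    let mut h = DefaultHasher::new();
    key.hash(&mut h);
    (h.finish() as usize) % n_shards
}

fn delegated_get(shards: &[Trust<Shard>], key: &[u8]) -> Option<Vec<u8>> {
    let i = shard_index(key, shards.len());
    // The variable-size key travels as a serialized argument; the closure runs on the
    // shard's trustee, so no lock is taken and the caller receives a copy of the value.
    shards[i].apply_with(|shard, k: Vec<u8>| shard.get(&k).cloned(), key.to_vec())
}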
Our ported version follows the original state machine design, with one key difference: for each incoming request on the socket, we make an asynchronous delegation request using apply_then, then move on to the next request without waiting for the response from the trustee. That is, rather than sequentially processing each incoming request, we leverage asynchronous delegation to capture additional concurrency. A complicating factor in this asynchronous approach results from memcached being initially designed for synchronous operation with locking. For any one trustee-client pair, even asynchronous delegation requests are executed in order, and responses arrive in order. However, this is not guaranteed for requests issued to different trustees. Consequently, the memcached socket worker thread must order the responses before they are transmitted over the network socket to the remote client. By contrast, our delegation-native key-value store in <ref> sends responses out of order over the socket, and instead includes a request ID in the response. Another difference worth mentioning is that we don't allow delegation clients (in this case, the memcached socket worker thread) to access delegated data structures at all. This means that instead of a pointer to a value in the table, clients receive a copy of the value. This significantly improves memory locality and simplifies memory management, since every value has a single owner. However, it does incur extra copying, which may reduce performance under some circumstances.[This can become a problem when values are large. For this use case, Trust<T> includes an equivalent of Rust's Arc<T>, which allows multiple ownership of read-only values. ] In practice, because memcached is written in C and Trust<T> is written in Rust, we cannot directly add delegation to the memcached source code. We address this in a two-step process: first, for any task that requires delegation, we create a minimal Rust function that performs that specific task. That is, a custom Rust function that becomes part of the memcached code base. Typically, such a function locates the appropriate Trust or TrusteeReference, and delegates a single closure. Second, we break out the critical sections in the C code into separate inner functions that may be called from Rust. Thus, to delegate a C critical section, we simply call the inner function from a delegated Rust closure. Our port of memcached to Trust<T> has approximately 600 added, deleted, or modified lines of code, out of 34,000+ lines total. This number includes approximately 200 lines which were simply cut-and-pasted into the new inner functions for critical sections. In addition, we introduced approximately 350 new lines of Rust code, to provide the interface between the C and Rust environments.

§.§ Evaluation
To understand the performance of our delegated memcached, we use the memtier benchmark client (version 1.4.0) with our delegated memcached as well as stock memcached. For the cleanest results, but without loss of generality, we configure memcached with a sufficiently large hash power and available memory to eliminate table resizing and evictions. We also limit our evaluation to the conventional memcached PUT/GET operations. Recent versions of memcached feature an optional new cache eviction scheme, which trades less synchronization for the need for a separate maintenance thread. For stock memcached, we evaluated both the traditional eviction scheme and the new one. 
We show results for the new scheme, which scales much better for write-heavy workloads and is otherwise similar in our setting. For our ported version, we use the traditional eviction scheme, maintaining one LRU per shard. Eviction is not relevant here, as we provide ample memory relative to the table size. The server and client run on separate machines, connected by 100Gbps Mellanox-5 Ethernet interfaces via a 100Gbps switch. Both client and server machines are 28-core, two-socket systems with Intel Sandy Bridge CPUs and 256 GB of RAM. The machines run Ubuntu Linux with kernel version 5.15.0. Unless otherwise noted, we structure the experiments as follows: we start a fresh memcached instance, populate the table with the indicated number of key-value pairs, and then run measurements with 1% writes, 5% writes, and 10% writes. After this, we start over with a new, empty memcached instance. Each data point represents a single experiment, each set to last 20 seconds. For each, unless otherwise noted, we choose memcached and memtier parameters to maximize throughput. By default, this means 28 memcached threads pinned to hardware threads 0–27. Running with 56 hardware threads did not yield any further performance improvement. On the memtier side, we configure 28 threads, with four clients per thread, and pipelining set to 48. Figures <ref>–<ref> illustrate the throughput of memcached as we vary the number of keys in the table. While the absolute numbers are significantly lower than in the microbenchmarks and the key-value store, the overall picture from memcached corresponds well with previous experiments. Using Trust<T> results in performance improvements of more than 5× when accessing popular objects, whether this popularity is due to a uniform access distribution across a smaller number of keys, or a Zipfian distribution over millions of key-value pairs. When all items are accessed infrequently, locking suffers very little contention, and has the advantage of better distributing the work across cores. Here, this results in performance competitive with delegation, at least for read-heavy workloads. The stock version is heavily affected by writes, due to the extra work required for these operations. This includes memory allocation, LRU updates as well as table writes, all of which involve synchronization in a lock-based design. With Trust<T>, all such operations are local to the shard/trustee, and do not require synchronization. With 5% writes, stock memcached loses ≈40% of its performance, while the Trust<T> version sees only a minor performance penalty, resulting in delegation outperforming locking in this setting for the entire range of table sizes. While not shown, this trend continues with even more writes.

§ CONCLUSIONS
In this paper, we proposed Trust<T>, a new delegation-based programming model for safe, high-performance concurrent access to shared mutable state in Rust. Trust<T> provides an intuitive API that replaces locking with message passing between application threads and trustees in charge of shared data structures. Beyond the delegation API, we introduced two novel techniques to enable modular programming with nested delegation. First, apply_then() provides non-blocking delegation requests, which may be issued from within a delegated context. Second, launch() safely supports arbitrary delegated code, by running the critical section in a separate fiber, protected by a single-threaded latch construct. Trust<T> provides evidence that delegation can be a competitive alternative to locking in real systems. 
The programming model integrates cleanly into Rust, making delegation an accessible option for developers. This work lays the groundwork for additional language and runtime support to unlock the performance and scalability benefits of delegation-based designs.
http://arxiv.org/abs/2408.11396v1
20240821074349
MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing
[ "Hao Zhou", "Zhijun Wang", "Shujian Huang", "Xin Huang", "Xue Han", "Junlan Feng", "Chao Deng", "Weihua Luo", "Jiajun Chen" ]
cs.CL
[ "cs.CL" ]
1 National Key Laboratory for Novel Software Technology, Nanjing University, China; 2 China Mobile Research, Beijing, China; 3 Alibaba Group, China

MoE-LPR: Multilingual Extension of Large Language Models through Mixture-of-Experts with Language Priors Routing

Hao Zhou1  Zhijun Wang1  Shujian Huang1 (corresponding author)  Xin Huang2 Xue Han2 Junlan Feng2 Chao Deng2 Weihua Luo3 Jiajun Chen1

August 26, 2024
==================================================================================================================================================

§ ABSTRACT
Large Language Models (LLMs) are often English-centric due to the disproportionate distribution of languages in their pre-training data. Enhancing non-English language capabilities through post-pretraining often results in catastrophic forgetting of the ability in the original languages. Previous methods either achieve good expansion with severe forgetting or slight forgetting with poor expansion, indicating the challenge of balancing language expansion while preventing forgetting. In this paper, we propose a method called MoE-LPR (Mixture-of-Experts with Language Priors Routing) to alleviate this problem. MoE-LPR employs a two-stage training approach to enhance the multilingual capability. First, the model is post-pretrained into a Mixture-of-Experts (MoE) architecture by upcycling, where all the original parameters are frozen and new experts are added. In this stage, we focus on improving the ability in the expanded languages, without using any original language data. Then, the model reviews the knowledge of the original languages with replay data amounting to less than 1% of post-pretraining, where we incorporate language priors routing to better recover the abilities of the original languages. Evaluations on multiple benchmarks show that MoE-LPR outperforms other post-pretraining methods. Freezing original parameters preserves original language knowledge while adding new experts preserves the learning ability. Reviewing with LPR enables effective utilization of multilingual knowledge within the parameters. Additionally, the MoE architecture maintains the same inference overhead while increasing total model parameters. Extensive experiments demonstrate MoE-LPR's effectiveness in improving expanded languages and preserving original language proficiency with superior scalability. Code and scripts are freely available at <https://github.com/zjwang21/MoE-LPR.git>.

§ INTRODUCTION
Large Language Models (LLMs) such as ChatGPT <cit.>, GPT-4 <cit.>, Llama2 <cit.>, Llama3 <cit.>, and Qwen <cit.> have demonstrated remarkable performance across different tasks, including multiple-choice question-answering <cit.>, summarization <cit.>, and reasoning <cit.>. However, many studies have highlighted a significant discrepancy between performance on English and non-English tasks <cit.>. Pre-training an LLM with data from multiple languages may achieve better multilingual capabilities, but it is highly resource-intensive and often impractical given limited computational budgets. Consequently, current research predominantly focuses on post-pretraining (also known as continued training) techniques <cit.>, which carry out further multilingual pre-training on a pre-trained LLM, aiming to inject extensive language knowledge for certain language(s). Despite its efficiency, this method significantly increases the risk of catastrophic forgetting, where the performance of LLMs in the languages they are initially good at (such as English or Chinese) may dramatically decline. 
As a result, improving the performance of expanded languages while maintaining the performance of existing ones becomes a critical challenge in the field. To prevent forgetting, existing work <cit.> usually retains the original parameters of the model as much as possible, and trains new parameters to fit knowledge for new languages. However, less attention is paid to effectively incorporating these new and old parameters for tasks in different languages. In this paper, we propose a novel two-stage training method called Mixture-of-Experts with Language Priors Routing (MoE-LPR) that improves multilingual capability while retaining original language proficiency. MoE-LPR contains two stages: post-pretraining with MoE and review with LPR. In the post-pretraining stage, we upcycle the LLM into an MoE architecture and post-pretrain the newly added parameters with a substantial amount of high-quality monolingual data, while keeping the original parameters frozen. This ensures that the original capabilities of the model are preserved while expanding its proficiency in additional languages. We also incorporate a load balancing loss to unleash the model's learning potential and maintain training stability. In the review stage, we further train the router to better utilize the experts for different languages. We design LPR training to recover the model's capabilities in its original languages using replay data that amounts to less than 1% of the post-pretraining corpus. As shown in Figure <ref>, experimental results demonstrate that our method not only significantly improves proficiency in newly expanded languages (languages in the top half) but also substantially retains the model's capabilities in its original languages (languages in the bottom half). Moreover, our approach allows for easy upscaling of the number of model parameters while maintaining a fixed inference overhead. Our approach represents a step forward in developing LLMs that are both powerful and versatile across a wide range of languages, addressing the critical need for more inclusive and effective NLP technologies in a multilingual world. The contributions of our proposed method are as follows:
* Two-Stage Training Strategy: MoE-LPR employs a two-stage training strategy, with a special focus on balancing the capability of newly expanded languages and the original languages.
* Language Priors Routing: MoE-LPR introduces the LPR mechanism to mitigate catastrophic forgetting of original languages with replay data amounting to less than 1% of the post-pretraining corpus. LPR also exhibits excellent generalization to languages it has not been trained on.
* Scalability: MoE-LPR is designed to easily upscale the number of model parameters without increasing the inference overhead and the risk of catastrophic forgetting, making it a cost-effective and stable solution for multilingual NLP tasks.

§ METHODOLOGY
Figure <ref> describes the overall framework of our MoE-LPR. In the post-pretraining with MoE stage, we train the new experts on a large amount of monolingual data in the expanded languages to inject language knowledge. In the review with LPR stage, we train the router on a small amount of monolingual data in both the expanded and original languages to better utilize the experts. 
§.§ Post-pretraining with MoE
As shown in Figure <ref>, inspired by Mixtral <cit.> and upcycling <cit.>, we upcycle the dense model to an MoE model by copying the FFN parameters and incorporating a router matrix W_r ∈ ℝ^h×N in each layer, where h represents the token dimension and N denotes the number of experts within the model. The router in MoE allows the model to dynamically select the most suitable experts. Formally, let x ∈ ℝ^h be a token representation; the router score is expressed as

G(x) = Softmax(x · W_r),

where G(x) ∈ ℝ^N. After obtaining the router scores, we select the index set 𝒯 of the top-K experts and combine their outputs using normalized weights from the router scores to obtain the final representation as

𝒯 = { i | G_i(x) ∈ TopK(G(x), K) },

y = ∑_i ∈𝒯 [G_i(x) / ∑_j ∈𝒯 G_j(x)] E_i(x) + x,

where G_i(x) and E_i(x) represent the router score and the output of the i-th expert, respectively, and K denotes the number of activated experts. To enhance the multilingual capability of the MoE model while preserving its performance in the original languages, we freeze the parameters of the original dense model. During post-pretraining on the expanded language corpus, we only update the parameters of the newly added experts and the router, which ensures that the core knowledge embedded in the initial model remains intact. The model is trained with a combination of a next token prediction loss and a load balancing loss as follows.

Next Token Prediction Loss. Given an expanded language corpus D, a batch ℬ with T tokens, and N experts indexed by i from 0 to N-1, where index 0 is used to denote the original dense FFN, the post-pretraining next token prediction loss is

L_NTP(θ_new, W_r) = - ∑_i=1^|ℬ| ∑_j=1^|d^i| log p_ℳ(d_j^i | d_<j^i),

where ℳ denotes the whole MoE model, θ_new indicates the parameters of the newly added experts, and W_r is the parameter of the router.

Load Balancing Loss. We also use an expert-level load balance loss <cit.> to mitigate the risk of routing collapse:

L_balance(θ_new, W_r) = ∑_i=1^N f_i P_i,
f_i = N/(KT) ∑_t ∈ℬ 1{token t selects expert i},
P_i = 1/T ∑_t ∈ℬ G_i(t),

where 1 denotes the indicator function. We opt for a top-2 strategy by setting K=2 to select the two most suitable experts with normalization, intending to achieve a trade-off between inference overhead and learning capabilities. The final optimization objective during post-pretraining is

argmin_θ_new, W_r  L_NTP + α L_balance,

where α is a hyper-parameter that controls the weight of the load balancing loss.

§.§ Review with LPR
After post-pretraining on the expanded language corpus, the router, which has only been trained on the expanded languages but not on the original languages, may incorrectly assign experts for the original languages. This misallocation is also an important factor in catastrophic forgetting in the MoE model. Therefore, we design this review stage to train the model to deal with both original and expanded languages. As the router is the main source of the problem, we only update the parameters of the router and freeze the other parts of the model. Because the number of router parameters accounts for a negligible proportion, this stage is efficient and requires very little computational resources and training data. In fact, the amount of original language data used in our review stage is less than 1% of the post-pretraining corpus. 
In comparison, the traditional replay strategy <cit.> incorporates data from original languages into the post-pretraining stage, which usually requires a much larger amount (25%).

LPR Loss. Intuitively, the routing could be led by language priors: all the original language tokens should be routed to the originally frozen expert (i.e., expert 0 in this case), making the model work exactly the same as before the expansion. Therefore, we design the LPR loss to be a cross-entropy loss for the tokens from the original languages, forcing the top-1 selection of these tokens to be expert 0, where the top-1 selection refers to the expert selection with the highest routing score. Formally, consider the original language token set D_original and the indicator function 𝐅(t), with 𝐅(t) = 1 if t ∈ D_original and 𝐅(t) = 0 if t ∉ D_original. The LPR loss is defined as

L_LPR(W_r) = -∑_t ∈ℬ 𝐅(t) log G_0(t),

where index 0 denotes the originally frozen expert. In practice, when training with the LPR loss, we remove the load balancing loss in Eq. (<ref>). The final optimization objective for the review stage is

argmin_W_r  L_NTP + γ L_LPR,

where γ is a hyper-parameter that controls the weight of the LPR loss.

§ EXPERIMENTS
§.§ Experiment Setup
Given the focus on multilingual capability enhancement, we introduce the language selection first, followed by the training details, several baselines, and the evaluation details.

Model and Languages. We choose Qwen-1.5 [Qwen-1.5 has a powerful multilingual tokenizer that produces shorter sequences in expanded languages, which means we don't have to worry about vocabulary expansion.] as our base model. The 1.8B version of the Qwen-1.5 series is selected for its lower computation overhead and ease of upcycling. For our study, we choose three low-resource languages as the expanded languages where Qwen-1.5-1.8B performs poorly, as shown in Figure <ref>: Greek (El), Hungarian (Hu), and Turkish (Tr). Additionally, we select three high-resource languages as the original languages to observe the catastrophic forgetting phenomenon: English (En), Chinese (Zh), and Spanish (Es).

Details of Post-pretraining. We construct a dataset focusing on the three expanded languages by sampling 8 billion tokens from the monolingual data of each language in CulturaX <cit.>, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages. Our base model, Qwen-1.5-1.8B, is upcycled into the MoE structure with 5 newly added FFNs (6 experts in total). We post-pretrain this model with the 24 billion tokens, marking only the new experts and the router as trainable. The training setup includes a batch size of 512, a sequence length of 1024, a learning rate of 5e-5, and a cosine learning rate scheduler. We incorporate the load balancing loss with a weight of 0.01 and utilize bf16 mixed precision and flash attention <cit.> to speed up the training process. Our experiments are conducted on 8 A800 GPUs, involving 45856 steps, totaling approximately 848 A800 GPU hours.

Details of Review. We randomly sample 50K documents for each original language and 100K documents for each expanded language. The English data are sampled from SlimPajama <cit.>, the Chinese data from SkyPile-150B <cit.>, and the Spanish data from CulturaX <cit.>. The number of tokens in original languages is 0.138B, accounting for less than 1% of the post-pretraining data (24B). As for the three expanded languages, we sample from the post-pretraining dataset. We concatenate these data for the review stage training. 
We employ a batch size of 512, a sequence length of 512, a learning rate of 5e-5, and a cosine learning rate scheduler. The load balancing loss is removed and the LPR loss is added as introduced in Eq. (<ref>) with a weight of 0.1. Only the router parameters are trainable. Bf16 mixed precision and the flash attention <cit.> mechanism are used for training.

Baselines. We conducted experiments on several existing baseline methods trained on the same data, including the small amount of replay data, to ensure that our approach is competitive and effective.
* Full Fine-tuning: Fine-tune all parameters directly on the dense model.
* LoRA <cit.>: The LoRA targets include all linear modules. We set the LoRA rank to 8.
* MoE: The same settings as MoE-LPR, except that all the parameters are trained in only one post-pretraining stage.
* LoRAMoE <cit.>: A novel framework that combines multiple LoRAs with a router network to effectively learn new knowledge while avoiding catastrophic forgetting. The router selects all LoRAs for each token. We set the number of LoRAs to 8 and a LoRA rank of 180 to match the same inference overhead.
* LLaMA-Pro <cit.>: A method in which a dense LLM periodically duplicates and inserts new transformer blocks at fixed layer intervals. During post-pretraining, only these newly added transformer blocks are trained to acquire new knowledge while preserving the original knowledge. We add 12 new layers because this is the best setting in our experiments.

Evaluation Details. We evaluate our method on several benchmarks including multiple-choice tasks and generation tasks, examining the model's multilingual capabilities from multiple perspectives.
* ARC-Challenge (25-shot) <cit.>: A benchmark for evaluating comprehension and reasoning across diverse academic fields.
* MMLU (5-shot) <cit.>: A multiple-choice dataset testing general knowledge and problem-solving across various subjects.
* HellaSwag (10-shot) <cit.>: A dataset with 70k questions for studying grounded commonsense inference.
* Belebele (5-shot) <cit.>: A machine reading comprehension dataset covering 122 language variants.
* FLORES-101 (8-shot) <cit.>: A parallel corpus for evaluating multilingual translation capabilities. We report the performance evaluated by COMET <cit.> [We use the wmt22-comet-da version.].
We mainly follow Okapi <cit.> to evaluate the multilingual versions of ARC-Challenge, MMLU, and HellaSwag, which are translated from the original English versions using GPT-3.5-turbo or DeepL. More details about the sources are reported in the technical appendix C.

§.§ Experiment Results
Table <ref> presents the performance of various methods across different benchmarks for both expanded and original languages. We report here the performance of the best setting of all baselines. With the additional small amount of replay data, full fine-tuning outperforms LoRA in preventing catastrophic forgetting but still drops about 4 points in original languages. Full fine-tuning can recover to 92.1% performance in original languages with replay data amounting to less than 1% of the post-pretraining data. <cit.> demonstrates that training new languages suffers from dramatic distribution shifts. Only when using more than 25% replay data can the model recover to more than 95.7% performance, indicating that significant language shifts in post-pretraining data require more replay data and computational overhead. However, our MoE-LPR can recover to 96.6% performance (52.12/53.97) with less than 1% replay data. 
LoRA performs poorly in expanded languages due to the excessive amount of data in the post-pretraining stage. We also experiment with LoRA at rank 64 to achieve comparable effects in expanded languages, but this results in worse catastrophic forgetting, as shown in the technical appendix B. LLaMA-Pro demonstrates a strong ability to retain knowledge, but its performance in expanded languages is only comparable to full fine-tuning, with the drawback of higher inference overhead. LoRAMoE performs better than the other baselines in both expanded and original languages. Our proposed method, MoE-LPR, surpasses LoRAMoE by 2.7 points in expanded languages and by 0.88 points in original languages on average. As more new parameters are added, the inference overhead of LLaMA-Pro and LoRAMoE increases accordingly, while that of MoE-LPR does not. More details about scaling will be discussed in the following sections. The results also demonstrate that MoE underperforms our MoE-LPR both in expanded and original languages, which implies that freezing all the original parameters does not limit the model's learning ability. In contrast, the frozen parameters contribute robust basic capabilities to the model during post-pretraining, resulting in significant performance improvement. Further details on each benchmark are provided in the technical appendix A.

§ ABLATION & ANALYSIS
§.§ Review with LPR
§.§.§ Performance Gain from Review & EC
The review with LPR stage is proposed to recover the capabilities of the original languages. As shown in Table <ref>, without the review stage, MoE-LPR exhibits severe catastrophic forgetting. However, after review training, the performance in original languages improves substantially, by about 5 points on average, while not harming the performance in expanded languages. Furthermore, the performance in original languages drops without the LPR loss, indicating that the LPR mechanism pushes this ability closer to its upper bound. These results show that the review stage allows the model to learn how to handle both new and old languages. We also conduct an experiment without Expert-Copy, which means that the parameters of the new experts are randomly initialized rather than copied from the original FFN. As shown in Table <ref>, performance in original languages does not suffer a serious decrease, but performance in expanded languages shows a significant decrease. The results imply that copying the original FFN to construct new experts is important for learning expanded language knowledge.

§.§.§ Routing Scheme for Different Languages
In this section, we examine whether the review stage works properly. As shown in Figure <ref>, the router scores of the frozen expert on original language tokens show obvious improvement with the review stage. In addition, without the LPR loss, the router scores demonstrate a significant drop. The router scores of the frozen expert on expanded language tokens remain almost unchanged, as shown in the technical appendix D. In the review stage, we optimize the model with only the next token prediction loss for expanded languages. The results show that the next token prediction loss effectively prevents expanded languages from being influenced by the language priors of original languages. These observations indicate that the review stage is functioning correctly, biasing the routing scheme of original language tokens toward the frozen expert. 
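To make the routing quantities in this analysis concrete, the following is a minimal, self-contained numeric sketch (written as standalone Rust purely for illustration; the actual implementation operates on batched tensors inside the Transformer). It computes the softmax router scores G(x) from given router logits x·W_r, the renormalized top-2 combination weights, and the LPR term -log G_0(t) applied to an original-language token; the logits themselves are made-up numbers, not values from the trained model.

```rust
// Minimal numeric sketch of the MoE-LPR routing quantities for a single token.
// The router logits x * W_r are given directly so the arithmetic is easy to follow.

fn softmax(logits: &[f64]) -> Vec<f64> {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&z| (z - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|&e| e / sum).collect()
}

/// Indices of the top-k entries of `scores` (k = 2 for MoE-LPR).
fn top_k(scores: &[f64], k: usize) -> Vec<usize> {
    let mut idx: Vec<usize> = (0..scores.len()).collect();
    idx.sort_by(|&a, &b| scores[b].partial_cmp(&scores[a]).unwrap());
    idx.truncate(k);
    idx
}

fn main() {
    // Router scores G(x) for N = 6 experts; expert 0 is the frozen, original FFN.
    let g = softmax(&[1.2, 0.3, -0.5, 0.9, -1.0, 0.1]);

    // Top-2 selection and renormalized combination weights used to mix expert outputs.
    let chosen = top_k(&g, 2);
    let norm: f64 = chosen.iter().map(|&i| g[i]).sum();
    for &i in &chosen {
        println!("expert {i}: combination weight {:.3}", g[i] / norm);
    }

    // LPR term for this token *if* it belongs to an original language:
    // a cross-entropy that pushes the top-1 choice towards expert 0.
    let is_original_language = true;
    let lpr_loss = if is_original_language { -g[0].ln() } else { 0.0 };
    println!("G_0(t) = {:.3}, LPR term = {:.3}", g[0], lpr_loss);
}
```

Because the LPR term only rewards a large G_0(t) and is applied to original-language tokens alone, it biases their top-1 choice toward the frozen expert without constraining the routing of expanded-language tokens, consistent with the routing behavior described above.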
§.§.§ How much Data is Enough for Review
In this section, we experiment with varying numbers of original language documents in the review stage, ranging from 0K to 150K, while maintaining the 1:2 mix of original and expanded languages. As shown in Figure <ref>, the original language performance continues to improve significantly while the expanded language performance continues to decrease slightly. After 50K, the improvement in original language performance starts to slow down. Therefore, considering both training cost and effects, we choose 50K as the best data size in this experiment, which amounts to less than 1% of the post-pretraining corpus. Using 50K results in a 4.98-point performance boost in the original languages while almost maintaining the performance in the expanded languages. These results indicate that a small amount of replay data is sufficient for the model to review its original languages.

§.§ Scaling Law
We compare the performance of LLaMA-Pro with different numbers of added layers, LoRAMoE with different ranks, and MoE-LPR with different numbers of experts. All the models are trained on the 24-billion-token dataset in the three expanded languages. Figure <ref> demonstrates the superior scalability of MoE-LPR. For expanded languages, adding 12 layers to LLaMA-Pro improves performance more than adding 6 layers, but adding 24 layers, matching the base model's layer count, results in a performance drop. Increasing the rank of LoRAMoE from 32 to 180 shows significant improvements. MoE-LPR consistently outperforms these configurations as more experts are added, even with just 2 experts, maintaining a significant advantage over LLaMA-Pro and LoRAMoE. For original languages, LLaMA-Pro suffers from catastrophic forgetting, worsening with more layers. Adding 24 layers even performs worse than full fine-tuning. Although LoRAMoE's catastrophic forgetting does not worsen with increased parameters, it still underperforms MoE-LPR. Even with 8 experts and a 7B parameter size, MoE-LPR can still greatly mitigate catastrophic forgetting. Unlike LLaMA-Pro and LoRAMoE, whose activated parameters per token increase linearly with more parameters, adding experts to MoE-LPR does not increase the inference overhead. This improves performance in expanded languages while maintaining stable levels of catastrophic forgetting. MoE-LPR demonstrates superior scalability. Details of each model in the scaling experiments are reported in the technical appendix A.

§.§ Language Generalization
In the review stage, we only use documents of the three original languages. We conduct evaluations on two additional high-resource languages that the base model is relatively good at, French and Portuguese, to examine the generalization of MoE-LPR when preventing catastrophic forgetting. We name them out-of-domain original languages because the review stage training does not contain tokens in these two languages. Table <ref> demonstrates that MoE-LPR successfully generalizes its catastrophic forgetting prevention effect to these languages. Despite the router not being trained on French and Portuguese tokens, our LPR mechanism minimizes the performance gap from the base model for these languages, outperforming other post-pretraining methods. This demonstrates MoE-LPR's excellent language generalization in preventing catastrophic forgetting. We also try to move the LPR loss and the small amount of replay data to the post-pretraining stage. 
As shown in Table <ref>, MoE-LPR One-Stage shows comparable performance to the two-stage strategy. However, it demonstrates worse language generalization, showing a 1.54-point performance drop in the out-of-domain original languages. Therefore, we choose the two-stage strategy as the better proposal.

§ RELATED WORK
§.§ Mixture of Experts
Recent studies <cit.> have shown a strong correlation between the number of parameters in a model and its capabilities. When the number of parameters is large, the model demonstrates emergent abilities <cit.>. Traditional dense models require the activation of all parameters for a given input, significantly increasing computational overhead. Distinct from conventional dense models, Mixture of Experts (MoE) achieves computational feasibility and expanded model capacity by utilizing a router that selectively activates a limited number of experts for each input. Several works, such as Switch Transformer <cit.>, ST-MoE <cit.>, and GLaM <cit.>, attempt to train an MoE model from scratch. These works have demonstrated that MoE models can achieve significantly lower loss and performance gains compared to dense models with the same activated parameters, and require less energy consumption compared to dense models with the same total parameters. However, considering the huge computational budget, <cit.> indicates that a sparse MoE model could be initialized from dense models. In the era of LLMs, numerous MoE works have been developed. For instance, Mixtral <cit.> adds experts to each layer, increasing the total parameter count to 141B. DeepSeek <cit.> utilizes shared experts, enabling the model to select experts more effectively. Snowflake Arctic <cit.> incorporates many fine-grained experts, enhancing the diversity of expert selection. <cit.> combines MoE with LoRA, resulting in more effective training and alleviating data conflict issues. The most relevant work to ours is Lifelong-MoE <cit.>, which effectively expands the number of experts during lifelong learning and introduces a regularization to avoid catastrophic forgetting. However, we employ a different freezing method and a two-stage training framework, significantly alleviating catastrophic forgetting and gaining a promising performance in expanded languages.

§.§ LLM for Multilingual
Post-pretraining on a massive multilingual corpus is an effective way to improve the multilingual abilities of LLMs. <cit.> and <cit.> highlight monolingual data's importance in post-pretraining. Notably, <cit.> demonstrates that with fixed computational resources, allocating more to monolingual data rather than translation data better improves a model's translation performance, allowing large models to achieve translation abilities comparable to traditional supervised models such as NLLB <cit.>. <cit.> have explored using Branch-Then-Merge (BTM; <cit.>), where separate models are trained independently for different languages and then merged, partially overcoming the challenges of the multilingual curse <cit.>. <cit.> employs the LoRA <cit.> architecture to help migrate a chat LLM to the target language while preserving its chat capabilities.

§ CONCLUSION
In this paper, we propose MoE-LPR, a scalable post-pretraining method that effectively expands languages and prevents catastrophic forgetting using the Mixture-of-Experts architecture. Expanding to new languages often encounters severe catastrophic forgetting due to significant distribution changes, and the challenge lies in balancing old and new languages. 
Through two-stage training, MoE-LPR addresses this with efficient parameter assignment and balanced routing. The post-pretraining stage gives the model a strong enough learning ability and steadily enhances the capabilities of the expanded languages. The review stage brings a performance boost to the original languages without harming the performance in expanded languages. Our two-stage training thus achieves both good expansion and effective prevention of forgetting. Additionally, MoE-LPR shows better scalability and generalization than SOTA methods. Overall, MoE-LPR is an effective and scalable approach for expanding to new languages during the post-pretraining stage.

§ A
Detailed experiment results for each benchmark and language are shown in Table <ref> and Table <ref>. The performance of the best configuration for each method is presented. The results demonstrate that MoE-LPR outperforms the comparison methods on most benchmarks, especially HellaSwag and Belebele. The results of the scaling experiment are detailed in Table <ref>. With the addition of more new experts, the performance of MoE-LPR in expanded languages continues to increase while demonstrating stable effects in preventing catastrophic forgetting of original languages. Meanwhile, the inference overhead per token of MoE-LPR remains unchanged, showing more powerful scalability than other methods.

§ B
Hyper-parameters for different baselines.

§.§ Full Fine-tuning of dense and MoE
In our experiments, the hyperparameters for full fine-tuning remain the same as those in MoE-LPR, except for the learning rate. Our experimental results demonstrate that full fine-tuning requires a smaller learning rate. As shown in Table <ref>, we experimented with a series of learning rates and determined that 5e-6 is the best configuration for full fine-tuning of the dense model. Meanwhile, 5e-6 is also the best configuration for full fine-tuning of MoE. When using a larger learning rate, the final checkpoint performs at chance level on several multiple-choice benchmarks, indicating that a drastic parameter distribution shift has occurred, causing the model to lose basic capabilities.

§.§ LoRA
We conducted experiments for LoRA with ranks of 8 and 64, with the aim of achieving excellent catastrophic forgetting prevention and expansion effects. The results in Table <ref> demonstrate that a rank of 64 does show better performance in expanded languages compared to a rank of 8, but it still underperforms full fine-tuning. Besides, LoRA with a rank of 64 encounters a serious catastrophic forgetting phenomenon. The results imply that continuing to increase LoRA's rank will hardly make its expanded language performance exceed that of full fine-tuning, and will produce more serious catastrophic forgetting. Therefore, in our paper, we report the performance of rank 8.

§.§ LLaMA-Pro & LoRAMoE
For LLaMA-Pro, training details remain the same as in MoE-LPR, except for a learning rate of 2e-4 as referenced in the original paper. For LoRAMoE, the learning rate is 2e-4, the LoRA alpha is 32, and the dropout rate is 0.05. We set the number of LoRAs to 8, a blc-alpha of 0.1, and a blc-weight of 0.1. In our experiments, we attempted to increase the inference overhead of LoRAMoE to match that of MoE-LPR. When increasing the number of LoRAs, the training costs become too high since LoRAMoE activates all the LoRAs for each token, resulting in more FLOPs within the MoE architecture. Therefore, we achieve this by increasing the LoRA rank instead.

§ C
The details of our evaluation benchmarks. 
For Chinese, Spanish, Hungarian, French, and Portuguese, we use the translated versions of ARC-Challenge, MMLU, and HellaSwag following Okapi <cit.>, except for CMMLU <cit.> in Chinese. For Turkish, we follow <cit.> to evaluate these three benchmarks. For Greek, we use the translated versions provided by ILSP [<https://huggingface.co/ilsp>]. The evaluation is performed through the EleutherAI Language Model Evaluation Harness [<https://github.com/EleutherAI/lm-evaluation-harness>], except for FLORES-101, which is evaluated through a script written by ourselves.

§ D
Figure <ref> demonstrates the router scores of the frozen expert for Hungarian (expanded language) tokens. The review stage does not significantly alter the expanded languages, ensuring that their performance remains stable. Among the side effects during this phase, the LPR loss was the most prominent. However, this did not significantly affect the abilities of the expanded languages. Furthermore, the LPR loss effectively prevents catastrophic forgetting and enhances generalization.
http://arxiv.org/abs/2408.11991v1
20240821210159
Capturing anharmonic effects in single vibronic level fluorescence spectra using local harmonic Hagedorn wavepacket dynamics
[ "Zhan Tong Zhang", "Máté Visegrádi", "Jiří J. L. Vaníček" ]
physics.chem-ph
[ "physics.chem-ph", "quant-ph" ]
Laboratory of Theoretical Physical Chemistry, Institut des Sciences et Ingénierie Chimiques, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland
jiri.vanicek@epfl.ch

§ ABSTRACT
Hagedorn wavepacket dynamics yields exact single vibronic level (SVL) fluorescence spectra from any initial vibrational level in displaced, squeezed, and Duschinsky-rotated global harmonic models. Real molecules, however, have anharmonic potential energy surfaces. To partially describe the effects of anharmonicity on the spectra, we combine the Hagedorn approach to spectroscopy with the local harmonic approximation of the potential. We compute the SVL spectra for several anharmonic Morse-type potentials in one, two, and twenty dimensions and compare them to the results of global harmonic approximations and, where possible, of exact quantum calculations. We show that the local harmonic approach yields more accurate results than global harmonic approximations, especially for the emission spectra from higher initial vibrational levels.

Capturing anharmonic effects in single vibronic level fluorescence spectra using local harmonic Hagedorn wavepacket dynamics
Zhan Tong Zhang, Máté Visegrádi, and Jiří J. L. Vaníček
August 26, 2024
============================================================================================================================

§ INTRODUCTION
Single vibronic level (SVL) fluorescence spectroscopy has been used to study intramolecular relaxation,<cit.> characterize molecular vibronic structure,<cit.> and identify conformers and reactive intermediates.<cit.> Recently, several time-dependent approaches to simulate SVL spectra in global harmonic models have been developed.<cit.> For example, Tapavicza<cit.> used a generating function formalism to compute the SVL spectra of anthracene from singly excited levels using a harmonic model determined from accurate electronic structure calculations. Motivated by Tapavicza's work, we proposed a method based on Hagedorn wavepacket dynamics,<cit.> which enabled us to compute the SVL spectra from any initial vibrational levels<cit.> and to simulate the SVL spectra of higher vibrational levels of anthracene within the harmonic approximation.<cit.> To accurately compute vibrationally resolved spectra of polyatomic molecules, however, it is often necessary to account for the anharmonicity of the molecular potential energy surfaces (PESs).<cit.> Conventional time-independent methods that evaluate Franck–Condon factors for each transition can include anharmonic effects by using the variational principle<cit.> or perturbation theory.<cit.> In higher-dimensional systems with Duschinsky coupling, the determination of anharmonic wavefunctions and the computation of a large number of Franck–Condon overlaps become very costly or even infeasible. Even when feasible, computing all Franck–Condon factors is wasteful because individual peaks are not visible in low- or intermediate-resolution spectra. 
Alternatively, in the time-dependent framework, the spectrum is computed as the Fourier transform of an appropriate autocorrelation function obtained by propagating a wavepacket on the final electronic PES.<cit.> Semiclassical trajectory-based methods can capture the effects of an anharmonic PES more efficiently using an on-the-fly implementation that relies only on the local PES information. In the thawed Gaussian approximation<cit.> (TGA), a Gaussian wavepacket is propagated using the local harmonic approximation (LHA), which expands the true potential locally to the second order at each time step. The center of the wavepacket then follows the anharmonic classical trajectory, while the width evolves according to the local Hessian matrix. Whereas the TGA uses a single Gaussian wavepacket to simulate the emission or absorption process from the ground vibrational level,<cit.> our approach,<cit.> based on Hagedorn wavepackets,<cit.> uses functions that are polynomials multiplied by a Gaussian to compute SVL spectra from any vibrationally excited levels. These functions result from applying a special raising operator to a Gaussian wavepacket. Remarkably, the Hagedorn functions are not only exact solutions of the time-dependent Schrödinger equation (TDSE) in global harmonic models, but they remain so even with a state-dependent quadratic potential. This property enables an efficient combination of Hagedorn wavepacket dynamics with the local harmonic approximation in order to partially account for anharmonic effects in SVL spectra. Here, we first apply the local harmonic Hagedorn wavepacket approach to simulate SVL spectra from different initial vibrational levels in a one-dimensional Morse potential and in a two-dimensional coupled Morse system. Compared to the quantum split-operator results, the local harmonic spectra improve on both vertical and adiabatic harmonic models; however, significant differences from the exact results still exist for spectra with high initial excitations. In a twenty-dimensional coupled Morse system, we demonstrate the practicality of the Hagedorn approach in higher dimensions and the importance of anharmonic effects by comparing the local and global harmonic results.

§ THEORY
In single vibronic level fluorescence spectroscopy, the molecular population in the ground electronic state g is first excited to a specific vibrational level |K⟩≡ |e, K⟩ in an electronically excited state e, where K=(K_1,…,K_D) is a multi-index of non-negative integers specifying the vibrational quantum numbers in the D vibrational degrees of freedom. The rate of subsequent spontaneous emission from this vibronic level with energy ħω_e,K is given by the Fourier transform<cit.>

σ_em(ω) = (4ω^3/3πħ c^3) |μ_ge|^2 Re ∫_0^∞ C(t) exp[it(ω - ω_e,K)] dt

of the wavepacket autocorrelation function

C(t) = ⟨ K | exp(-i H_g t/ħ) | K ⟩,

obtained by propagating the initial wavepacket with the ground-state Hamiltonian H_g. We make the Condon approximation, in which the transition dipole moment μ_ge does not depend on nuclear coordinates. 
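In practice, once C(t) has been sampled along the propagation, evaluating the emission rate reduces to a damped half-Fourier transform of the expression above. The following is a minimal post-processing sketch of that step, not taken from the paper: it assumes atomic units (ħ = 1), drops constant prefactors such as |μ_ge|^2, stores C(t) as real/imaginary pairs on a uniform time grid, and uses a Gaussian damping function in place of the finite spectral resolution; the toy autocorrelation in main() is a single oscillating exponential rather than a genuine Hagedorn-propagated C(t).

```rust
// Post-processing sketch: turn a sampled autocorrelation C(t) into an emission
// lineshape by a discretized, damped half-Fourier transform of Eq. (<ref>).

fn emission_spectrum(
    c: &[(f64, f64)], // C(t_k) for t_k = k * dt, stored as (Re, Im)
    dt: f64,          // time step of the sampling
    omega_ek: f64,    // frequency omega_{e,K} of the initial vibronic level
    tau: f64,         // damping time that sets the line broadening
    omegas: &[f64],   // emission frequencies at which sigma_em is evaluated
) -> Vec<f64> {
    omegas
        .iter()
        .map(|&w| {
            let mut integral = 0.0;
            for (k, &(re, im)) in c.iter().enumerate() {
                let t = k as f64 * dt;
                let phase = (w - omega_ek) * t;
                let damp = (-(t / tau).powi(2)).exp();
                // Re[ C(t) * exp(i * phase) ] = Re * cos(phase) - Im * sin(phase)
                integral += damp * (re * phase.cos() - im * phase.sin()) * dt;
            }
            w.powi(3) * integral // omega^3 prefactor; remaining constants omitted
        })
        .collect()
}

fn main() {
    // Toy input: C(t) of a single stationary level, C(t) = exp(-i * 0.002 * t),
    // sampled with the same time step and total time as in the numerical examples.
    let dt = 4.0;
    let c: Vec<(f64, f64)> = (0..20_000)
        .map(|k| {
            let t = k as f64 * dt;
            ((0.002 * t).cos(), -((0.002 * t).sin()))
        })
        .collect();
    let omegas: Vec<f64> = (0..400).map(|j| 0.005 + 1.0e-5 * j as f64).collect();
    let sigma = emission_spectrum(&c, dt, 0.006, 8000.0, &omegas);

    // Report the frequency with the largest intensity.
    let (j_max, _) = sigma
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();
    println!("peak at omega = {:.5} a.u.", omegas[j_max]);
}
```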
Assuming the harmonic approximation for the excited-state surface V_e, we can represent the initial vibrational wavepacket |K⟩ by a Hagedorn function<cit.>

φ_K = (K!)^-1/2 (A^†)^K φ_0,

obtained by applying Hagedorn's raising operator<cit.>

A^† := i/√(2ħ) (P_t^†· (q̂ - q_t) - Q_t^†· (p̂ - p_t))

to the D-dimensional normalized Gaussian, parametrized in the excited-state normal-mode coordinates as

φ_0(q) = 1/[(πħ)^D/4 √(det Q_t)] × exp{ i/ħ [ 1/2 x^T· P_t· Q_t^-1· x + p_t^T· x + S_t ] }.

Here, x := q - q_t is the shifted position, q_t and p_t are the position and momentum of the center of the wavepacket in the phase space, and S_t is the classical action. Compared to Heller's parametrization,<cit.> the width matrix C_t ≡ P_t· Q_t^-1 of the Gaussian is factorized in terms of two complex-valued D-dimensional matrices Q_t and P_t, which satisfy the symplecticity conditions<cit.> Q_t^T· P_t - P_t^T· Q_t = 0 and Q_t^†· P_t - P_t^†· Q_t = 2 i Id, where Id is the D-dimensional identity matrix. In Eq. (<ref>), we used the multi-index notation K! = K_1!· K_2! ⋯ K_D! and (A^†)^K = (A^†_1)^K_1· (A^†_2)^K_2 ⋯ (A^†_D)^K_D, where A^†_j is the j-th component of the vector operator A^†. In the position representation, Hagedorn functions (<ref>) are given by polynomials multiplying a common Gaussian (<ref>) and form an orthonormal basis. Instead of using them as a time-dependent basis as in previous applications,<cit.> here we exploit the fact that they are exact solutions to the TDSE with a time-dependent harmonic potential that may depend on the state of the system. To partially capture anharmonic effects while maintaining the simplicity of the propagation, we use the local harmonic approximation (LHA), where the true potential V is expanded as an effective quadratic potential

V_LHA(q; q_t) := V(q_t) + V'(q_t)· x + x^T· V''(q_t)· x/2

around the center q_t of the wavepacket at each time. Solving the TDSE with the Hagedorn functions and the LHA yields the following equations of motion for the parameters of the guiding Gaussian φ_0 from Eq. (<ref>):

q̇_t = m^-1· p_t,  ṗ_t = -V'(q_t),
Q̇_t = m^-1· P_t,  Ṗ_t = -V''(q_t)· Q_t,
Ṡ_t = L_t,

with m being the real symmetric mass matrix and L_t the Lagrangian.<cit.> The center of the wavepacket is guided by the classical trajectory and can explore the anharmonic shape of the PES. The local quadratic expansion takes into account the changes of the Hessian V''(q_t) along the anharmonic trajectory while ensuring that the form (i.e., the multi-index K) of the Hagedorn function remains constant. Remarkably, the classical-like equations (<ref>) are exactly the equations of motion of the thawed Gaussian approximation and do not depend on the initial excitation K. As such, the SVL spectra from any vibrational levels may be obtained from a single Gaussian trajectory, and in ab initio applications, the expensive on-the-fly electronic structure calculations do not have to be repeated for different K. To evaluate the autocorrelation function (<ref>), it is necessary to compute overlaps between two arbitrary Hagedorn functions. Unlike for overlaps between two Gaussians, we have not found a simple, closed-form expression for overlaps of Hagedorn functions. Instead, we evaluate these overlaps with the exact, recursive algebraic expressions derived in Ref. 
Vanicek_Zhang:2024, and thus avoid errors and difficulties arising from the use of numerical quadratures.<cit.>

§ NUMERICAL EXAMPLES
In the following, we use mass-scaled nuclear normal-mode coordinates and atomic units (ħ = 1).

§.§ One-dimensional Morse potential
Using the Morse oscillator as the first example of a realistic anharmonic potential, we assume that the final, ground electronic surface V_g is given by

V_g(q) = ω_g/(4χ) [1 - e^-√(2mω_gχ)(q - q_eq,g)]^2,

where q_eq,g = 15 is the equilibrium position, ω_g = 0.0041 is the frequency of the harmonic oscillator fitted at q_eq,g, and χ = 0.002 is a dimensionless anharmonicity parameter. The initial wavepacket |K⟩ is assumed to be an eigenfunction of a harmonic potential V_e that has a fundamental frequency of ω_e = 0.00456 and is centered at the excited-state equilibrium position q_eq,e = 0. To quantify the anharmonic effects, two global harmonic models were constructed for V_g. In the vertical harmonic approximation (VHA), the effective potential was constructed around the Franck–Condon point as

V_VHA = V_g(q_eq,e) + V'_g(q_eq,e)· (q - q_eq,e) + (q - q_eq,e)^T· V''_g(q_eq,e)· (q - q_eq,e)/2,

whereas in the adiabatic harmonic approximation (AHA), the potential was expanded around the ground-state equilibrium position q_eq,g as

V_AHA = V_g(q_eq,g) + (q - q_eq,g)^T· V''_g(q_eq,g)· (q - q_eq,g)/2,

where the gradient V'_g(q_eq,g) vanishes at the equilibrium configuration. Figure <ref> compares the SVL spectra (from 1^a levels, where a is the vibrational quantum number of the initial state) computed using the Hagedorn wavepacket approach combined with the local, adiabatic, and vertical harmonic approximations. These spectra are compared with exact quantum results to assess the accuracy of the various approximations of the anharmonic Morse potential. The split-operator quantum calculations were performed on a position grid with 2048 equidistant points between -128 and 356. The differences between the exact and “Hagedorn” spectra are shown in Fig. <ref>. In all simulations, the propagation lasted 80000 a.u. (20000 steps with a time step of 4 a.u.) and the correlation function (<ref>) was computed every five steps. Whereas the quantum calculation required a separate propagation of the wavepacket for each excitation level, in the Hagedorn approach it is sufficient to propagate only the ground-level Gaussian wavepacket to generate spectra for all excitation levels. The spectra were broadened by a Gaussian function with a half-width at half-maximum of 100 cm^-1. Intensities were scaled to the highest peak of the 1^0 quantum spectrum, and the wavenumbers were shifted so that the transition to the ground level (1^a_0) on the ground-state surface was at 0 cm^-1. For the ground-level emission spectra from 1^0 (first column in Figs. <ref> and <ref>), both the adiabatic and local harmonic models show excellent agreement with the quantum results. The vertical model also performs reasonably well; however, the spacing between the peaks differs slightly from the exact spectrum, which could be corrected with an empirical scaling factor. In the 1^1 and 1^2 spectra (second and third columns), the local and adiabatic harmonic models still perform well, with the LHA spectra capturing slightly better the small peaks at lower wavenumbers. As the initial excitation increases, the vertical harmonic model breaks down further, and significant differences from the exact spectra become apparent both in the intensities and in the peak positions. 
The adiabatic and local harmonic models still provide reasonable results, but they too start to deviate from the quantum benchmarks. This is expected, since the initial states with higher vibrational excitations are more delocalized and feel the anharmonic shape of the potential more. We also observe several small, unphysical negative peaks in the LHA spectra due to the nonconservation of energy and the nonlinearity of the TDSE in the local harmonic dynamics.<cit.> Nonetheless, compared to the results of global harmonic models, the local harmonic results generally differ less from the exact spectra, especially in the lower frequency region (see Fig. <ref>). This region corresponds to transitions to higher vibrational levels, which are more significantly affected by the anharmonicity of the ground-state PES. In contrast, the adiabatic harmonic model tends to perform slightly better in the higher frequency range for transitions to vibrational levels closer to the harmonic region.

§.§ Two-dimensional coupled Morse potential
In one-dimensional systems, Hagedorn functions reduce to Hermite polynomials multiplied by a Gaussian;<cit.> the beauty of the Hagedorn wavepackets emerges in higher dimensions, where they are not simple products of one-dimensional Hermite functions.<cit.> Remarkably, they still preserve their form and remain exact solutions to the TDSE with a global or local harmonic potential, even in the presence of Duschinsky rotation (mode mixing). To validate the local harmonic Hagedorn dynamics in a multidimensional anharmonic system, we used the “coupled Morse potential.”<cit.> This nonseparable generalization of the Morse potential to higher dimensions is given by

V_g(q) = ∑_j=1^D V_j(q_j) + V_cpl(q).

In each vibrational degree of freedom j, there is a Morse term V_j in the form of Eq. (<ref>) specified by parameters ω_g,j, χ_j, and q_eq,g,j. A D-dimensional coupling term,

V_cpl(q) = d'(1 - e^-a^T· (q - q_eq,g))^2,

introduces nonseparability and depends on the dissociation energy d' and a vector a ≡ (a_1,⋯,a_D) of decay parameters, which are related to a dimensionless anharmonicity vector χ' ≡ (χ'_1,⋯,χ'_D) via a = √(8d') χ'. A two-dimensional Morse system with ω_g = (0.0041, 0.005), χ = (0.005, 0.002), q_eq,g = (20, 5), d' = 0.08, and χ' = (0.001, 0.001) was chosen as the simplest multidimensional example. The harmonic excited-state PES is centered at q_eq,e = (0,0) and its Hessian corresponds to fundamental frequencies ω_e = (0.00456, 0.00365). We used the local harmonic Hagedorn dynamics to compute the emission spectra from levels 1^02^0, 1^02^2, 1^12^1, and 1^32^1. Exact quantum split-operator calculations (with 256×256 position grid points ranging from -128 to 356 in each dimension) as well as simulations using the adiabatic and vertical harmonic approximations [Eqs. (<ref>) and (<ref>)] were performed for comparison. The time step, total propagation time, and frequency of evaluating the autocorrelation function were the same as in the one-dimensional Morse model. The simulated spectra were broadened with a Gaussian function with a half-width at half-maximum of 100 cm^-1. The intensities in the spectra were scaled by the intensity of the highest peak in the exact 1^0 2^0 spectrum, and the wavenumbers were shifted so that the transition to the ground level in the ground state was at 0 cm^-1. Figure <ref> compares the spectra computed using the different approximations with the exact quantum spectra, and the differences are shown in Fig. <ref>. 
In this two-dimensional system, the spectra already become quite complex, but the behavior of the different approximations mirrors that of the one-dimensional case. Both local and adiabatic harmonic results agree very closely with the exact ground-level emission spectrum (1^0 2^0), whereas the vertical harmonic spectrum reproduces the intensities relatively well but has wrong peak spacing. The effect of anharmonicity becomes more pronounced for higher initial vibrational excitations, leading to increased deviations of both local and global harmonic results from the exact spectra. At the same level of initial excitation in a single mode, the global harmonic models perform better when mode 2 is excited than when mode 1 is excited (compare the second and third columns of Figs. <ref> and <ref> for 1^0 2^2 and 1^2 2^0 spectra). This is consistent with the fact that mode 1 is more “anharmonic" than mode 2 based on our parametrization (χ_1 > χ_2). In general, the vertical harmonic model performs worse than the adiabatic harmonic model and the local harmonic approximation, particularly in terms of peak positions. The spectra from the adiabatic model capture the high frequency peaks in the more harmonic regions more accurately, whereas the local harmonic dynamics reproduces better the peaks in the more anharmonic low-frequency tail region. For example, in the 1^32^1 spectra, the adiabatic model agrees well with the quantum result for the peaks in the region >-5000 cm^-1 but the LHA is better at lower wavenumbers. §.§ Twenty-dimensional coupled Morse potential Since the propagation (<ref>) of Hagedorn wavepackets with a local harmonic potential requires only propagating the five parameters of the Gaussian, the same method can be used in high-dimensional systems where exact quantum calculations are no longer practical. We therefore present a twenty-dimensional coupled Morse system as an example where split-operator benchmarks are not feasible due to the exponential scaling of the required numerical grid with D. We chose to excite at most two vibrational modes (modes 1 and 2) in the excited electronic state and computed SVL spectra from levels 1^02^0, 1^02^3, 1^12^0, 1^12^1, and 1^22^1. The parameters of the simulation are given in the supplementary material. The SVL spectra obtained from the local harmonic dynamics are compared to the adiabatic and vertical harmonic spectra in Fig. <ref>. As in the lower-dimensional examples, the local harmonic results are closer to those of the adiabatic harmonic model than to the vertical one. Although the exact quantum benchmarks are not available, we can still observe significant alterations in the spectra due to the partial inclusion of anharmonicity through the local harmonic dynamics. § CONCLUSIONS We have demonstrated that the local harmonic Hagedorn wavepacket dynamics can capture anharmonic effects in SVL fluorescence spectra of moderately anharmonic and even nonseparable systems with relatively low vibrational excitations in one or multiple modes. As the initial vibrational excitation increases, the delocalization of excited vibrational wavepackets increases the effects of anharmonicity on the spectra. Although the differences between the exact and the local harmonic spectra become more pronounced for higher initial excitations, the LHA offers notable improvements over vertical and adiabatic harmonic approximations. 
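The computational economy behind these high-dimensional simulations can be made concrete with a sketch of a single local-harmonic propagation step for the Gaussian parameters, written here as (q, p, Q, P, S). The splitting below is a textbook scheme for mass-scaled coordinates with ħ = 1; it is meant only as an illustration and may differ in detail from the propagation scheme referenced above:

import numpy as np

def lha_step(q, p, Q, P, S, v, grad_v, hess_v, dt):
    """One local-harmonic propagation step for the Gaussian parameters (q, p, Q, P, S),
    using a symmetric kinetic-potential-kinetic splitting (mass-scaled coordinates, hbar = 1).
    Illustrative only; the propagator used in this work may differ in detail."""
    # half kinetic step
    q = q + 0.5 * dt * p
    Q = Q + 0.5 * dt * P
    S = S + 0.25 * dt * np.dot(p, p)
    # full potential step, with the potential expanded to second order at the current center q
    p = p - dt * grad_v(q)
    P = P - dt * hess_v(q) @ Q
    S = S - dt * v(q)
    # half kinetic step
    q = q + 0.5 * dt * p
    Q = Q + 0.5 * dt * P
    S = S + 0.25 * dt * np.dot(p, p)
    return q, p, Q, P, S

# For a D-dimensional harmonic initial state with frequencies omega_e, one common choice is
#   Q0 = np.diag(1.0 / np.sqrt(omega_e)).astype(complex);  P0 = 1j * np.diag(np.sqrt(omega_e)),
# after which only these few arrays, rather than a D-dimensional grid, are propagated in time.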
The Hagedorn wavepacket dynamics, combined with the global or local harmonic approximation, has the advantage of obtaining SVL spectra from all initial vibrational levels using a single common Gaussian wavepacket trajectory, thus avoiding the need to repeat expensive electronic structure calculations for different initial excitations. The local harmonic Hagedorn approach can be particularly valuable for simulating the SVL spectra of relatively small and flexible molecules, where anharmonicity is expected to play a greater role.<cit.> To further increase the efficiency of the local harmonic Hagedorn approach, techniques like Hessian interpolation<cit.> or the single Hessian approximation<cit.> could be employed, similar to their use in simulating conventional absorption and emission spectra from ground vibrational levels with the thawed Gaussian approximation. Finally, in cases where the exact spectra are completely out of reach, the local harmonic results can serve as a check of the adequacy of harmonic models.<cit.> § SUPPLEMENTARY MATERIAL The supplementary material contains the parameters of the twenty-dimensional system from Section <ref>. The authors acknowledge the financial support from the EPFL. § AUTHOR DECLARATIONS §.§ Conflict of Interest The authors have no conflicts to disclose. § DATA AVAILABILITY The data that support the findings of this study are available within the article and its supplementary material.
http://arxiv.org/abs/2408.12088v1
20240822025452
Mental-Perceiver: Audio-Textual Multimodal Learning for Mental Health Assessment
[ "Jinghui Qin", "Changsong Liu", "Tianchi Tang", "Dahuang Liu", "Minghao Wang", "Qianying Huang", "Yang Xu", "Rumin Zhang" ]
cs.CY
[ "cs.CY" ]
Mental-Perceiver: Audio-Textual Multimodal Learning for Mental Health Assessment Jinghui Qin, Changsong Liu, Tianchi Tang, Dahuang Liu, Minghao Wang, Qianying Huang, Yang Xu, Rumin Zhang ============================================================= § ABSTRACT Mental disorders, such as anxiety and depression, have become a global issue that affects the regular lives of people across different ages. Without proper detection and treatment, anxiety and depression can hinder the sufferer's study, work, and daily life. Fortunately, recent advancements of digital and AI technologies provide new opportunities for better mental health care and many efforts have been made in developing automatic anxiety and depression assessment techniques. However, this field still lacks a publicly available large-scale dataset that can facilitate the development and evaluation of AI-based techniques. To address this limitation, we have constructed a new large-scale Multi-Modal Psychological assessment corpus (MMPsy) on anxiety and depression assessment of Mandarin-speaking adolescents. The MMPsy contains audios and extracted transcripts of responses from automated anxiety or depression assessment interviews along with the self-reported anxiety or depression evaluations of the participants using standard mental health assessment questionnaires. Our dataset contains over 7,700 post-processed recordings of interviews for anxiety assessment and over 4,200 recordings for depression assessment. Using this dataset, we have developed a novel deep-learning based mental disorder estimation model, named Mental-Perceiver, to detect anxious/depressive mental states from recorded audio and transcript data. Extensive experiments on our MMPsy and the commonly-used DAIC-WOZ datasets have shown the effectiveness and superiority of our proposed Mental-Perceiver model in anxiety and depression detection. The MMPsy dataset will be made publicly available later to facilitate the research and development of AI-based techniques in the mental health care field. § INTRODUCTION Anxiety and depression are two common mental health disorders that significantly impact patients, characterized by a cluster of similar signs and symptoms. Individuals suffering from anxiety or depression often experience a persistent state of distress, excessive fear and worry, lack of interest in everyday activities, and diminished energy levels. Beyond emotional disturbances, people with anxiety or depression also exhibit various physiological symptoms, such as weight loss, appetite changes, insomnia, and menopause in female patients <cit.>. Due to these psychological and physiological impacts, anxiety and depression can hinder their sufferers' study, work, and daily life. These mental disorders have become a thorny issue in our modern society. According to a 2019 report by the World Health Organization (WHO) <cit.>, over 301 million people, including 58 million children and adolescents, suffer from anxiety disorders, while more than 280 million people, including 23 million children and adolescents, live with depression. Despite the widespread prevalence, these disorders are frequently under- or mis-diagnosed and the treatment rates remain low <cit.>. This is largely due to the time-consuming and costly nature of diagnosis and treatment. Additionally, patients, particularly children and adolescents, may conceal their true mental state out of fear of stigma during mental health assessments such as self-report screenings or clinical interviews, leading to misdiagnosis.
Fortunately, recent advancements of digital and AI technologies provide new opportunities for better mental health care and many efforts have been made in developing automatic anxiety and depression assessment techniques <cit.>. Given the pioneering explorations and achievements in this field, however, several significant limitations hinder current research efforts. Firstly, many existing methods rely heavily on the expertise of psychologists during interviews for data collection, which incurs substantial costs and makes it difficult to construct large-scale annotated datasets. As a result, the size of current datasets related to anxiety and depression detection, such as DAIC-WoZ <cit.> and AViD-Corpus <cit.>, is limited, typically involving only around 100 participants. Although Shen et al. <cit.> made efforts to develop the EATD-Corpus, an emotional audio-textual depression detection dataset utilizing the Self-Rating Depression Scale (SDS), their dataset still includes only 162 volunteers from a University. Besides, the effectiveness of the SDS questionnaires completed by these volunteers was not validated, casting doubt on the reliability of the results. Consequently, constructing larger datasets at a lower cost remains a significant challenge for enhancing automatic anxiety and depression detection systems. In this work, we address key challenges in the study of automatic anxiety and depression detection, aiming to extend research into more realistic settings. First, we introduce a novel large-scale Multi-Modal Psychological assessment corpus (MMPsy) focused on anxiety and depression in Mandarin-speaking adolescents. The MMPsy corpus includes audio recordings and corresponding transcripts of responses from both anxious/depressed and non-anxious/non-depressed adolescent volunteers, collected using a human-computer interaction system. Self-reporting anxiety and depression rating scales were also collected and verified using the same system, which can be used as gold standard in training and evaluating machine-learning based techniques. After rigorous data preprocessing and cleaning, the corpus comprises 7,758 interview data points for anxiety detection and 4,266 for depression detection. To the best of our knowledge, MMPsy is the first publicly available adolescent psychological assessment corpus that simultaneously detects anxiety and depression while providing both audio and text data in Chinese. In addition, we propose a novel mental disorder estimation network, termed Mental-Perceiver, to automatically detect anxious or depressive mental states based on users' audio inputs and corresponding transcripts. The Mental-Perceiver leverages attention mechanisms to map multimodal inputs and category semantic priors to a fixed-size multimodal feature, using a learnable, sharable embedding for effective multimodal fusion. It then applies a deep, fully attentional network to process the fused multimodal feature for in-depth multimodal feature extraction. Finally, the network decodes the extracted feature using a learnable query array to produce the final mental state estimation. Extensive experiments conducted on the MMPsy corpus and the public DAIC-WOZ dataset demonstrate the effectiveness and superiority of our proposed Mental-Perceiver model. Our contributions can be summarized as follows: * We introduce MMPsy, a large-scale multimodal psychological assessment corpus containing 7,758 cleaned interview data for anxiety detection and 4,266 for depression detection. 
* We develop Mental-Perceiver, a novel mental disorder estimation network that automatically detects anxious or depressive mental states from audio and transcript data, utilizing a fully attentional network architecture for effective multimodal fusion. * Extensive experimental results on MMPsy and the public DAIC-WOZ dataset validate the effectiveness and superiority of our proposed Mental-Perceiver model in automatic mental disorder detection. § RELATED WORK §.§ Automatic Anxiety/Depression Detection Early studies of automatic anxiety/depression detection were concentrated on extracting effective features from the responses to questions that were highly correlated with anxiety/depression. Williamson et al. <cit.> used related semantic context cues entailed in the voice, video-based facial action units, and transcribed text of individuals and built a Gaussian Staircase Model to detect depression automatically. Yang et al. <cit.> selected depression-related questions after analyzing interview transcripts manually and constructed a decision tree with the selected questions to predict the participants’ depression states. Similarly, Sun et al. <cit.> first manually selected questions related to certain topics such as recent feelings and sleep quality by conducting content analysis based on the text transcripts of clinical interviews. Then, they extracted text features from these selected questions as the input of Random Forest to detect depression tendencies. Gong et al. <cit.> performed topic modeling to split the interviews into topic-related segments. Then, they maintained the most discriminating features by a feature selection algorithm. Giannakakis et al. <cit.> focused on non-voluntary and semi-voluntary facial cues to estimate the emotion representation more objectively. They selected the most robust features from the features including eye-related events, mouth activity, head motion parameters, and heart rate estimated through camera-based photoplethysmography. Finally, they deployed a ranking transformation to investigate the correlation of facial parameters with a participant's perceived amount of stress/anxiety. With the advancement of deep learning, extracting and integrating multi-modal features through deep learning models have become particularly promising for anxiety/depression detection. Ma et al. <cit.> encoded the depressive audio characteristics with the combination of CNN and LSTM to predict the presence of depression. Yang et al. <cit.> trained a deep Convolution Neural Network (CNN)-based depression detection model with specially-designed a set of audio and video descriptors. Tuka et al. <cit.> selected audio features and text features that were strongly related to depression severity by computing Pearson Coefficients and built a Long Short-Term Memory (LSTM) network to assess depression tendency. Haque et al. <cit.> proposed a causal CNN model to summarize acoustic, visual, and linguistic features into embeddings which were then used to predict depressive states. Shen et al. <cit.> proposed a depression detection approach utilizing speech characteristics and linguistic contents from participants’ interviews. Lin et al. <cit.> used the biological information of speech, combined with deep learning to build a rapid binary classification model of depression in the elderly. Agarwal <cit.> developed machine-learning solutions to diagnose anxiety disorders from audio journals of patients. 
§.§ Anxiety/Depression Detection Datasets The public anxiety/depression datasets are quite scarce <cit.> due to ethical issues. To the best of our knowledge, there are only three publicly available datasets  <cit.> referring to depression detection while there are no publicly available multi-modal datasets including audio, facial, or text referring to anxiety detection. The DAIC-WoZ dataset <cit.> contains recordings and transcripts of 142 American participants who were clinically interviewed by a computer agent automatically. AVid-Corpus <cit.> contains audios and videos of German participants answering a set of queries or reciting fables. However, the text transcripts are not provided. EATD-Corpus <cit.> is released to facilitate the research in depression detection. It consists of audios and text transcripts extracted from the interviews of 162 student volunteers recruited from Tongji University. Although EATD-Corpus provides the text transcripts, its data scale is small. § MMPSY: A NEW BENCHMARK Since the public anxiety/depression datasets are quite scarce, we construct and release a new Chinese multi-modal mental health dataset named MMPsy to facilitate the research in both anxiety detection and depression detection. Like EATD-Corpus <cit.>, MMPsy consists of audios and text transcripts extracted from the interviews of more than 20 thousand primary and secondary school students in the Guangdong Province of China. All the volunteers have signed informed consent and tried their best to guarantee the authenticity of all the information provided. Each volunteer is required to answer 10 specially designed questions about anxiety and depression and complete a GAD-7 questionnaire <cit.> or PHQ-9 questionnaire <cit.> for different disorders detection. The GAD-7 questionnaire consists of 7 items that measure the severity of anxiety. It is a commonly used questionnaire for psychologists to screen the anxiety severity of individuals in practice. A raw GAD-7 score can be summarized from the questionnaire. For Chinese people, an index GAD-7 score greater than or equal to 10 implies that the individual is in anxiety. Similarly, the PHQ-9 questionnaire consists of 9 items that measure the severity of depression. It is a commonly used questionnaire for psychologists to screen the depression severity of individuals. A raw PHQ-9 score can be summarized from the questionnaire. For Chinese people, an index PHQ-9 score greater than or equal to 10 implies that the individual is in depression. According to the criterion, there are 704 anxious volunteers and 7,032 non-anxious volunteers in the anxiety subset of MMPsy while there are 853 depressed volunteers and 3,394 non-depressed volunteers in the depression subset of MMPsy. In the final dataset, the overall duration of response audios about anxiety is about 145.9 hours and the overall duration of response audios about depression is about 69.5 hours. The construction of MMPsy can be summarized as two steps: data collection and data preprocessing. * Data collection. We develop a web app to conduct the interview and to collect audio responses and the questionnaires. Our web app will ask the interviewees 10 questions and let them finish the GAD-7 questionnaires or PHQ-9 questionnaires. During the interview, the audio responses will be recorded and uploaded to the server automatically. The results of the questionnaires also will be uploaded. In this step, we collected 17,247 raw interviews about anxiety and 11,306 raw interviews about depression. 
To ensure the authenticity of the questionnaires, we added some extra but irrelevant questions in the questionnaires, which are used to check the response reliability. * Data preprocessing. Since the raw interviews are noised, we perform several preprocessing operations on the collected data. First, we filtered out the unauthentic data by checking the responses of those extra questions in the questionnaires. Second, mute audios and audios less than 1 second are removed. Besides, we also removed the silent segments identified with voice activity detection in each recording. Third, the background noises are eliminated using Spleeter <cit.>. After that, Paraformer <cit.> is deployed to extract textual transcripts from audios. To ensure the correctness of extracted transcripts, we checked and corrected the transcript manually by listening to the audio and comparing the consistency between the semantics of auidos and the transcripts at the word level. After data preprocessing, we obtained 7,758 cleaned interview data for anxiety detection and 4,266 for depression detection. We randomly split these data into a training set, a validation set, and a test set in an 8:1:1 ratio. In this way, the anxiety detection part of MMPsy consists of 6,188 training volunteers, 744 validation volunteers, and 744 test volunteers while the depression detection part of MMPsy contains 3,397 training volunteers, 425 validation volunteers, and 425 test volunteers. The data statistics are listed in the Table <ref>. For each volunteer, we pack their audios and transcripts according to their sequential order and resegment the data in 60 seconds with a 10-second overlap like the way in the work <cit.>. Thus, the data of a volunteer can generate multiple samples. § MENTAL-PERCEIVER This section describes the architecture of our proposed Mental-Perceiver as shown in Figure <ref>. We first encode by applying an attention module that maps multimodal input x ∈ℝ^M × D_x to features in a latent space z ∈ℝ^2 × D_z by interacting with category prior p ∈ℝ^2 × D_p which is obtained by computing the center point of different text representations from different categories. Then, we conduct deep feature extraction on latent features z by applying a series of attention modules that take in and return features z' in this latent feature space. Finally, we decode by applying an attention module that maps latent arrays z' and the query array q ∈ℝ^2 × D_qto the final feature representation y ∈ℝ^2 × D_y. Based on the final feature representation y, we apply a Linear layer to map the feature y into class-wise logit outputs y^C_0∈ℝ^1 × 2 and y^C_1∈ℝ^1 × 2. M is the length of multimodal input while D_x, D_z, D_q, and D_y denote the feature dimension. With class-wise logits y^C_0 and y^C_1, to predict the class c' of the input x, we compute the mean values of corresponding vector elements in the y^C_0 and y^C_1 to obtain final logits y' followed by a Softmax function. §.§ Basic Attention module Following the pioneering work PerceiverIO <cit.>, all attention modules deployed in our Mental-Perceiver are implemented as Transformer-style attention <cit.>. Each attention module applies a global query-key-value (QKV) attention operation followed by a multi-layer perceptron (MLP). The MLP is independently applied to each element of the index dimension. Both the encoder and decoder take in two input feature arrays. 
The first array is used as input to the attention module's key and value networks, and another array is used as input to the module's query network for interacting and fusing with the first array. The attention module’s output has the same index dimension (the same number of elements) as the query input. Therefore, the attention module can be modeled as follows: attention(Q, K, V) = MLP( Softmax ( QK^T/√(d_k)) V) §.§ Category Semantic Prior Our Mental-Perceiver extracts deep features based on the latent features z which is produced by fusing the multimodal input x and the latent embedding p with an attention module. p is used as a query and x is used as the key and value. This means that the output z is the result that p extracts and fuse semantic information from x. So, the initialization of the latent embedding p is crucial for learning more discriminative features in the following steps to identify a user's mental state according to the user's audio and transcript text. Semantic prior has been shown to help greatly learn more discriminative features <cit.>. Therefore, we build semantic priors for different categories, one for the normal category and another for the mental disorder category. Then, we use these two semantic priors to initialize the latent embedding p ∈ℝ^2 × D with a learnable MLP for mapping the hidden size of semantic priors to D. In Mental-Perceiver, we fixed the parameters of these two semantic priors and only optimized the parameters of the learnable MLP semantic priors. To obtain the semantic priors, we simply compute the center points for different categories. Formally, given the text representation set E_t ∈ℝ^N × H of a category on the training set, where N is the number of data samples for the current category and H is the hidden size of a text representation, we can compute the semantic prior p^C_i∈ℝ^1 × H by averaging the normalized E_t at the index dimension. This procedure can be modeled as follows: p^C_i = Avg(Norm(E_t)) where Avg is the averaging function and Norm is the z-score normalization function that normalizes each vector separately in E_t. Let C_0 and C_1 denote the classes of normal people and psychological patients, respectively, we first obtain two semantic priors p^C_0 and p^C_1 according to Equation (<ref>). Then, we obtain p by concatenating p^C_0 and p^C_1 at the index dimension followed by a learnable MLP layer as follows: p = MLP(p^C_0⊙ p^C_1) where ⊙ denotes the concatenation. §.§ Encoder The encoder, consisting of an attention module, takes charge of mapping the multimodal input x into latent features z by utilizing category semantic prior-enhanced latent embedding p to query the multimodal input x. This way will fuse the multimodal input x and category semantic prior-enhanced latent embedding p and build fused prior-guided latent features z, which will be used as input to the next deep feature extraction. The encoder can be modeled as follows: z = attention(p, x, x) §.§ Deep Feature Extraction Once we obtain the latent features z, we conduct deep feature extraction based on the input latents z by applying a series of attention modules that take in and return latents z_i in this latent space iteratively. This module can be modeled as follows: z_1 = attention(z, z, z) z_2 = attention(z_1, z_1, z_1) ... z_k = attention(z_k-1, z_k-1, z_k-1) where k is a hyper-parameter that is simply set to 8. §.§ Decoder The goal of the decoder is to produce a final class-wise logit output of size 2 × 2, given a latent representation of size 2 × D_z. 
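A minimal NumPy sketch of the pipeline described so far — the attention module, the category semantic priors, the encoder, and the latent processing — is given below with toy dimensions and randomly initialized weights standing in for the learned parameters (the decoder and the training losses are described next):

import numpy as np

rng = np.random.default_rng(0)

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

def mlp(x, w, b):
    # One-layer stand-in for the per-element MLP of the attention module.
    return np.maximum(0.0, x @ w + b)

def attention(q, k, v, w, b):
    """QKV attention followed by an MLP: attention(Q, K, V) = MLP(Softmax(QK^T / sqrt(d_k)) V)."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    return mlp(softmax(scores) @ v, w, b)

def category_prior(feats):
    """Center point of z-score-normalized class-wise text features, p^C_i = Avg(Norm(E_t))."""
    feats = (feats - feats.mean(axis=1, keepdims=True)) / (feats.std(axis=1, keepdims=True) + 1e-8)
    return feats.mean(axis=0)

M, D, n_blocks = 16, 64, 8                        # toy sizes; the paper uses D_x = 768, D_z = 512, k = 8
x = rng.normal(size=(M, D))                       # multimodal (text + audio) input, M tokens of width D

# Latent embedding p: the two category priors stacked along the index dimension, then a learnable MLP.
priors = np.stack([category_prior(rng.normal(size=(32, D))),    # stand-in normal-class text features
                   category_prior(rng.normal(size=(32, D)))])   # stand-in disorder-class text features
w_p, b_p = rng.normal(size=(D, D)) / np.sqrt(D), np.zeros(D)
p = mlp(priors, w_p, b_p)                         # p in R^{2 x D}

# Encoder: z = attention(p, x, x), i.e. the priors query the multimodal input.
w_e, b_e = rng.normal(size=(D, D)) / np.sqrt(D), np.zeros(D)
z = attention(p, x, x, w_e, b_e)                  # z in R^{2 x D}

# Deep feature extraction: n_blocks latent self-attention layers, z_i = attention(z_{i-1}, z_{i-1}, z_{i-1}).
for _ in range(n_blocks):
    w_i, b_i = rng.normal(size=(D, D)) / np.sqrt(D), np.zeros(D)
    z = attention(z, z, z, w_i, b_i)

print(z.shape)                                    # (2, D): fixed-size latent, independent of the input length M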
Let z' = z_k, the decoder first applies an attention module to map the latent z' to output features y. Then, the decoder applies a linear layer to map y into the final class-wise logit output [y^C_0, y^C_1] ∈ℝ^2×2, where y^C_0∈ℝ^1×2 and y^C_1∈ℝ^1×2 are two logit outputs for indicating whether the multimodal input x matches with semantic prior p^C_0 and p^C_1. Finally, we compute the mean vector of these two output logit vectors y^C_0 and y^C_1 at the index dimension as the final classification logits y'. Therefore, the decoder can be modeled as follows: y = attention(q, z', z') [y^C_0, y^C_1] = Linear(y) y' = Mean(y^C_0⊙ y^C_1) §.§ Training Objectives To optimize our Mental-Perceiver, we deploy two losses. The first one is the matching loss ℒ_match and another one is the classification loss ℒ_cls. Both these two losses are binary cross-entropy loss functions. The matching loss aims to optimize the matching degree between the multimodal input x and its corresponding category semantic prior while the classification loss optimizes the model to be able to identify the inherent mental state according to multimodal input x with the help of category prior p and category query q. Formally, given a multimodal input x and its class C_x which can be 0 or 1, the training objectives can be modeled as follows: ℒ_match = -C_x (log y^C_0_0+log y^C_1_1) - (1-C_x) (log y^C_0_1 + log y^C_1_0) ℒ_cls = - C_x logy'_0 - (1-C_x) logy'_1 ℒ = ℒ_match + ℒ_cls where y^C_0_0 and y^C_0_1 are the 0-th element and 1-th element in the probability distribution obtained from class-wise logits y^C_0∈ℝ^1×2 by Softmax function while y^C_1_0 and y^C_1_1 are the 0-th element and 1-th element in the probability distribution obtained from class-wise logits y^C_1∈ℝ^1×2 by Softmax function. Similarly, y'_0 and y'_1 are the two probabilities on normal class (0) and mental disorder class (1) that can be obtained by applying the Softmax function on y'. § EXPERIMENTS §.§ Datasets We conduct experiments on both two subsets of MMPsy for anxiety detection and depression detection. We use MMPsy-Anxiety and MMPsy-Depression to represent these two subsets. Besides, we also verify our Mental-Perceiver on the Distress Analysis Interview Corpus - Wizard of Oz (DAIC-WOZ) dataset <cit.>. DAIC-WOZ contains clinical interviews of 189 participants designed to support the diagnosis of psychological distress conditions such as anxiety, depression, and posttraumatic stress disorder (PTSD). During each interview, several data in different formats as well as modalities are recorded simultaneously. However, only the acoustic recordings and transcriptions are chosen in this work for a fair comparison. Moreover, the given GT is an eight-item Patient Health Questionnaire depression scale (PHQ-8), which indicates the severity of depression. A PHQ-8 Score ≥ 10 implies that the participant is undergoing a mental disorder. §.§ Baselines The main baselines to be compared are listed as follows: * SVM <cit.>: a robust shallow model capable of performing binary classification efficiently by using a kernel trick, representing the data only through a set of pairwise similarity comparisons between the original data points using a kernel function, which transforms them into coordinates in the higher dimensional feature space. * RandomForest <cit.>: it is a robust ensemble learning method for classification by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees. 
* XGBoost <cit.>: it is a robust toolbox for classification via an optimized distributed gradient boosting. * NUSD <cit.>: it is a deep model ECAPA-TDNN enhanced with a speaker disentanglement method that utilizes a non-uniform mechanism of adversarial SID loss maximization. * ConvLSTM <cit.>: it is a Convolutional Bidirectional LSTM with a sub-attention mechanism for linking heterogeneous information. * PerceiverIO <cit.>: it is a general-purpose architecture that handles multimodal data with fully attention design and a flexible querying mechanism. For the shallow models SVM, RandomForest, and XGBoost, we provide the following audio features as input: F0 statistics (mean), log-energy, zero-crossing-rate, loudness, pitch period entropy, jitters, shimmers, harmonics-to-noise ratio, detrended fluctuation analysis, linear spectral coefficients-0, linear spectral frequencies-0, formants (F1), and amplitude Shannon entropy. All these features can be extracted by applying Surfboard [https://github.com/novoic/surfboard] <cit.>. Besides, we also extract topic words by TF-IDF as text features for these shallow models. For the deep models, we use BERT <cit.> to extract features and use Mel-spectrum to represent audio. §.§ Metrics In classification tasks, True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) from the confusion matrix are metrics used to measure the accuracy of model predictions. TP refers to instances where the model correctly predicted them as the positive class, while TN refers to instances where the model correctly predicted them as the negative class. On the contrary, FP represents cases where the model incorrectly labeled instances as positive, and FN denotes instances of the positive class that were incorrectly classified as negative. Based on these concepts, several common performance metrics can be defined to assess the model’s performance: * Accuracy: Accuracy (Acc) represents the proportion of all predictions that are correctly predicted. * Recall: Recall measures the model’s ability to identify all true positive instances, that is, the proportion of actual positives that are correctly identified. * Precision: Precision denotes the proportion of samples predicted as positive by the model that are actually positive, focusing on the accuracy of positive predictions. * F1-Score: F1 Score ranges from 1 to 0, with a value closer to 1 indicating better model performance, particularly in scenarios where the distribution of positive and negative samples is imbalanced. The F1 Score serves as a more comprehensive evaluation metric under such conditions. * Sensitivity: Sensitivity (Sens), also termed true positive rate, is the ratio of positive predictions to the number of actual positives. It is hence identical to the recall of the positive class, Recall(1). * Specificity: Specificity (Spec) is the ratio of negative predictions to the number of actual negatives, and therefore identical to Recall(0). * UAR: Although Acc is often used to evaluate model performance, it suffers from the data imbalance issue. The stronger the imbalance the more accuracy tends to reflect the performance of the majority class since it is a weighted accuracy. For this reason in areas where relatively small data sets predominate, such as the bio-medical field or in the paralinguistics field in general, researchers usually prefer and report the Unweighted Average Recall (UAR), which is defined as the average of Sensitivity and Specificity. 
UAR = Sensitivity + Specificity/2 During these metrics, we use Accuracy (Acc), Unweighted Average Recall (UAR), Sensitivity (Sens), and Specificity (Spec) as the main evaluation metrics for evaluating overall model performance. Meanwhile, we also report the Recall, Precision, and F1-score separately for different categories. §.§ Implementation Details We use Pytorch[http://pytorch.org] to implement our framework on Linux with two NVIDIA RTX 4090 GPU cards. The feature dimension D_x is set to 768 and other dimensions D_z, D_q, and D_y are all set to 512. In each epoch, all training data is shuffled randomly and then cut into mini-batches. The text feature and audio feature are concatenated as the multimodal input. We deploy the AdamW <cit.> optimizer for model optimization. We trained models for 200 epochs with an initial learning rate of 0.00003 and used LambdaLR to adjust the learning rate during training. The early stopping with patience 15 is deployed to accelerate training. We use the validation set for model selection and report the performance on the test set. §.§ Experiment Results §.§.§ Main Results The experiment results of our Mental-Perceiver and various baselines on three datasets MMPsy-Anxiety, MMPsy-Depression, and DAIC-WOZ are shown in Table <ref>. From the results on different datasets, we can conclude as follows. On the MMPsy-Anxiety dataset, our Mental-Perceiver outperforms baselines on Acc, UAR, and Sens while achieving competitive performance on Spec. This shows that our Mental-Perceiver can achieve better overall anxiety detection performance with a better trade-off between Sensitivity and Specificity. For different categories, we can observe that our Mental-Perceiver achieves relatively high performance in the normal category and achieves the best performance in the anxious category. On MMPsy-Depression dataset, our Mental-Perceiver outperforms baselines on Acc, UAR, and Sens while achieving competitive performance on Spec. This shows that our Mental-Perceiver can achieve better overall depression detection performance with a better trade-off between Sensitivity and Specificity. For different categories, we can observe that our Mental-Perceiver achieves the best precision and F1-score in the normal category while outperforming all baselines in the depressed category on all three metrics. On DAIC-WOZ dataset, a similar conclusion can be reached. The Mental-Perceiver outperforms baselines on Acc, UAR, and Spec. Although the Mental-Perceiver's sensitivity is lower than the baseline ConvLSTM, ConvLSTM is very poor in specificity, indicating that there is a high rate of misdiagnosis on ConvLSTM. According to the UAR, we can observe Mental-Perceiver has a better balance between the false positive rate (1 - Specificity) and the false negative rate (1 - Sensitivity). Besides, according to the metrics on different categories, we can observe that the Mental-Perceiver achieves the best overall performance in both two categories. Overall, our Mental-Perceiver can achieve the best overall performance on different datasets for different mental disorder detection, showing the effectiveness and universality of our Mental-Perceiver for detecting different mental disorders. Besides, different models can achieve varying degrees of performance, indicating the effectiveness and usability of our MMPsy dataset as a benchmark for developing and evaluating mental disorder detection models. 
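For reference, all of the metrics reported above can be obtained from the four confusion-matrix counts; the following sketch is a direct restatement of the definitions, with the mental-disorder class taken as the positive class:

import numpy as np

def classification_metrics(y_true, y_pred):
    """Confusion-matrix metrics used above (positive class = mental disorder, label 1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    acc  = (tp + tn) / len(y_true)
    sens = tp / (tp + fn) if tp + fn else 0.0      # Recall(1), sensitivity
    spec = tn / (tn + fp) if tn + fp else 0.0      # Recall(0), specificity
    prec = tp / (tp + fp) if tp + fp else 0.0      # precision of the positive class
    f1   = 2 * prec * sens / (prec + sens) if prec + sens else 0.0
    uar  = 0.5 * (sens + spec)                     # unweighted average recall
    return {"Acc": acc, "UAR": uar, "Sens": sens, "Spec": spec, "Precision": prec, "F1": f1}

# e.g. classification_metrics([0, 0, 1, 1, 1, 0], [0, 1, 1, 0, 1, 0])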
§.§.§ Ablation study on different modalities To verify the superiority of multimodal text-audio input for mental disorder detection, we conduct an ablation study by using only audio, only text, and text+audio as input to the Mental-Perceiver. The experimental results are shown in Table <ref>. We can observe that the Mental-Perceiver with multimodal inputs achieves the best performance across various metrics on MMPsy-Depression, while achieving the best UAR, Sensitivity, Precision in the normal category, Precision in the anxious category, Recall in the anxious category, and F1 in the anxious category, together with reasonable and competitive performance on Acc, Specificity, Recall in the normal category, and F1 in the normal category. Overall, multimodal input helps detect mental disorders. §.§.§ The effects of category priors and matching loss To investigate the effects of the category priors and the matching loss, we conduct a study by removing the category priors and the matching loss. The results are shown in Table <ref>. It can be seen that each component improves the performance across various metrics, showing the effectiveness of the category priors and the matching loss. § CONCLUSION In this work, we construct a new large-scale Multi-Modal Psychological assessment corpus (MMPsy) about anxiety and depression in Mandarin-speaking adolescents. The MMPsy contains audios and extracted transcripts of responses from anxious/depressed and non-anxious/non-depressed adolescent volunteers, with 7,758 cleaned interviews for anxiety detection and 4,266 for depression detection. We further propose a novel mental disorder estimation network, named Mental-Perceiver, to detect anxious/depressive mental states automatically from users' audio and corresponding transcripts. Extensive experiments on MMPsy and the public DAIC-WOZ dataset show the effectiveness and superiority of our proposed Mental-Perceiver.
http://arxiv.org/abs/2408.10996v1
20240820164345
Approximation Rates for Shallow ReLU$^k$ Neural Networks on Sobolev Spaces via the Radon Transform
[ "Tong Mao", "Jonathan W. Siegel", "Jinchao Xu" ]
stat.ML
[ "stat.ML", "cs.LG", "cs.NA", "math.NA", "62M45, 41A25, 41A30" ]
Approximation Rates for Shallow ReLU^k Neural Networks on Sobolev Spaces via the Radon Transform Tong Mao Computer, Electrical and Mathematical Science and Engineering Division King Abdullah University of Science and Technology Thuwal 23955, Saudi Arabia Jonathan W. Siegel Department of Mathematics Texas A&M University College Station, TX 77843 Jinchao Xu Computer, Electrical and Mathematical Science and Engineering Division King Abdullah University of Science and Technology Thuwal 23955, Saudi Arabia Received: date / Accepted: date ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § ABSTRACT Let Ω⊂ℝ^d be a bounded domain. We consider the problem of how efficiently shallow neural networks with the ReLU^k activation function can approximate functions from Sobolev spaces W^s(L_p(Ω)) with error measured in the L_q(Ω)-norm. Utilizing the Radon transform and recent results from discrepancy theory, we provide a simple proof of nearly optimal approximation rates in a variety of cases, including when q≤ p, p≥ 2, and s ≤ k + (d+1)/2. The rates we derive are optimal up to logarithmic factors, and significantly generalize existing results. An interesting consequence is that the adaptivity of shallow ReLU^k neural networks enables them to obtain optimal approximation rates for smoothness up to order s = k + (d+1)/2, even though they represent piecewise polynomials of fixed degree k. § INTRODUCTION We consider the problem of approximating a target function f:Ω→ℝ, defined on a bounded domain Ω⊂ℝ^d, by shallow ReLU^k neural networks of width n, i.e. by an element from the set Σ_n^k(ℝ^d) := {∑_i=1^n a_iσ_k(ω_i· x + b_i), a_i,b_i∈ℝ, ω_i∈ℝ^d}, where the ReLU^k activation function σ_k is defined by σ_k(x) = 0 x ≤ 0 x^k x > 0. We remark that when d = 1, the class of shallow ReLU^k neural networks is equivalent to the set of variable knot splines of degree k. For this reason, shallow ReLU^k neural networks are also called ridge splines and form a higher dimensional generalization of variable knot splines. The approximation theory of shallow ReLU^k neural networks has been heavily studied due to their relationship with neural networks and their success in machine learning and scientific computing (see for instance <cit.> and the references therein). Despite this effort, many important problems remain unsolved. Notably, a determination of sharp approximation rates for shallow ReLU^k neural networks on classical smoothness spaces, in particular Sobolev and Besov spaces, has not been completed except when d=1 (the theory of variable knot splines in one dimension is well known and can be found in <cit.>, for instance). Let Ω⊂ℝ^d be a bounded domain. To simplify the presentation, we will only consider the case where Ω := {x:|x| < 1} is the unit ball in ℝ^d, although we remark that our techniques give the same results for any domain Ω with smooth boundary by utilizing appropriate Sobolev and Besov extension theorems <cit.>. We define the Sobolev spaces W^s(L_q(Ω)) for integral s via the norm f_W^s(L_q(Ω)) = f_L_q(Ω) + ∑_|α| = sf^(s)_L_q(Ω), where the sum is over multi-indices α with weight s. 
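In code, an element of Σ_n^k(ℝ^d) is simply a weighted sum of ridge functions σ_k(ω_i· x + b_i). The following short Python sketch (with illustrative names and randomly chosen weights, not tied to any construction in this paper) evaluates such a shallow ReLU^k network on a batch of points:

import numpy as np

def relu_k(t, k):
    """ReLU^k activation sigma_k(t): 0 for t <= 0 and t^k for t > 0 (Heaviside step when k = 0)."""
    return np.where(t > 0, t, 0.0) ** k if k > 0 else (t > 0).astype(float)

def shallow_relu_k(x, a, omega, b, k):
    """Evaluate an element of Sigma_n^k(R^d): sum_i a_i * sigma_k(omega_i . x + b_i).
    x: (N, d) points, a: (n,) outer weights, omega: (n, d) directions, b: (n,) biases."""
    return relu_k(x @ omega.T + b, k) @ a

# Toy usage with d = 2, n = 5, k = 2 (random weights, for illustration only).
rng = np.random.default_rng(0)
d, n, k = 2, 5, 2
x = rng.uniform(-1.0, 1.0, size=(100, d))
a, omega, b = rng.normal(size=n), rng.normal(size=(n, d)), rng.normal(size=n)
print(shallow_relu_k(x, a, omega, b, k).shape)    # (100,)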
When s is not an integer, we write s = k+θ with k an integer and 0 < θ < 1, and define the fractional Sobolev spaces (see for instance <cit.>) via |f|^q_W^s(L_q(Ω)) := ∑_|α| = k∫_Ω×Ω|D^α f(x) - D^α f(y)|^q/|x - y|^d+θ qdxdy and f^q_W^s(L_q(Ω)) := f^q_L_q(Ω) + |f|^q_W^s(L_q(Ω)), with the standard modifications when q = ∞. Sobolev spaces are central objects in analysis and the theory of PDEs (see for instance <cit.>). We remark also that when q = 2 and Ω = ℝ^d, the Sobolev norm can be conveniently characterized via the Fourier transform, specifically |f|^2_W^s(L_2(ℝ^d))∫_ℝ^d |ξ|^2s|f̂(ξ)|dξ, where f̂ denotes the Fourier transform of f defined by (see <cit.>) f̂(ξ) := ∫_ℝ^d e^iξ· x f(x)dx. The Besov spaces may be defined using the modulus of smoothness (see for instance <cit.>), which for a function f∈ L_q(Ω) is given by ω_k(f,t)_q = sup_|h|≤ tΔ^k_h f_L_q(Ω_kh). Here Δ^k_h f is the k-th order finite difference in the direction h and Ω_kh = {x∈Ω, x + kh∈Ω}, which guarantees that all terms of the finite difference lie in Ω. For s > 0 and 1≤ r,q≤∞ the Besov norm is defined by |f|_B^s_r(L_q(Ω)) := (∫_0^∞ω_k(f,t)_q^r/t^sr+1dt)^1/r when r < ∞ and by |f|_B^s_∞(L_q(Ω)) := sup_t > 0 t^-sω_k(f,t)_q, when r = ∞. The Besov spaces are closely related to approximation by trigonometric polynomials, splines, and wavelets, and have numerous applications in approximation theory, harmonic analysis, signal processing, and statistics (see for instance <cit.>). We remark that it is known that the Sobolev spaces for non-integral values of s are equivalent to Besov spaces (see <cit.>), specifically f_W^s(L_q(Ω))f_B^s_q(L_q(Ω)) for an appropriately smooth domain. An important theoretical question is to determine optimal approximation rates for Σ_n^k(ℝ^d) on the classes of Sobolev and Besov functions. Specifically, we wish to determine minimax approximation rates sup_f_W^s(L_q(Ω))≤ 1inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_p(Ω) and sup_f_B_r^s(L_q(Ω))≤ 1inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_p(Ω) for different values of the parameters s,p,q,r and k. When d = 1, the set of shallow neural networks Σ_n^k(ℝ) simply corresponds to the set of variable knot splines with at most n breakpoints. In this case a complete theory follows from known results on approximation by variable knot splines <cit.>. When d > 1, this problem becomes considerably more difficult and only a few partial results are known. We remark that when approximating functions from a Sobolev space W^s(L_q(Ω)) or a Besov space B_r^s(L_q(Ω)) in L_p there is a significant difference depending upon whether q ≥ p or q < p. In the former case, linear methods of approximation are able to achieve an optimal approximation rate, while when q < p non-linear methods are required <cit.>. For shallow ReLU^k neural networks, existing approximation results have exclusively been obtained in the linear regime when q ≥ p. Fully understanding approximation by shallow ReLU^k neural networks in the non-linear regime when q < p appears to be a very difficult open problem. In this work, we study approximation rates for shallow ReLU^k neural networks on Sobolev spaces using existing approximation results on variation spaces (we leave the more technical case of Besov spaces to future work). The variation space corresponding to ReLU^k neural networks is defined as follows. Let Ω⊂ℝ^d be a bounded domain and consider the dictionary, i.e. 
set, of functions ℙ_k^d := {σ_k(ω· x + b), ω∈ S^d-1, b∈ [c,d]}, where the interval [c,d] depends upon the domain Ω (see <cit.> for details and intuition behind this definition). The set ℙ_k^d consists of the possible outputs of each neuron given a bound on the inner weights. The unit ball of the variation space is the closed symmetric convex hull of this dictionary, i.e. B_1(ℙ_k^d) = {∑_i=1^n a_id_i, d_i∈ℙ_k^d, ∑_i=1^n|a_i|≤ 1}, where the closure is taken for instance in L_2 (it is known that the closure is the same when taken in different norms as well <cit.>). Given the unit ball, we may define the variation space norm via f_𝒦_1(ℙ_k^d) = inf{c > 0: f∈ cB_1(ℙ_k^d)}. The variation space will be denoted 𝒦_1(ℙ_k^d) := {f∈ X: f_𝒦_1(ℙ_k^d) < ∞}. We remark that the variation space can be defined for a general dictionary, i.e. bounded set of functions, 𝔻 (see for instance <cit.>). This space plays an important role in non-linear dictionary approximation and the convergence theory of greedy algorithms <cit.>. In addition, the variation spaces 𝒦_1(ℙ_k^d) play an important role in the theory of shallow neural networks and have been extensively studied in different forms recently <cit.>. An important question regarding the variation spaces is to determine optimal approximation rates for shallow ReLU^k networks on the space 𝒦_1(ℙ_k^d). This problem has been studied in a series of works <cit.>, with the (nearly) optimal rate of approximation, inf_f_n∈Σ_n^k(ℝ^d)f - f_n≤ Cf_𝒦_1(ℙ_k^d)n^-1/2-2k+1/2d, recently being obtained for the L_2-norm in <cit.> and in the L_∞-norm in <cit.>. To be precise, this rate is optimal up to logarithmic factors, which is shown in <cit.> under a mild restriction on the weights, while the general optimality follows from the results proved in this work. A promising approach to obtaining approximation rates for ReLU^k neural networks on Sobolev and Besov spaces is to use the approximation rate (<ref>) obtained on the variation space to obtain rates on Sobolev and Besov spaces via an interpolation argument, i.e. by approximating the target function f first by an element of the variation space and then approximating via a shallow neural network. This type of argument was applied in <cit.>, where an approximation rate of inf_f_n∈Σ_n^1(ℝ^d)f - f_n_L_∞(Ω)≤ Cf_W^1(L_∞(Ω))(n/logn)^-1/d was proved for the class of Lipschitz functions W^1(L_∞(Ω)). We remark that, due to a minor error, the proof in <cit.> is only correct when d ≥ 4. This approach was extended in <cit.> (see also <cit.>) to larger values of the smoothness s and the logarithmic factor was removed, which gives the approximation rate inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_∞(Ω)≤ Cf_W^s(L_∞(Ω))n^-s/d for all s < (d+2k+1)/2. Up to logarithmic factors, this rate is optimal, which solves the problem (<ref>) when p = q = ∞. Indeed, lower bounds on the approximation rates (<ref>) can be obtained using either the VC-dimension or pseudodimension of the class of shallow neural networks Σ_n^k(ℝ^d) (see <cit.>). This gives a lower bound of sup_f_W^s(L_q(Ω))inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_p(Ω)≥ C(nlog(n))^-s/d for all s,d,k,p and q. Removing the remaining logarithmic gap here appears to be a very difficult problem. We remark that there are also other approaches which do not utilize the variation space, such as the method developed in <cit.>, where it is proved for Sobolev spaces that inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_2(Ω)≤ Cf_W^s(L_2(Ω))n^-s/d for s ≤ (d+2k+1)/2. 
Again, this rate is optimal up to logarithmic factors, giving the solution to (<ref>) when p = q = ∞. In this work, we utilize approximation rates for the variation space and an interpolation argument to extend the approximation rates derived in previous work to a variety of new cases. The key component of our analysis is the following embedding theorem, which is proved using a Radon space characterization of the variation space <cit.>. Let s = (d+2k+1)/2. Then we have the embedding W^s(L_2(Ω)) ⊂𝒦_1(ℙ_k^d). This result shows that the L_2-Sobolev space with a certain amount of smoothness embeds into the variation space 𝒦_1(ℙ_k^d), and has quite a few important consequences. First, combining this with the approximation rate (<ref>), we obtain the following corollary. Let s = (d+2k+1)/2. Then we have the approximation rate inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_∞(Ω)≤ Cf_W^s(L_2(Ω))n^-s/d. Note that in (<ref>) we have error measured in L_p with p = ∞ and smoothness measured in L_q with q = 2. In particular, this result gives to the best of our knowledge the first approximation rate for ridge splines in the non-linear regime when q < p. However, this only applies to one particular value of s, p and q, and it is an interesting open question whether this can be extended more generally. To understand the implications for the linear regime, we note that it follows from Corollary <ref> that inf_f_n∈Σ_n^k(ℝ^d)f - f_n_L_p(Ω)≤ Cf_W^s(L_p(Ω))n^-s/d for any 2≤ p≤∞ with s = (d+2k+1)/2. Standard interpolation arguments can now be used to give approximation rates for Sobolev spaces in the regime when p = q and p ≥ 2. Suppose that 2≤ p ≤∞ and 0 < s ≤ k + d+1/2. Then we have inf_f_n∈Σ_n^kf - f_n_L_p(Ω)≤ Cf_W^s(L_p(Ω))n^-s/d. Corollary <ref> extends the approximation rates obtained in <cit.> to all p ≥ 2. We remark that using interpolation an analogous result can be proved for Besov spaces, but for simplicity we will not discuss this more technical result in this paper. Note that in Corollary <ref>, we required the index p ≥ 2. When d = 1, i.e. in the case of one-dimensional splines, it is well-known that the same rate also holds when p < 2. In this case, Theorem <ref> can actually be improved to (see <cit.>, Theorem 3) W^s(L_1(Ω)) ⊂𝒦_1(ℙ_k^d) for s = k+1 (the case d = 1 in Theorem <ref>), and approximation rates for all 1≤ p ≤∞ easily follow from this in an analogous manner. However, we remark that this method of proof fails when d > 1, since the embedding (<ref>) fails in this case for the value of s in Theorem <ref>, which is required to obtain the approximation rate in Corollary <ref>. This can be seen by noting that 𝒦_1(ℙ_k^d) ⊂ L_∞(Ω), and thus if (<ref>) holds, then we must have W^s(L_1(Ω)) ⊂ L_∞(Ω) which by the Sobolev embedding theory implies that s ≥ d, which is not compatible with Theorem <ref> unless (d+2k+1)/2 ≥ d, i.e. k ≥ (d-1)/2. For this reason the current method of proof cannot give the same approximation rates when d > 1 for all values of 1≤ p < 2 and k ≥ 0. Resolving these cases is an interesting open problem, which will require methods that go beyond the variation spaces 𝒦_1(ℙ_k^d), for instance by generalizing the analysis in <cit.>. Let us also remark that the embedding given in Theorem <ref> is sharp in the sense of metric entropy. Recall that the metric entropy numbers of a compact set K⊂ X in a Banach space X is defined by ϵ_n(K)_X = inf{ϵ > 0: K is covered by 2^n balls of radius ϵ}. This concept was first introduced by Kolmogorov <cit.> and gives a measure of the size of compact set K⊂ X. 
Roughly speaking, it gives the smallest possible discretization error if the set K is discretized using n-bits of information. It has been proved in <cit.> that the metric entropy of the unit ball B_1(ℙ_k^d) satisfies ϵ_n(B_1(ℙ_k^d))_L_2(Ω) n^-1/2 - 2k+1/2d. Moreover, the results in <cit.> imply that the metric entropy decays at the same rate in all L_p(Ω)-spaces for 1≤ p ≤∞ (potentially up to logarithmic factors). By the Birman-Solomyak theorem <cit.>, this matches the rate of decay of the metric entropy with respect to L_p(Ω) of the unit ball of the Sobolev space W^s(L_2(Ω)) for s = (d+2k+1)/2. This means that both spaces in Theorem <ref> have roughly the same size in L_p(Ω). Finally, let use relate these results to the existing literature on ridge approximation. Ridge approximation is concerned with approximating a target function f by an element from the set ℛ_n := {∑_i=1^n f_i(ω_i· x), f_i:ℝ→ℝ, ω_i∈ S^d-1}, Here the functions f_i can be arbitrary one-dimensional functions and the direction ω_i lie on the sphere S^d-1. There is a fairly extensive literature on the problem of ridge approximation (see for instance <cit.> for an overview of the literature). In the linear regime optimal approximation rates are known for Sobolev and Besov spaces (see <cit.>) and we have for instance inf_f_n∈ℛ_nf - f_n_L_p(Ω)≤ Cf_W^s(L_p(Ω))n^-s/d-1 for all 1≤ p ≤∞. This result is proved by first approximating f by a (multivariate) polynomial of degree m, and then representing this polynomial as a superposition of m^d-1 polynomial ridge functions. This construction applies to neural networks provided we use an exotic activation function σ whose translates are dense in C([-1,1]) (see <cit.>). Using an arbitrary smooth non-polynomial activation function we can also reproduce polynomials using finite differences to obtain an approximation rate of O(n^-s/d) (see <cit.>). On the other hand, shallow ReLU^k neural networks always represent piecewise polynomials of fixed degree k, and our results do not proceed by approximating with a high-degree polynomial. One would expect that such a method could only capture smoothness up to order k+1. Interestingly, as shown in Corollary <ref>, the non-linear nature of ReLU^k neural networks allow us to capture smoothness up to degree k + (d+1)/2. This shows that in high dimensions, suitably adaptive piecewise polynomials can capture very high smoothness with a fixed low degree, providing a Sobolev space analogue of the results obtained in <cit.>. We remark that this is a potential advantage of shallow ReLU^k networks for applications such as solving PDEs <cit.>. The paper is organized as follows. In Section <ref> we give an overview of the relevant facts regarding the Radon transform <cit.> that we will use later. Then, in Section <ref> we provide the proof of Theorem <ref>. Finally, in Section <ref> we deduce Corollary <ref>. § THE RADON TRANSFORM In this Section, we recall the definition and several important facts about the Radon transform that we will use later. The study of the Radon transform is a large and active area of research and we necessarily only cover a few basic facts which will be important in our later analysis. For more detailed information on the Radon transform, see for instance <cit.>. We also remark that the Radon transform has recently been extensively applied to the study of shallow neural networks in <cit.>. 
Given a Schwartz function f∈𝒮(ℝ^d) defined on ℝ^d, we define the Radon transform of f as ℛ(f)(ω,b) = ∫_ω· x = b f(x)dx, where the above integral is over the hyerplane ω· x = b. The domain of the Radon transform is S^d-1×ℝ, i.e. |ω| = 1 and b∈ℝ. Using Fubini's theorem, we easily see that for each ω∈ S^d-1 we have ℛ(f)(ω,·)_L_1(ℝ) = ∫_ℝ |ℛ(f)(ω,b)|db = ∫_ℝ|∫_ω· x = b f(x)dx|db ≤∫_ℝ∫_ω· x = b |f(x)|dxdb = f_L_1(ℝ^d). Integrating this over the sphere S^d-1 we get ℛ(f)_L_1(S^d-1×ℝ)≤ω_d-1f_L_1(ℝ^d), where ω_d-1 denotes the surface area of the sphere S^d-1, so that the Radon transform extends to a bounded map from L_1(ℝ^d)→ L_1(S^d-1×ℝ). In fact, the bound (<ref>) gives even more information. A fundamental result relating the Radon transform to the Fourier transform is the Fourier slice theorem (see for instance Theorem 5.10 in <cit.>). Let f∈ L_1(ℝ^d) and ω∈ S^d-1. Let g_ω(b) = ℛ(f)(ω,b). Then for each t∈ℝ we have g_ω(t) = f̂(ω t). Note that by (<ref>) we have g_ω∈ L_1(ℝ) and so the Fourier transform in Theorem <ref> is well-defined. For completeness, we give the simple proof. Expanding out the definition of the Fourier transform and Radon transforms and using Fubini gives g_ω(t) = ∫_ℝ e^-it b∫_ω· x = b f(x)dxdb = ∫_ℝ∫_ω· x = be^-itω· xf(x) dxdb = ∫_ℝ^d e^-itω· xf(x) dx = f̂(ω t), since ω· x = b. Utilizing the Fourier slice theorem and Fourier inversion, we can invert the Radon transform as follows (see for instance Section 5.7 in <cit.>). f(x) = 1/(2π)^d∫_ℝ^df̂(ξ) e^iξ· xdξ = 1/2(2π)^d∫_S^d-1∫_-∞^∞f̂(ω t)|t|^d-1e^itω· xdtdω = 1/2(2π)^d∫_S^d-1∫_-∞^∞g_ω(t)|t|^d-1e^itω· xdtdω. The inner integral above is the inverse Fourier transform of g_ω(t)|t|^d-1 evaluated at ω· x. This gives the inversion formula f(x) = ∫_S^d-1 H_dℛf(ω,ω· x)dω, where the operator H_d acts on the b-coordinate and is defined by the (one-dimensional) Fourier multiplier H_dg(t) = 1/2(2π)^d|t|^d-1ĝ(t). The inversion formula (<ref>) is typically called the filtered back-projection operator and is often applied to invert the Radon transform in medical imaging applications (see for instance <cit.>). It is valid provided that the Fourier inversion formula is valid, for instance whenever f is a Schwartz function. § EMBEDDINGS OF SOBOLEV SPACES INTO RELU^K VARIATION SPACES Our goal in this section is to prove Theorem <ref> on the embedding of Sobolev spaces into the neural network variation space. By a standard density argument and the Sobolev extension theory (see for instance <cit.>) it suffices to prove that f_𝒦_1(ℙ_k^d)≤ Cf_W^s(L_2(ℝ^d)) for s = (d + 2k + 1)/2 and every function f∈ C^∞_c(𝔹_2^d). Here the norm on the left-hand side is the variation norm of f restricted to Ω, the constant C is independent of f, and 𝔹_2^d denotes the ball of radius 2 in ℝ^d (any bounded domain containing Ω will also do). Since f is a Schwartz function, we may use the Radon inversion formula (<ref>) to write f(x) = ∫_S^d-1 F_ω(ω· x)dω, where F_ω(t) = H_dℛf(ω,t). We remark also that since f∈ C^∞_c(𝔹_2^d), we have F_ω∈ C^∞(ℝ) for each ω∈ S^d-1 (it is not necessarily compactly supported due to the Hilbert transform in the filtered back-projection operator). Next, we use the Peano kernel formula to rewrite (<ref>) for x in the unit ball as f(x) = p(x) + 1/k!∫_S^d-1∫_-1^ω· x F^(k+1)_ω(b)(ω· x - b)^kdbdω = p(x) + 1/k!∫_S^d-1∫_-1^1 F_ω^(k+1)(b)σ_k(ω· x - b)dbdω, where p(x) is a polynomial of degree at most k given by p(x) = ∫_S^d-1∑_j=0^kF^(j)_ω(-1)/j!(ω· x + 1)^jdω. 
Now Hölder's inequality implies that ∫_S^d-1∫_-1^1 |F^(k+1)_ω(b)|dbdω≤ C∫_S^d-1(∫_-1^1 |F^(k+1)_ω(b)|^2db)^1/2dω ≤ C∫_S^d-1(∫_ℝ |F^(k+1)_ω(b)|^2db)^1/2dω = C∫_S^d-1(∫_ℝ |t^k+1F̂_ω(t)|^2dt)^1/2dω. Utilizing the Fourier slice theorem, the definition of the filtered back-projection operator H_d, and Jensen's inequality, we obtain the bound ∫_S^d-1∫_-1^1 |F^(k+1)_ω(b)|dbdω ≤ C∫_S^d-1(∫_ℝ |t^k+1F̂_ω(t)|^2dt)^1/2dω = C∫_S^d-1(∫_-∞^∞ |t|^2s+d-1|ℛ̂(f)(ω,t)|^2dt)^1/2 dω ≤ C(∫_S^d-1∫_-∞^∞ |t|^2s+d-1|ℛ̂(f)(ω,t)|^2dt dω)^1/2 = C(2∫_ℝ^d |ξ|^2s|f̂(ξ)|^2dξ)^1/2 = C|f|_W^s(L_2(ℝ^d)), where ℛ̂(f)(ω,t) denotes the one-dimensional Fourier transform of b↦ℛ(f)(ω,b). Setting g(x) := 1/k!∫_S^d-1∫_-1^1 F_ω^(k+1)(b)σ_k(ω· x - b)dbdω, the bound (<ref>) implies that (see for instance Lemma 3 in <cit.>) ‖g‖_𝒦_1(ℙ_k^d)≤∫_S^d-1∫_-1^1 |F^(k+1)_ω(b)|dbdω≤ C|f|_W^s(L_2(ℝ^d)). It also immediately follows from (<ref>) that ‖g‖_L_2(Ω)≤ C∫_S^d-1∫_-1^1 |F^(k+1)_ω(b)|dbdω≤ C|f|_W^s(L_2(ℝ^d)), since the elements of the dictionary ℙ_k^d are uniformly bounded in L_2. This implies that ‖p‖_L_2(Ω) = ‖f - g‖_L_2(Ω)≤‖f‖_L_2(Ω) + ‖g‖_L_2(Ω)≤ C‖f‖_W^s(L_2(ℝ^d)). Since all norms on the finite dimensional space of polynomials of degree at most k are equivalent, we thus obtain ‖p‖_𝒦_1(ℙ_k^d)≤ C‖f‖_W^s(L_2(ℝ^d)), which combined with (<ref>) gives ‖f‖_𝒦_1(ℙ_k^d)≤ C‖f‖_W^s(L_2(ℝ^d)) as desired. § APPROXIMATION UPPER BOUNDS FOR SOBOLEV SPACES In this section, we deduce the approximation rates in Corollary <ref> from Theorem <ref> and Corollary <ref>. This result follows easily from the interpolation theory characterizing the interpolation spaces between the Sobolev space W^s(L_p(Ω)) and L_p(Ω) (see for instance <cit.>, Chapter 6 for the one dimensional case), but for the reader's convenience we give a simple direct proof (which contains the essential interpolation argument). We remark that a similar, but more complicated interpolation argument can be used to obtain approximation rates for Besov spaces as well. The first step in the proof is to note that by the Sobolev extension theorems (see for instance <cit.>) we may assume that f is defined on all of ℝ^d, f is supported on the ball of radius (say) 2 (or some other domain containing Ω), and ‖f‖_W^s(L_p(ℝ^d))≤ C‖f‖_W^s(L_p(Ω)) for a constant C = C(Ω). Let ϕ:ℝ^d→ [0,∞) be a smooth radially symmetric bump function supported in the unit ball and satisfying ∫_ℝ^dϕ(x)dx = 1. For ϵ > 0, we define ϕ_ϵ:ℝ^d→ [0,∞) by ϕ_ϵ(x) = ϵ^-dϕ(x/ϵ) and form the approximant f_ϵ(x) = ∑_t=1^ρ\binom{ρ}{t}(-1)^t-1∫_ℝ^dϕ_ϵ(y)f(x - ty)dy, where ρ > s is an integer. Using that ∫ϕ_ϵ(y)dy = 1, we estimate the error ‖f - f_ϵ‖_L_p by ‖f - f_ϵ‖_L_p(ℝ^d)≤‖∫_ℝ^dϕ_ϵ(y) (∑_t=0^ρ\binom{ρ}{t}(-1)^tf(x - ty))dy‖_L_p(dx). Now, ϕ_ϵ is supported on a ball of radius ϵ and ‖∑_t=0^ρ\binom{ρ}{t}(-1)^tf(x - ty)‖_L_p(dx)≤ω_ρ(f,|y|)_p ≤ C|y|^s‖f‖_W^s(L_p(Ω)) for a constant C = C(s,p,d) by the definition of the Besov space B^s_∞(L_p(ℝ^d)) (here ω_ρ(f,r)_p denotes the ρ-th order modulus of smoothness). Thus the triangle inequality implies that ‖f - f_ϵ‖_L_p(ℝ^d)≤∫_ℝ^dϕ_ϵ(y) ‖∑_t=0^ρ\binom{ρ}{t}(-1)^tf(x - ty)‖_L_p(dx)dy ≤ Cϵ^s ‖f‖_W^s(L_p(ℝ^d)), since ‖ϕ_ϵ‖_L_1(ℝ^d) = 1. The next step is to bound the W^α(L_2(ℝ^d))-norm of f_ϵ, where α = (d + 2k+1)/2. Observe that since ρ is fixed depending upon s, it suffices to bound ‖∫_ℝ^dϕ_ϵ(y) f(x - ty)dy‖_W^α(L_2(ℝ^d,dx)) for each fixed integer t ≥ 1. To do this, we first make a change of variables to rewrite f_ϵ,t(x) := ∫_ℝ^dϕ_ϵ(y) f(x - ty)dy = 1/t^d∫_ℝ^dϕ_ϵ(y/t) f(x - y)dy = ∫_ℝ^dϕ_tϵ(y) f(x - y)dy. Taking the Fourier transform, we thus obtain f̂_ϵ,t(ξ) = f̂(ξ)ϕ̂(tϵξ). 
We now estimate the W^α(L_2(ℝ^d))-norm of f_ϵ,t as follows: |f_ϵ,t|^2_W^α(L_2(ℝ^d))≂∫_ℝ^d|ξ|^2α|f̂_ϵ,t(ξ)|^2dξ = ∫_ℝ^d|ξ|^2α|f̂(ξ)|^2|ϕ̂(tϵξ)|^2dξ. Note that since f is supported on a ball of radius 2, we have (recall that p ≥ 2) ∫_ℝ^d |ξ|^2s|f̂(ξ)|^2dξ≂ |f|^2_W^s(L_2(ℝ^d))≤ C‖f‖^2_W^s(L_p(ℝ^d)). Thus Hölder's inequality implies that |f_ϵ,t|^2_W^α(L_2(ℝ^d)) ≤(∫_ℝ^d |ξ|^2s|f̂(ξ)|^2dξ)(sup_ξ∈ℝ^d |ξ|^2(α - s)|ϕ̂(tϵξ)|) ≤ C‖f‖^2_W^s(L_p(ℝ^d))(sup_ξ∈ℝ^d |ξ|^2(α - s)|ϕ̂(tϵξ)|). By changing variables, we see that (sup_ξ∈ℝ^d |ξ|^2(α - s)|ϕ̂(tϵξ)|) = (tϵ)^-2(α - s)(sup_ξ∈ℝ^d |ξ|^2(α - s)|ϕ̂(ξ)|) ≤ Cϵ^-2(α - s), since the supremum above is finite (ϕ is a Schwartz function). Hence, we get |f_ϵ,t|_W^α(L_2(ℝ^d))≤ C‖f‖_W^s(L_p(ℝ^d))ϵ^-(α - s). In addition, we clearly have from the triangle inequality that ‖f_ϵ,t‖_L_2(ℝ^d)≤‖f‖_L_2(ℝ^d)≤‖f‖_W_2(L_2(ℝ^d)), so that if ϵ≤ 1 we obtain (applying this for all t up to ρ) ‖f_ϵ‖_W^α(L_2(ℝ^d))≤ C‖f‖_W^s(L_p(ℝ^d))ϵ^-(α - s). We now apply Corollary <ref> to obtain an f_n∈Σ_n^k(ℝ^d) such that ‖f_n - f_ϵ‖_L_p(Ω)≤ C‖f‖_W^s(L_p(ℝ^d))ϵ^-(α - s)n^-α. Combining this with the bound (<ref>), we get ‖f - f_n‖_L_p(Ω)≤ C‖f‖_W^s(L_p(ℝ^d))(ϵ^s + n^-αϵ^-(α - s)). Finally, choosing ϵ = n^-1/d and recalling that α = (d + 2k+1)/2 completes the proof. § ACKNOWLEDGEMENTS We would like to thank Ronald DeVore, Robert Nowak, Rahul Parhi, and Hrushikesh Mhaskar for helpful discussions during the preparation of this manuscript. Jonathan W. Siegel was supported by the National Science Foundation (DMS-2424305 and CCF-2205004) as well as the MURI ONR grant N00014-20-1-2787. Tong Mao and Jinchao Xu are supported by the KAUST Baseline Research Fund. 
http://arxiv.org/abs/2408.11440v1
20240821085100
LAHAJA: A Robust Multi-accent Benchmark for Evaluating Hindi ASR Systems
[ "Tahir Javed", "Janki Nawale", "Sakshi Joshi", "Eldho George", "Kaushal Bhogale", "Deovrat Mehendale", "Mitesh M. Khapra" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Hindi, one of the most spoken languages of India, exhibits a diverse array of accents due to its usage among individuals from diverse linguistic origins. To enable a robust evaluation of Hindi ASR systems on multiple accents, we create a benchmark, LAHAJA, which contains read and extempore speech on a diverse set of topics and use cases, with a total of 12.5 hours of Hindi audio, sourced from 132 speakers spanning 83 districts of India. We evaluate existing open-source and commercial models on LAHAJA and find their performance to be poor. We then train models using different datasets and find that our model trained on multilingual data with good speaker diversity outperforms existing models by a significant margin. We also present a fine-grained analysis which shows that the performance declines for speakers from North-East and South India, especially with content heavy in named entities and specialized terminology. § INTRODUCTION Hindi is one of the most widely spoken languages of India with 528M speakers identifying it as their first language and another 163M identifying it as their second or third language. People across the country learn and speak Hindi for personal, political and/or employment reasons, and it serves as an unofficial lingua franca for day-to-day activities in several parts of the country. As a result there is significant variation in the accents of people speaking Hindi across the country with regional influences as well as influences from the primary language. These regional influences stem from the rich linguistic diversity of India which has 22 scheduled languages, 122 major languages, and 1599 other languages, as per the Census of 2011. Speakers of languages from the Dravidian family, like Tamil and Malayalam, showcase unique speech rhythms and ways of articulating words that stand in contrast to those from the Indo-Aryan group, including languages like Hindi, Marathi, and Gujarati. Accentual differences are also prominent within the Indo-Aryan languages, reflecting the diverse linguistic landscapes of India's northern, western, and eastern regions. Given the widespread usage and diversity, it is imperative to develop automatic speech recognition systems for Hindi which cater to multiple accents. While there are efforts to collect voice samples from native speakers of Hindi <cit.> for training ASR systems, there is no benchmark which has Hindi speakers from diverse backgrounds, speaking with different accents. In this work, we address this gap by releasing LAHAJA, an ASR benchmark containing multi-accent Hindi data. We follow the same collection methodology as used in Svarah <cit.>, which is an ASR benchmark containing Indian-accent English data, and IndicVoices <cit.>, which is an effort to collect data from native speakers only (as opposed to non-native speakers in our case). LAHAJA contains a total of 12.5 hours of Hindi data collected from 132 speakers of which 122 are non-native speakers. These speakers were spread across 82 districts spanning 18 states in India as shown in Figure <ref>. The set of native languages spoken by these speakers encompasses 19 of the 22 constitutionally recognised languages of India, spread across 4 distinct language families. We evaluate existing open source and commercial models on LAHAJA to understand the current state of ASR for Hindi. In addition, we train multiple model variants based on the Conformer architecture by using different data sources. 
We observe that (i) our model outperforms all existing models on (ii) our model trained on multilingual data performs better perhaps due to better speaker diversity and (iii) in low resource monolingual settings adding synthetic code-mixed data helps. We also present a fine-grained analysis across different accents and content categories and observe that the performance is poor on speakers from North-East India and South India, with a sharp drop on content rich in named entities and terminology from specific domains. All the code, datasets, models and scripts have been made publicly available[<https://github.com/AI4Bharat/Lahaja>] and we hope that they will enable further research on multi-accent Hindi ASR systems. § We now describe the process of creating . As mentioned earlier, we largely follow the same methodology as used in <cit.> but focus on non-native speakers of Hindi. §.§ Recruitment of Speakers We selected 132 participants from 18 out of the 28 states of India, of which 122 were non-native speakers and identified Hindi as their second, third or fourth language. The primary languages of these non-native speakers covered 19 of the 22 scheduled languages of India belonging to Indo-Aryan, Dravidian, and Tibeto-Burman language families. For each of the 19 languages, we recruited 3–5 participants who could speak Hindi. This included 65 males and 67 females, with 6.5 hours of male speech and 6.0 hours of female speech. We included participants from diverse age groups: 18–30, 30–45, 45–60, and 60+, with roughly equal representation in each age group. Participants came from various segments (unemployed, students, blue-collar, and white-collar) with varying education levels (upto 12th grade, undergraduates, graduates, and postgraduates). Participants were briefed about the task, and were clearly informed that their voice samples will be used to develop and evaluate speech recognition models. Their voice samples were recorded only after they willingly agreed and signed a consent form. The participants were appropriately compensated for their work according to daily wages in their region. The entire process was reviewed and approved by our Institute Ethics Committee. §.§ Data collection For recording voice samples, we used Microsoft's open-source Karya platform <cit.>. Once a participant is identified, we onboard them by asking them to fill a web-form which collects participant's meta-data such as age, gender, district, primary language and topics/domains of interest. Once registered, the participants are asked to download and install the Karya application. The participant then performs the following tasks on the Karya. Read speech: To ensure good vocabulary coverage we use 1K sentences from Wikipedia articles covering 13 domains, as released by <cit.>. We ask non-native speakers of Hindi to read out these sentences as it is. Digital interactions with voice assistants: Following <cit.>, we ask speakers to record utterances typically found in digital transactions with voice assistants. These digital transactions cover interactions with (i) in-home assistants for everyday tasks such as setting an alarm, switching on the light, playing music, etc. (ii) digital payment services covering multiple intents such as checking account balance, transferring money, paying electricity bill, etc. (iii) online grocery shopping apps covering multiple intents such as placing an order, seeking a refund, changing delivery address, etc. 
and (iv) online government services covering multiple intents such as applying for a service, checking the status of application, renewing a service, etc. The diversity in the applications covered ensures that the benchmark has a good representation of number sequences, alphanumeric codes, brand names, product names, bank names, government scheme names, application specific terminology and code mixed content (English-Hindi) typically found in such interactions. Extempore conversations: We use a carefully curated list <cit.> of 2.5K questions from 21 domains such as tourism, government etc., and 28 topics of interest such as reading, painting etc. Next, we request each participant to select two topics they are interested in and two domains with which they are familiar and capable of answering questions about. Some sample questions include “Technology: How have smartphones made life better?”, “Government: Given a chance, what policies will you introduce to aid farmers in your area” , “Reading: Do you have a favorite book? If so, what is it and why do you like it?” and so on. While the examples shown here are in English, the questions are translated to Hindi and shown to the participants. In addition to the above we also use some icebreaker questions to warm up the participants. These included questions about their mother tongue, their everyday life, their state/district and so on. Named entities: To get a good representation of named entities typically encountered in downstream applications, we ask users to speak any 5 numbers , any 5 dates, any 5 person names, names of any 5 Indian cities, any 5 Indian states, any 5 Indian districts, any 5 countries, and any 5 international cities. Each participant thus reads 20 sentences across both read speech and digital interactions, and answers 8 questions on selected domains and topics of interest. §.§ Transcription We adhere to the guidelines as outlined in <cit.> for transcribing the collected audio samples. We use an open-source platform, Shoonya <cit.>, for transcription which supports multiple Indian languages and a maker-checker workflow. The workflow ensures that the initial transcript (maker) is verified by a senior transcriber (checker). All transcripts are generated in the native Devanagari script of Hindi. Our transcribers are language experts with several years of experience in transcription and translation tasks. We first split the larger audio files into segments using Silero Voice Activity Detection <cit.> and then provide these segments to the transcribers for transcription. Finally, we downsample the chunked audios to 16kHz, resulting in 16kHz mono 16-bit PCM wav audios. §.§ Statistics Table <ref> shows statistics of split across native speakers belonging to different languages. Table <ref> shows the statistics grouped across different content categories which allows for a fine-grained evaluation of downstream models on . These categories were created by grouping roughly related domains and topics of interest. For example domains like education, govt., health, legal are grouped into `public resources'. § EXPERIMENTAL SETUP Baselines: To establish a baseline, we evaluate the performance of the following existing models on . * MMS <cit.>: This is Meta's open-source 300M wav2-vec2 <cit.> model, supporting 1107 languages, including Hindi. 
* WhisperV3: This refers to the latest open-source Whisper <cit.> model, trained on 680k hours of data, having 1550M parameters and supporting 100+ languages, including Hindi * Azure: This refers to the Hindi speech to text systems, commercially made available by Microsoft through their SDKs. * Google Chirp: This refers to Google USM <cit.> model which is made commercially available through Google Cloud APIs. IndicASR model: We train a Conformer-L <cit.> with a hybrid RNNT-CTC <cit.> decoder. We trained 4 different variants of the model by starting with the pretrained checkpoint of an English ASR model, Nvidia-En-SSL <cit.>, and fine-tuning it on different datasets as described below. We found that starting with english checkpoint helped in faster convergence. * M1: This model was trained on the Hindi subset of the IndicVoices dataset which contains 65 hours. * M2: This is a multilingual model trained on the entire IndicVoices dataset which contains 1509 hours summed up across 22 Indian languages. * M3: This is a monolingual model trained on 2285 hours by combining the Hindi subsets of Vistaar <cit.>, Spring-Inx <cit.> and IndicVoices. * M4: A lot of extempore content in Indian languages is code-mixed with English, especially for non-native speakers. Hence, we do an interesting experiment where we train a model using 65 hours of Hindi data from IndicVoices plus an additional 65 hours of synthetic data. This synthetic data is obtained by taking 65 hours of English ULCA ASR data <cit.> which contains English content spoken by Indian users. We transliterate the English transcripts to Devanagari script using an open source transliteration model, IndicXlit <cit.>. Thus we created a English-Hindi mixed dataset which contains original Hindi audios as well as English audios which are transcribed using Devanagari script (the content is in English but written in Devanagari script, as is the case in code-mixing). We trained all the models for a maximum of 130k steps and employed early stopping with a patience of 5k steps. We set the max sequence length to 30 secs, used batch size of 16 audios per GPU on 8 GPUs with gradient accumulation of 4, resulting in an effective batch size of 512 audios. We used AdamW <cit.> as the optimizer with lr of 2.0 and Noam <cit.> as the LR scheduler. Evaluation metric: We used Word Error Rate (WER) as the metric to compare performance across models. § RESULTS AND DISCUSSION Performance across models: Referring to Table <ref>, we observe that our base model M1 outperforms all existing models with a minimum and maximum improvement of 2.9% WER and 13% WER, respectively. Among the baseline models, the massively multilingual open source models perform poorly as compared to the closed source commercial models from Azure and Google. Next, we compare our monolingual model, M1, with our multilingual model, M2, both trained on IndicVoices. It is observed that M2 outperforms its monolingual counterpart, by a margin of 2.7% WER. There could be two reasons for the better performance of the multilingual model (i) on aggregate it uses much more training data than the monolingual model although the amount of Hindi data is the same (ii) it sees training data in the native language of the accents studied in this work (although from a different set of speakers). We hypothesise that the second reason is more likely as otherwise the massively multilingual MMS and Whisper V3 models which have arguably trained on much larger data would also have performed better. 
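For concreteness, the WER of a hypothesis against a reference transcript is the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words. A minimal self-contained implementation is sketched below; standard toolkits such as jiwer compute the same quantity, and the example strings are hypothetical romanized placeholders rather than sentences from the benchmark.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate = (substitutions + deletions + insertions) / #reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming (Levenshtein) edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                      # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                      # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: one substitution in a four-word reference -> 25% WER
print(wer("mera naam raam hai", "mera naam shyaam hai"))   # 0.25
```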
Lastly, again referring to Table <ref>, we compare our multilingual model (M2) and our monolingual model (M3), the latter trained using two additional sources of Hindi data: Vistaar <cit.> and Spring-Inx. Here, we clearly see the effect of adding more Hindi data and observe a further reduction of 2.4% WER while moving from M2 to M3. It would have been interesting to see the effect of training a multilingual model by combining all multilingual subsets of Vistaar, Spring and IndicVoices, but due to computational constraints, we leave this as future work. Effect of adding English-Hindi mixed data: Referring to Table <ref>, we compare our monolingual model M1 with M4, which is trained with English-Hindi mixed data. Interestingly, M4 performs better than M1 by ≈1% WER. We hypothesize that since LAHAJA contains a significant amount of code-mixed data, adding synthetically created code-mixed content helps when the training data is limited (M1 uses only IndicVoices). In a separate experiment, we found that adding synthetic code-mixed data on top of resource-rich settings as in M2 and M3 does not help. Performance across accents: Figure <ref> shows the WER of our best model M3 for different accents. It is evident that the performance of M3 decreases as we move from regions where Hindi is more popularly spoken as the 2^nd language or is closely related to the region's native language to regions where this is not the case. We observe the model performs best for languages like Urdu and Sindhi with 10.5% WER and worst for Assamese with 20.5% WER. More generally, from Figure <ref> we understand that moving from Central and West India (where languages related to Hindi like Maithili, Urdu are spoken) to North East India (where languages like Assamese, Nepali are spoken) and South India (where Dravidian languages like Tamil, Telugu are spoken), we see a clear decline in performance. We hypothesise that this is due to strong influences of the primary language of the speaker, which is increasingly different from Hindi. We do see surprises (e.g., we expected the WER on the South Indian languages `kn' and `ml' to also be poor). Comparison with native accents: We now compare the performance of our model on LAHAJA and on the Hindi subset of IndicVoices, which only contains native speakers (see the right half of Table <ref>). We observe that the performance of M3 on IndicVoices, which consists of only native speakers of Hindi, is better than that on LAHAJA. This combined with the fine-grained results in Figure <ref> implies that LAHAJA is a good benchmark for evaluating performance across different accents. Performance across different content categories: In Table <ref>, we present a fine-grained evaluation of the model across different content categories. The model performs well on read speech with standard vocabulary from Wikipedia, as well as everyday tasks, icebreaker questions and some domains like business and culture. The model particularly struggles in utterances which are rich in named entities (task of fives, product reviews, online grocery shopping) and in certain domains (science and technology, agriculture and fisheries) which may have very domain-specific vocabulary. We list examples of errors in Table <ref>. § CONCLUSION We present LAHAJA, a comprehensive benchmark featuring 12.5 hours of Hindi audio from 132 speakers across 83 districts, allowing evaluation of Hindi ASR systems on multiple accents. 
Our evaluations reveal that existing open-source and commercial models fall short in accurately recognizing multi-accent Hindi speech, underscoring the challenge of accent diversity. However, by training models on multilingual data that encompass a broad range of speakers, we have achieved notable improvements, surpassing existing models by a significant margin. Our fine-grained analysis further emphasizes the performance gaps for speakers from North-East and South India, particularly with content laden with named entities and specialized terminology. By making our code, datasets, and models publicly available, we aim to spur further research and development of ASR systems supporting multiple accents. § ACKNOWLEDGEMENTS We would like to thank Digital India Bhashini, the Ministry of Electronics and Information Technology (MeitY[https://www.meity.gov.in/]) of the Government of India and the Centre for Development of Advanced Computing (C-DAC[https://www.cdac.in/index.aspx?id=pune]), Pune for generously supporting this work and providing us access to multiple GPU nodes on the Param Siddhi Supercomputer. We would like to thank the EkStep Foundation and Nilekani Philanthropies for their generous grant which went into hiring human resources as well as cloud resources needed for this work. We would like to thank the team of AI4Bharat for helping us to collect data from native speakers of different languages across the country. IEEEtran
http://arxiv.org/abs/2408.11098v1
20240820180003
Supermassive black holes from inflation constrained by dark matter substructure
[ "Shin'ichiro Ando", "Shyam Balaji", "Malcolm Fairbairn", "Nagisa Hiroshima", "Koji Ishiwata" ]
astro-ph.CO
[ "astro-ph.CO", "hep-ph" ]
UT-HET-140 RIKEN-iTHEMS-Report-24 KANAZAWA-24-04 KCL-2024-40 s.ando@uva.nl GRAPPA Institute, University of Amsterdam, 1098, XH Amsterdam, The Netherlands Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo, Chiba 277-8583, Japan shyam.balaji@kcl.ac.uk Physics Department, King’s College London, Strand, London, WC2R 2LS, United Kingdom malcolm.fairbairn@kcl.ac.uk Physics Department, King’s College London, Strand, London, WC2R 2LS, United Kingdom hiroshima-nagisa-hd@ynu.ac.jp Department of Physics, Faculty of Engineering Science, Yokohama National University, Yokohama 240–8501, Japan Department of Physics, University of Toyama, 3190 Gofuku, Toyama 930-8555, Japan RIKEN Interdisciplinary Theoretical and Mathematical Sciences (iTHEMS), Wako, Saitama 351-0198, Japan ishiwata@hep.s.kanazawa-u.ac.jp Institute for Theoretical Physics, Kanazawa University, Kanazawa 920-1192, Japan § ABSTRACT Recent James Webb Space Telescope observations of high-redshift massive galaxy candidates have initiated renewed interest in the important mystery around the formation and evolution of our Universe's largest supermassive black holes (SMBHs). We consider the possibility that some of them were seeded by the direct collapse of primordial density perturbations from inflation into primordial black holes and analyze the consequences of this on current dark matter substructures assuming non-Gaussian primordial curvature perturbation distributions. We derive bounds on the enhanced curvature perturbation amplitude from the number of dwarf spheroidal galaxies in our Galaxy, observations of stellar streams and gravitational lensing. We find this bound region significantly overlaps with that required for SMBH seed formation and enables us to probe Gaussian and non-Gaussian curvature perturbations corresponding to the SMBH seeds in the range O(10^5–10^12) M_⊙. Supermassive black holes from inflation constrained by dark matter substructure Koji Ishiwata August 26, 2024 =============================================================================== § INTRODUCTION The origin of supermassive black holes (SMBHs) in our universe remains unknown. Possible origins include stellar black holes from Population-III stars <cit.>. There are also scenarios where the direct collapse of dust clouds lead to more massive halos in which fragmentation is suppressed by some additional heating mechanism <cit.>. Different models for their evolution histories will be probed with future gravitational wave (GW) observations <cit.>. As of now, it is not yet clear whether there is enough time for SMBHs to gain mass quickly enough and to do so before the early times at which they are observed in the Universe (see e.g. Ref. <cit.> for a recent review). The James Webb Space Telescope (JWST) is revealing the high-redshift universe in novel ways, offering much more insight into the seeds of SMBHs by studying their high redshift population. For instance, there is evidence for SMBHs at 8<z<11 with candidates in the range M_BH= O(10^6–10^8)M_⊙ <cit.>. These observations mount further pressure on models with low mass progenitors without super-Eddington accretion or rapid merger rates in the early universe. Feedback from super-Eddington growth can hinder SMBH growth — jets and outflows from accretion push material away, so the duration of super-Eddington growth phases should be finite <cit.> but a realistic value for the duration is still unknown. 
Merger scenarios for the SMBH origin require highly clustered early populations, which can affect SMBH growth both positively and negatively, as they can act to eject SMBHs from galaxy centers <cit.>. These uncertainties motivate another possibility, namely that our Universe’s SMBHs did not acquire most of their mass through accretion or mergers, but are rather primordial black holes (PBHs) forming from the collapse of large density fluctuations produced during cosmic inflation. Unlike smaller PBHs, the horizon re-entry of these fluctuations should happen at later epochs, or correspondingly lower temperatures, in order to contain large-enough mass within the horizon. There are several schemes to achieve such enhanced perturbations, e.g., via a phase of ultra-slow-roll <cit.> or considering multiple fields <cit.>. The homogeneity and isotropy of cosmic microwave background (CMB) observations impose strict constraints on the amplitude A_s and spectral index n_s of scalar perturbations. For example, at the pivot scale k_*=0.05  Mpc^-1, The amplitude and the spectral index is constrained as A_s=(2.099± 0.029) × 10^-9 and n_s=0.9649± 0.0042 (Planck TT,TE,EE+lowE data <cit.>). At smaller scales, constraints on the curvature perturbations are much less stringent. Currently, the amplitude for comoving wavenumbers k > 𝒪(1)  Mpc^-1 is constrained through considerations of μ- and y-type distortions in CMB observations <cit.>, the overproduction of PBHs <cit.>, density profiles of ultracompact minihalos <cit.>, free-free emission in the Planck foreground analysis <cit.>, the galaxy luminosity function <cit.> and gravitational lensing <cit.>. The enhancement of the primordial power spectrum of curvature perturbations P_ζ at large k≃ O(10^2–10^4) Mpc^-1 is constrained to be lower than around O(10^-4) [The curvature perturbation in comoving coordinates ζ is used interchangeably with the comoving curvature perturbation ℛ in the literature because they describe the same physical quantity during inflation and on superhorizon scales.]. However, the amplitude of the primordial curvature perturbation needed to explain the abundance of PBHs as SMBH seeds can be decreased if the density distribution follows non-Gaussian statistics  <cit.>. If this is the case, the current constraints listed above can be avoided. In this work, we highlight how the imprint of small-scale perturbations during inflation affects the evolution of hierarchical galaxy structures, as manifested in the dark matter (DM) halos and subhalos. Two quantities are considered as DM substructure probes: the number density of the dwarf spheroidal galaxies (dSphs) of the Milky Way <cit.> and stellar stream observations <cit.> combined with lensing analysis. The former is representative of DM substructures with visible counterparts, while we can see the signatures of substructures that are too small to host galaxies in the latter. Previous works show that they are powerful indicators of DM properties (e.g. Refs. <cit.>). We show that the same scheme is applicable for probing SMBH seeds. The structure of this article is as follows. In Sec. <ref>, we explain the formulation to connect primordial curvature perturbations and SMBH seed formation. Sec. <ref> is devoted to explaining the relevant physics of DM halos. We then show our main results and discuss their implications in Sec. <ref>, and we summarize and conclude in Sec. <ref>. § PRIMORDIAL CURVATURE PERTURBATIONS AND PBH ABUNDANCE Power spectrum. 
We investigate models with a prominent feature at small scales with wavenumber k≳𝒪(1)  Mpc^-1 in the primordial power spectrum. To evaluate the impact of the curvature perturbation in this range, we add an additional feature, namely an extra bump, on top of the nearly scale-invariant curvature perturbation spectrum that matches the features seen in the CMB P_ζ= P_ζ^(0) +P_ζ^ bump , where P_ζ^(0)(k) = A_s (k/k_*)^n_s-1 and P_ζ^ bump(k;k_p) = {[ (A-P_ζ^(0)(k_p)) (k/k_p)^n_b k ≤ k_p; 0 k>k_p ]. . Here, we introduce three parameters denoted as A, k_p, and n_b, which describe the amplitude, the corresponding wavenumber, and the growth index of the bump, respectively. Considering single-field inflation models, Ref. <cit.> suggests a steepest spectral index of n_b=4. Conversely, Ref. <cit.> argues that the spectral index could reach as high as 8 after experiencing a dip in amplitude and subsequently peaking with an index less than 4. We show some illustrative examples of the primordial power spectrum with different amplitudes and growth indices as a function of the ratio k/k_p in Fig. <ref>. Formation and evolution of PBHs. We start by considering the mass of PBHs based on their time of formation in our Universe's history. See the literature (e.g. Ref. <cit.> for recent reviews) for more details. PBHs that formed at later times should be much heavier, as their sizes are governed by the size of the cosmological horizon. During the radiation-dominated era, the horizon contains the following amount of energy <cit.> M_H=3×10^9 M_⊙(10 keV/T)^2(3.36/g_*(T))^1/2 where M_H is the horizon mass, T is temperature of the Universe and g_*(T) is the number of relativistic species at temperature T. Eq. (<ref>) provides a good estimate of the PBH mass corresponding to the temperature of the Universe at the formation epoch. When a density fluctuation is large enough, it collapses to form a PBH, the mass of which is typically smaller, given by M_BH≃γ M_H, where γ≲ 1 quantifies the efficiency of collapse <cit.>. For example, it takes γ≃ 0.8 for a narrow spectrum <cit.>. From Eq. (<ref>), the PBHs should have formed after O(1) MeV temperatures with horizon masses of around O(10^5)M_⊙ in order to be SMBH seeds, while it should be before the onset of recombination with O(1) eV (or correspondingly, M_H∼ O(10^17) M_⊙) corresponding to horizon masses at the incredulity limit. While the mass of the PBH at formation is described using the comoving wavenumber of the fluctuation <cit.>, its evolution due to accretion and mergers can be parameterized by introducing two parameters A for accretion and M for mergers <cit.> M_ PBH, 0 = A M M_ seed ≃ 20 γ· A M(k/10^6 Mpc^-1)^-2 M_⊙. Here we fix γ=0.8. Our results are not sensitive to the value of this parameter. The initial fraction of causal horizons collapsing into PBHs can be calculated via the integral β = ∫_δ_c^∞ P(δ) dδ, where P(δ) is the probability distribution function of density contrast δ and δ_c is the critical density contrast required to collapse and form a black hole. The critical threshold has no universal value, and it can range anywhere between 0.3 ≲δ_c ≲ 0.66 <cit.>. Non-Gaussianity. The treatment of the non-Gaussianity in this work follows that provided in Ref. <cit.> as we will summarize below. Following the literature, three types of statistics for curvature perturbations are considered: the Gaussian (G), the chi-square (χ^2), and the cubic-Gaussian (G^3) distributions. 
To encompass these distributions, we express the curvature perturbation as ζ = h(ζ_ G), where ζ_ G is a Gaussian field. Adopting the local ansatz, the non-Gaussianity can be expressed in the following form h(ζ_ G) = ζ_ G + 3/5f_ NL(ζ_ G^2 - σ_ζ^2) + 9/25g_ NLζ_ G^3 + …, where σ_ζ^2=⟨ζ_G^2⟩ is the variance. From here, the probability distribution function for non-Gaussian ζ is calculated as P_ NG(ζ) dζ=∑_i=1^n|d h_i^-1(ζ)/dζ| P_ G(h^-1) dζ, where h_i^-1(ζ) is the i-th solution of h(ζ_ G)=ζ and n is the number of the terms. For curvature fluctuations dominantly following G-, χ^2-, and G^3-distributions, Eq. (<ref>) follows h(ζ_ G)=ζ_ G, h(ζ_ G)∝ (ζ_ G^2-σ_ζ^2), and h(ζ_ G)∝ζ_ G^3, respectively. The formation fractions are determined by evaluating the integral in Eq. (<ref>), where the density perturbations are expressed in terms of ζ  <cit.> β_ G = erfc( ζ_c /√(2 P_ζ / K) , ), β_χ^2 = erfc( √(1/2 + ζ_c/√(2P_ζ / K)) , ), β_ G^3 = erfc[ ( ζ_c /√( 8P_ζ / 15 K))^1/3]. We note that ζ_c is the critical value of the curvature perturbation which represents the deviation from flatness in a region. Nonlinearities in the Press-Schechter or peak theory formalism are accounted for by introducing the factor K in the standard threshold integral. In the following calculation, K in Eq. (<ref>) is fixed to 2. We note that mode coupling makes it hard to keep a strict χ^2 distribution at small scales <cit.>; but these uncertainties are negligible for our cases considering narrow spectra. For the following, we will consider when ζ_c=δ_c=0.45 <cit.>, as this is the condition required of the density contrast to form a black hole. We show the formation fraction in Fig. <ref> assuming ζ_c=0.45. Here we clearly see the benefits of non-Gaussianity in relaxing the power spectrum amplitude required to produce a certain fraction β of the Universe's energy in PBHs. For example, for a power spectrum value of P_ζ=10^-2, we get ≳10^6 (≳10^7) times enhancement compared to Gaussian in the PBH fraction β for the χ^2 (G^3) distributions. We relate β to the current energy density in PBHs of seed mass M_seed as <cit.> Ω_ PBH,0 = ρ_ PBH,0(M_seed, A, M)/ρ_ crit,0 ≃2 × 10^8 γ^1/2 A√(M_⊙/M_seed)β( M_seed/M_⊙) . It should be noted that the effects from accretion or mergers on the current energy density of PBHs, Ω_ PBH,0, are minor. In fact, the energy stored in the form of PBHs is not changed by mergers. Accretions would increase the energy density, however, it can be easily compensated by a change in parameter β. The parameter β exhibits a high degree of sensitivity to variations in the power spectrum. An amplification by several orders of magnitude, whether through accretion or merging, can be offset by less than an order of magnitude increase in P_ζ, which we will illustrate later in Sec. <ref>. To calculate the total energy density of PBHs in the present Universe, we evaluate the integral Ω^ tot_ PBH,0 = ∫Ω_ PBH,0 d ln M_seed. This quantity is to be compared to the energy density of SMBHs, Ω_ SMBH. Ref. <cit.> evaluates Ω_ SMBH from the number of the galaxies that can host massive black holes <cit.> and the mass range of SMBHs as 10^6–10^8M_⊙, obtaining 10^-7≲Ω_ SMBH≲10^-5 by dividing the sum of the mass in those SMBH by the total mass of the Universe. Following the estimate in the previous work, we set Ω_ PBH,0=10^-10 as a conservative benchmark to test the PBH origin scenario for SMBHs. § DARK MATTER SUBHALOS Peaked primordial power spectra yield altered consequences for the late-time evolution of structures. 
For example, models with peaked curvature perturbation spectra can be constrained with observations of DM substructures which can be traced by satellite galaxies. Following Ref. <cit.>, we adopt the extended Press-Schechter (EPS) formalism to discuss the number of subhalos in a larger host halo of the Milky Way, given the aforementioned model Eq. (<ref>) of curvature perturbations with an enhancement parameterized by (A, k_p, n_b). In the EPS theory, instead of halo mass and redshift (m, z), one adopts (S, δ_c^ha). δ_c^ha is the halo collapse threshold at redshift z in the linearly extrapolated spherical collapse model, δ_c^ha≡ 1.686/D(z), where D(z) is the linear growth factor. S ≡σ^2(m) is the variance of the density fluctuation which we evaluated as that at z=0, where we apply the sharp-k filter [The filter is suitable for power spectra with a steep cutoff <cit.>.]. The mass fraction contained in smaller halos which collapse at redshift z_2, where δ^ha(z_2) is written δ^ha_2, with corresponding mass scale S_2, characterized by (S_2, δ_2^ha) in their host halo (S_1, δ_1^ha) of z_1<z_2 is given by f(S_2, δ_2^ha|S_1, δ_1^ha)dS_2 = 1/√(2π)δ_2^ha-δ_1^ha/(S_2-S_1)^3/2 ×exp[-(δ_2^ha-δ_1^ha)^2/2(S_2-S_1)]dS_2. With this expression, one can compute the number of subhalos that accreted on their host with masses between m_a and m_a+dm_a between the redshifts z_a and z_a + dz_a i.e. d^2N_ sh/(dm_ad z_a). In our computation, instead of the simplest form of Eq. (<ref>) based on the spherical collapse model, we adopt the model III of Ref. <cit.>, which better fits numerical simulation results by adding a condition for main branch halos. Besides (m_a, z_a), the density profile of a subhalo at accretion is also characterized by its concentration parameter at the epoch, c_a. Assuming the Navarro-Frenk-White (NFW) profile <cit.> ρ(r)=ρ_s(r/r_s)^-1(1+r/r_s)^-2, the concentration parameter is defined as c_a=r_a/r_s where we denote the virial radius at accretion as r_a. The two parameters characterizing the profile, (ρ_s, r_s) are tied to the maximum circular velocity and the corresponding radius (V_ max, r_ max) as ρ_s=(4.625/4π G)(V_ max/r_s)^2, r_s=r_ max/2.163. We adopt the concentration-mass relation of Ref. <cit.> for the following calculation. Once a smaller halo has accreted onto a larger one and become the subhalo, it will start losing its mass through the tidal force exerted by the host. This process is described by the following differential equation (e.g. Ref. <cit.>) dm/dt = -g m/τ_ dyn(m/M(z))^η, where M(z) and τ_ dyn are the mass and dynamical time scale of the host halo, respectively, and g and η are parameters which depends on the host halo mass and redshift. We take values determined by fitting the tidal mass-loss rate with a single power law function <cit.> which agrees with the numerical results <cit.>. We obtain the accretion history of the host M(z) (also relevant for S_1 in Eq. (<ref>)) through the EPS theory <cit.>, which depends on the peak parameters (A, k_p, n_b) through S_1. We incorporate the change of the density profile parameters (r_s, ρ_s) from the evolution of V_ max and r_ max following the tidal mass loss of subhalos, taking the relationship derived in Ref. <cit.> which is suitable for the NFW profile, i.e., inner slope proportional to r^-1. 
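Before turning to the results, the chain from an enhanced peak amplitude to a present-day PBH abundance described in the previous sections can be condensed into a short numerical sketch. The snippet below (assuming numpy/scipy) uses the fiducial values ζ_c = 0.45, K = 2 and γ = 0.8 quoted above and, as a simplification, evaluates the formation fraction only at the peak scale k_p (where P_ζ ≈ A) instead of integrating over the full mass spectrum; it is illustrative only and is not the pipeline used for the figures.

```python
import numpy as np
from scipy.special import erfc

zeta_c, K, gamma_c = 0.45, 2.0, 0.8      # fiducial values quoted in the text

def seed_mass(k):
    """PBH seed mass in solar masses for a fluctuation of wavenumber k [Mpc^-1]."""
    return 20.0 * gamma_c * (k / 1e6) ** (-2)

def beta(P, stats="G"):
    """Formation fraction for Gaussian (G), chi^2 and cubic-Gaussian (G^3) statistics."""
    if stats == "G":
        return erfc(zeta_c / np.sqrt(2.0 * P / K))
    if stats == "chi2":
        return erfc(np.sqrt(0.5 + zeta_c / np.sqrt(2.0 * P / K)))
    if stats == "G3":
        return erfc((zeta_c / np.sqrt(8.0 * P / (15.0 * K))) ** (1.0 / 3.0))
    raise ValueError(stats)

def omega_pbh(A, k_p, acc=1.0, stats="G"):
    """Present-day PBH density parameter, with beta evaluated at the peak scale only."""
    return 2e8 * np.sqrt(gamma_c) * acc * np.sqrt(1.0 / seed_mass(k_p)) * beta(A, stats)

# Example: peak at k_p = 100 Mpc^-1 (seed mass ~ 1.6e9 Msun) with amplitude A = 7e-3.
# Gaussian statistics then give Omega_PBH,0 of order the 1e-10 benchmark, while the same
# amplitude with chi^2 or G^3 statistics vastly overproduces PBHs -- equivalently, the
# non-Gaussian cases reach the benchmark with a much smaller amplitude.
for stats in ("G", "chi2", "G3"):
    print(stats, f"M_seed = {seed_mass(100.0):.1e} Msun,",
          f"Omega_PBH,0 = {omega_pbh(7e-3, 100.0, stats=stats):.1e}")
```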
§ RESULTS AND DISCUSSION In this section, we explore the constraints on curvature perturbations that can source the SMBH seeds from current observations and prospects, considering DM substructure probes of the number count of satellite galaxies as well as stellar stream and lensing observations. As we will see below, these show complementary performance to CMB μ-type distortion measurements which were investigated in previous works <cit.>. In Fig. <ref>, the prediction of the subhalo mass function with several values of A are plotted in different lines. The left (right) panel corresponds to the case of n_b=4 (n_b=8). We fix the parameter k_p = 32 h Mpc^-1 where h=0.6736 <cit.>. The figure clearly shows that the subhalo mass function within the vicinity of the Milky Way is affected by the injection of power on this characteristic scale. For the same wavenumber, the maximum in the subhalo mass function shifts to higher masses as the amplitude of the power enhancement increases. The number of smaller subhalos, on the other hand, is suppressed because halos corresponding to wavenumbers of the power spectrum bump collapse so early that smaller substructures cannot form as they will be embedded inside these larger halos. §.§ Constraints using satellite counts For a given parameter set (A, k_p, n_b), we predict the number of subhalos that host dwarf satellite galaxies in them and compare this with observations. We assume that each subhalo above a certain mass hosts a galaxy. This enables us to avoid the uncertainty in galaxy formation conditions. We adopt the number of observed satellite galaxies in the Milky Way as a lower limit. Since implementing galaxy formation conditions will reduce the number of expected satellites compared with that of subhalos, our approach is conservative. The Dark Energy Survey and PanSTARRS1 survey identified 94 satellites with the kinematic data within the virial volume of the Milky Way after imposing the completeness correction <cit.>. The satellite galaxy with the smallest velocity dispersion, σ_V, is Leo V with σ_V = 2.3 km s^-1 <cit.>. We take this value as the minimum for subhalos to host galaxies, and scan the parameter region where the number of subhalos satisfying the V_ max^ Leo I = √(3)σ_V = 4 km s^-1 is larger than 94. §.§ Constraints using gravitational lensing and stellar stream data Stellar stream observations and gravitational lensing are pure probes of DM subhalos that induce gravitational perturbations and can provide indications to subhalos without baryonic counterparts. The existence of DM subhalos in the host will perturb the image of lensed galaxies behind the system, while passing of DM subhalos through stellar streams will perturb the distribution of stars and creates gaps in the streams. Through these measurements, one can estimate the number of subhalos in a given mass range. The observations of stellar streams and gravitational lensing are sensitive to the halos in the range of O(10^6–10^9) M_⊙. Each point with error bar in Fig. <ref> denotes the subhalo mass function within 300 kpc from the Milky Way halo center, which was derived in Refs. <cit.>. We interpret the points with downward arrows as limits on N_ sh≠ 0 due to the non-detection of stream perturbers or lensing at that mass at the 2σ confidence level. We performed chi-square analysis with the above data as χ^2(k_p,A,n_b) = ∑_i [N_i - N_ th(m_i|k_p,A,n_b)]^2/σ_i^2, where m_i, N_i, and σ_i are the i-th mass bin, mass function data value and its 1σ error shown in Fig. <ref>. 
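The 95% confidence-level requirement Δχ²(k_p,A) > 5.99 imposed below is simply the 95th percentile of a χ² distribution with two degrees of freedom, one per scanned parameter (k_p and A); a one-line check, assuming scipy:

```python
from scipy.stats import chi2
print(chi2.ppf(0.95, df=2))   # ~5.99, the Delta chi^2 threshold for a two-parameter scan
```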
For simplicity, we assume that the probability distribution follows a Gaussian while it is not constrained by current observations. N_ th(m_i|k_p,A,n_b) in Eq. (<ref>) is the theoretical prediction for subhalo number at m_i with the given parameters (k_p, A, n_b). The parameter n_b is fixed in our analysis. We then evaluate the excluded region on the (k_p,A) plane at the 95% confidence level by requiring Δχ^2(k_p,A) ≡χ^2(k_p,A)-χ^2_ min > 5.99. In the example shown in Fig. <ref>, we see that the model with e.g. A = 10^-2 predicts too small numbers of subhalos in the mass range where lensing and stellar stream observations are sensitive, hence it can be excluded. We will also discuss future projections in the following sections. §.§ Sensitivity to SMBH seeds Sensitivity to the possible SMBH seeds from current observations are summarized in Fig. <ref>. The left (right) panel shows the case which assumes n_b=4 (n_b=8). In each panel, we plot the excluded region from satellite number counts with light purple. The constraints obtained from lensing and stellar stream observations, which are shown in dark purple, surpass those from satellite number counts. By combining the lensing and stellar stream observations, the sensitivity improves by a factor of ≃ 10. If the parameters (A, k_p) are above these lines, our model yields fewer than 94 subhalos with V_ max > 4 km s^-1, hence an upper limit at the 95% confidence level is obtained. The sensitivity with subhalo number counts reaches as small as P_ζ∼3×10^-7 at k_p∼ O(10) Mpc^-1. We also plot the excluded region from CMB μ-distortions in the same figure with gray-solid lines. The best current constraints on spectral distortions come from the COBE/FIRAS instrument, which finds |μ|≲ 9× 10^-5 at the 95% confidence level <cit.>. The CMB μ-distortion is sensitive to the PBHs within the mass spectrum of O(10^4–10^12) M_⊙. The mass range corresponds to the horizon mass associated with the enhanced primordial curvature perturbations that coincide with comoving scales entering the horizon between z ≃ 10^6 and recombination. The interaction of the photon-baryon fluid through the Compton scattering erases the signature of the acoustic dumping of the scalar fluctuations that enter the horizon at z≳ 10^6. The μ-distortion is calculated as <cit.> ⟨μ⟩≃ 2.3 ∫_k_0^∞dk/k P_ζ( k ) W(k), where P_ζ(k) is convolved with a window function W(k) = exp( - [ k̂/1360]^2/1+ [ k̂/260]^0.3 + k̂/340) - exp( - [ k̂/32]^2 ). Here we denote k̂ = k / Mpc and k̂_0 =1. Current CMB μ-distortion bounds are sensitive to the amplitude down to P_ζ∼3×10^-3. The probe is complementary to those DM substructures, which can be more sensitive in the mass range of M_ PBH^ seed≳ O(10^9)M_⊙. In each panel, the amplitude of the primordial fluctuation which produces the current SMBH energy density of Ω_SMBH=10^-10 assuming three different statistics, β_ G, β_χ^2 and β_ G^3 <cit.> are shown with light cyan, dark cyan, and blue, respectively. Depending on the accretion efficiency A, the amplitude satisfying the SMBH abundance can be in between the upper (A=1) or lower (A=10^5) boundaries of the bands. The sensitivity with DM structure formation is sufficient to probe non-Gaussian statistics of χ^2 and G^3 distributions at scales k_p≲ O(10^2) Mpc^-1. For n_b=8, the accessible region gets slightly narrower, however, up to k_p∼80 Mpc^-1 can still be probed. These regions correspond to PBH seed masses ≳ 10^9M_⊙. 
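As an illustration of how the μ-distortion comparison works in practice, the ⟨μ⟩ integral quoted above is straightforward to evaluate numerically. The sketch below (assuming numpy) replaces the n_b-dependent spectrum used in the actual analysis with a hypothetical flat-topped bump of amplitude A spanning one e-fold below k_p, so the numbers are indicative only.

```python
import numpy as np

def window(k):
    """Window function W(k) as written above; k in units of Mpc^-1."""
    return (np.exp(-(k / 1360.0) ** 2 / (1.0 + (k / 260.0) ** 0.3 + k / 340.0))
            - np.exp(-(k / 32.0) ** 2))

def mu_distortion(P_of_k, k_min=1.0, k_max=1e6, n=20000):
    """<mu> ~= 2.3 * integral dk/k P_zeta(k) W(k), evaluated on a log-k grid."""
    lnk = np.linspace(np.log(k_min), np.log(k_max), n)
    k = np.exp(lnk)
    return 2.3 * np.sum(P_of_k(k) * window(k)) * (lnk[1] - lnk[0])

# Hypothetical example: flat-topped bump of amplitude A = 1e-2 over one e-fold below k_p = 1e3
A, k_p = 1e-2, 1.0e3
P = lambda k: np.where((k >= k_p / np.e) & (k <= k_p), A, 0.0)
print(mu_distortion(P))   # roughly 2e-2, far above the COBE/FIRAS limit |mu| < 9e-5
```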
From the measurements of the CMB μ-distortion by COBE/FIRAS, the Gaussian statistics case over scales of 10^1≲ k_p ·Mpc≲ 3× 10^4 for both n_b are already excluded. For a narrower range of 10^2≲ k_p·Mpc≲ 10^4, COBE/FIRAS observations exclude the Gaussian case if n_b=4 while it does not for n_b=8. In all cases, the most non-Gaussian (G^3) case is not yet explored by μ-distortion measurements. §.§ Prospects for future observations Upcoming surveys such as the Rubin Observatory Legacy Survey of Space and Time (LSST) <cit.> can extend these constraints to much smaller scales. In Fig. <ref>, we showcase examples assuming N_ sh(A,k_p,n_b)/N_ sh(A=0)< 2 for four subhalo mass ranges, 10^5–10^6, 10^6–10^7, 10^7–10^8, and 10^8–10^9 M_⊙ with light to dark magenta. Here we parametrize the abundance ratio by taking the ratio of the abundance to the case with A=0, i.e. without an enhancement in the power spectrum. Using this approach, we highlight that there is strong potential to explore the PBH origin of the SMBH by probing its mass down to ≃ O(10^5) M_⊙ for n_b=4 and ≃ O(10^7)M_⊙ for n_b=8. SMBH seeds of mass 10^5–10^6 M_⊙ would naturally explain the observed black holes at high redshifts by JWST. In this approach, we can probe power spectrum amplitudes down to ≃ O(10^-7) for a certain range of scales. The constraints can be comparably strong to those expected with future μ-distortion measurements by PIXIE <cit.>, which are shown with dashed lines in the figure. Also for n_b=8, the DM substructure probe can be advantageous relative to to μ-distortion measurements. We also remark in Fig. <ref> the relative strength of the current DM substructure constraints for small k_p to future μ-distortion probes with PIXIE. § CONCLUSION In this work, we have studied features in the primordial curvature perturbation power spectrum that give rise to over-densities which collapse to form PBHs large enough to serve as progenitors for SMBHs. As previously discussed in Ref. <cit.>, PBH progenitor scenarios for SMBH seeds are in conflict with CMB μ-distortion observations if a Gaussian distribution is assumed for the primordial curvature perturbations. If primordial curvature perturbations follow non-Gaussian distributions, such constraints can be evaded hence increased sensitivities from future observations are required to test such models. In such situations, the halo mass function of subhalos around the Milky Way would also be altered, leading to different numbers of satellite galaxies and smaller dark subhalos. We investigated the region of parameter space which can be constrained from current observations on the number counts of the satellite galaxies and dark subhalo estimates with stellar stream and lensing observations. The DM substructure analysis excludes large regions of amplitude for peak wavenumbers less than O(100) Mpc^-1, corresponding to PBH seed masses greater than ≃ O(10^9) M_⊙, which cannot be probed with current CMB μ-distortion measurements. Potential for future observations to extend the probable range with the scheme of this article is also discussed. We highlight the possibility of probing the PBH origin of SMBHs down to masses of approximately O(10^5) M_⊙ and power spectrum amplitudes as low as O(10^-7) over certain scales. We have shown that DM substructure bounds are comparably powerful to future CMB μ-distortion measurments for probing the enhanced curvature perturbations. Recent works indicate that the second-order gravitational waves (e.g. Ref. <cit.>) and Lyman-α data (e.g. Ref. 
<cit.>) could also be important for probing certain models which predicts enhanced curvature perturbations. Those probes may have comparable sensitivity in some regions to our work, however, we remark that the prospects shown here with DM substructures are conservatively obtained. If we include propagation of non-Gaussian effects in our DM substructure analysis, we expect it could be even more constraining. § ACKNOWLEDGEMENTS SB and MF are supported by the STFC under grant ST/X000753/1. SA and MF are extremely grateful for support via the Royal Society International Exchange project “Probing particle nature of dark matter using small-scale distribution” which made this work possible. The work of SA was also supported by MEXT KAKENHI under grant numbers JP20H05850, JP20H05861, and JP24K07039. The work of NH was partly supported by MEXT KAKENHI Grant Number 20H05852, 22K14035, and MEXT Leading Initiative for Excellent Young Researchers Grant Number 2023L0013. The work of KI was supported by JSPS KAKENHI Grant Number JP20H01894 and JSPS Core-to-Core Program Grant No. JPJSCCA20200002. apsrev4-1
http://arxiv.org/abs/2408.12428v1
20240822142205
VR4UrbanDev: An Immersive Virtual Reality Experience for Energy Data Visualization
[ "Saeed Safikhani", "Georg Arbesser-Rastburg", "Anna Schreuer", "Jürgen Suschek-Berger", "Hermann Edtmayer", "Johanna Pirker" ]
cs.HC
[ "cs.HC" ]
Graz University of Technology Graz Austria s.safikhani@tugraz.at Graz University of Technology Graz Austria Graz University of Technology Graz Austria Ludwig-Maximilians-Universität München Munich Germany § ABSTRACT In this demonstration paper, we present our interactive virtual reality (VR) experience, which has been designed to facilitate interaction with energy-related information. This experience consists of two main modes: the world in miniature for large-scale and first-person for real-world scale visualizations. Additionally, we presented our approach to potential target groups in interviews. The results of these interviews can help developers for future implementation considering the requirements of each group. VR4UrbanDev: An Immersive Virtual Reality Experience for Energy Data Visualization Johanna Pirker August 26, 2024 ================================================================================== § INTRODUCTION The rise in popularity of consumer-level head-mounted displays (HMD) is attracting interest in immersive virtual reality (VR) applications from a variety of entertainment and research fields. Building Information Modeling (BIM) holds significant promise in architecture, engineering, and construction (AEC) <cit.>. Previous studies showed that VR can be beneficial for education/training <cit.>, design/data exchange <cit.>, and management/collaboration <cit.>. The current trend in engineering is moving towards considering energy efficiency in every design. While VR has been utilized in various applications in AEC, we have not yet seen a comprehensive implementation of VR for energy data visualization and interactions. In this work, we present a VR application that allows for demonstrating different energy-related information, using either real-time or historical data. We used data gathered from the university campus as proof of concept for our application. However, this application can be easily expanded to a much larger area. Our campus has integrated various IoT systems to collect energy-related data, including electrical energy and energy used for heating and cooling. Leveraging this data, we developed an initial prototype for VR-enabled data visualization and data-driven planning. We then interviewed different user groups to gather preliminary feedback and identify areas for improvement and further application of our approach. § TECHNICAL DESCRIPTION We utilized Unreal Engine (UE) 5.3 for creating our VR interaction logic, visuals, and data transfer. UE provides us with realistic rendering features and a comprehensive set of tools, enabling us to develop different interactions in a pleasant visual interface. As the project required us to include large-scale map information, we integrated the Cesium plugin for UE. This plugin allows for integrating real-world maps based on sources such as Microsoft Bing and Google Maps. We utilized the OpenXR plugin for seamless VR communication between the HMD and UE to extend the range of supported devices. As a result, the current version of the project supports devices such as SteamVR, Meta Quest, and Windows Mixed Reality. § VIRTUAL ENVIRONMENT Our VR application includes two main modes: The world in miniature mode: is set in a minimalist room and is designed to provide a large-scale visualization and interaction with the map (Figure <ref>-top). We used a circular desk in this room as the main interaction area. 
In the center of the room, there is a circular desk called the "interaction desk" that serves as the main interaction point for the map. Users can scale and pan the map using this desk. The interaction desk is divided into two layers: the top layer for the map and the bottom layer for gadgets and visualization mode selection. Users can rotate these layers separately by grabbing and moving the edge of each layer. The buildings on the campus map can be interacted with by hovering over them and touching them. Doing so will open up a detailed model of the building, with the option to access additional information by pressing the information button. Depending on the current visualization mode and gadget, certain information will be displayed on the walls of the room (Figure <ref>-bottom). The bottom layer consists of two areas: gadgets and visualization modes (Figure <ref>-top). The gadgets area is a placeholder for different gadgets that can be used for various interactions with the map. For example, users can place a measuring gadget on the map to measure the distance between two points. Another available gadget is world-teleportation. By placing this gadget on the map, users can be transferred to that location in first-person mode. The visualization mode area contains several interactive buttons for switching or adding visualizations to the map. For instance, users can activate the isolation mode to highlight the campus area or use the point of interest mode to display important locations on the map. The first-person mode: is specifically designed to offer users a real-world scale experience. To access this mode, users need to use a dedicated gadget in the miniature mode. Once in the mode, users can view different energy-related information for each building in a specific area. As moving around in real-world scale can be time-consuming, we will provide users with a local map option to help them teleport quickly to a specific location on the map. § USER INTERFACE AND INTERACTIONS Through our previous experiences in developing VR interactions, we learned that users prefer integrated interactions over a copy of traditional desktop interactions in VR. However, using too many 3D interactions with different functionalities can be challenging for new users to learn and cumbersome to use in the long term. To address this, we decided to simplify the design of interactions and make them consistent throughout the entire experience. Our main interaction is grabbing objects and manipulating them accordingly, along with pressing physically-behaved buttons. The grab interaction is generalized throughout the entire experience and can be performed with one or two hands for different applications. Additionally, physical buttons behave similarly to real-world buttons, allowing users to immediately understand what to expect from them. We considered two different types of locomotion in addition to physical movement, namely teleportation and smooth movement. The smooth movement system is similar to conventional game locomotion using joysticks which provides a continuous experience of movement but might result in motion sickness. On the other hand, teleportation has less chance of causing motion sickness, but it might lead to disorientation. Users can freely choose their preferred mode based on their sensitivity to motion sickness in both modes. This project will be expanded with additional features such as different data visualization modes and building manipulation. 
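As a purely illustrative companion to the description above, the following minimal Python sketch models the state that the interaction desk has to track (map scale and pan, independently rotating layers, and the set of active visualization modes). All class and field names here are hypothetical; the actual demo is implemented in Unreal Engine 5.3 and is not reproduced by this sketch.

from dataclasses import dataclass, field

@dataclass
class InteractionDesk:
    scale: float = 1.0                               # map zoom factor
    pan: tuple = (0.0, 0.0)                          # map offset
    layer_angle: dict = field(default_factory=lambda: {"map": 0.0, "gadgets": 0.0})
    active_modes: set = field(default_factory=set)   # e.g. {"isolation", "points_of_interest"}

    def zoom(self, factor: float) -> None:
        self.scale = max(0.01, self.scale * factor)

    def pan_map(self, dx: float, dy: float) -> None:
        x, y = self.pan
        self.pan = (x + dx / self.scale, y + dy / self.scale)

    def rotate_layer(self, layer: str, degrees: float) -> None:
        self.layer_angle[layer] = (self.layer_angle[layer] + degrees) % 360.0

    def toggle_mode(self, mode: str) -> None:
        self.active_modes ^= {mode}                  # switch a visualization mode on or off

desk = InteractionDesk()
desk.zoom(2.0)
desk.pan_map(10.0, -4.0)
desk.rotate_layer("gadgets", 45.0)
desk.toggle_mode("isolation")
print(desk)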
§ INTERVIEW WITH EXPERTS To evaluate our approach, we conducted interviews with stakeholder groups that may use the environment for their specific tasks:
* Energy researchers: interactive data visualization for research
* Planners: visualizing design decisions and external collaboration
* Facility managers: monitoring hidden infrastructures
We carried out a total of 10 interviews, each structured into three parts. The first part covered the interviewee's job, their use of software tools for visualizing energy data, and prior VR experience. The second part involved testing our VR experience. Finally, participants provided feedback on the experience and its usability, discussing potential integration into their workflow. Initial results indicate that participants found the tool user-friendly and appreciated the virtual environment, though some struggled to see how it could provide added value to their daily work practices. Potential applications ranged from high-level planning, such as outdoor area design, to detailed analysis and fault detection, like visualizing indoor sensor positions and comparing various data types to ensure system functionality. We will incorporate the feedback and ideas from the analysis of these interviews into our project. This will include simplified and streamlined visualizations, improvements to the interaction and locomotion systems, as well as visualizing the influence of design decisions. In the future, we plan to conduct another round of interviews with more participants to further evaluate our approach. This work was supported by the Austrian Research Promotion Agency (FFG) program City of Tomorrow, project no. FO999893555.
http://arxiv.org/abs/2408.11732v1
20240821155622
K2-399 b is not a planet. The Saturn that wandered through the Neptune desert is actually a hierarchical eclipsing binary
[ "J. Lillo-Box", "D. W. Latham", "K. A. Collins", "D. J. Armstrong", "D. Gandolfi", "E. L. N. Jensen", "A. Castro-González", "O. Balsalobre-Ruza", "B. Montesinos", "S. G. Sousa", "J. Aceituno", "R. P. Schwarz", "N. Narita", "A. Fukui", "J. Cabrera", "A. Hadjigeorghiou", "M. Kuzuhara", "T. Hirano", "M. Fridlund", "A. P. Hatzes", "O. Barragán", "N. M. Batalha" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR" ]
http://arxiv.org/abs/2408.12427v1
20240822142102
BKL bounces outside homogeneity: Gowdy symmetric spacetimes
[ "Warren Li" ]
gr-qc
[ "gr-qc", "math.AP" ]
A Riemannian Approach for Spatiotemporal Analysis and Generation of 4D Tree-shaped Structures Tahmina Khanam10000-0002-9549-2700 Hamid Laga10000-0002-4758-7510 Mohammed Bennamoun20000-0002-6603-3257 Guanjin Wang10000-0002-5258-0532 Ferdous Sohel10000-0003-1557-4907 Farid Boussaid20000-0001-7250-7407 Guan Wang30000-0003-3029-0996 Anuj Srivastava40000-0001-7406-0338 August 26, 2024 ==================================================================================================================================================================================================================================================================================== § ABSTRACT We study the phenomenon of bounces, as predicted by Belinski Khalatnikov and Lifshitz (BKL), as an instability mechanism within the setting of the Einstein vacuum equations in Gowdy symmetry. In particular, for a wide class of inhomogeneous initial data we prove that the dynamics near the t = 0 singularity are well–described by ODEs reminiscent of Kasner bounces. Unlike previous works regarding bounces, our spacetimes are not necessarily spatially homogeneous, and a crucial step is proving so-called asymptotically velocity term dominated (AVTD) behaviour, even in the presence of nonlinear BKL bounces and other phenomena such as spikes. (A similar phenomenon involving bounces and AVTD behaviour, though not spikes, can also be seen in our companion paper <cit.>, albeit in the context of the Einstein–Maxwell–scalar field model in surface symmetry.) One particular application is the study of (past) instability of certain polarized Gowdy spacetimes, including some Kasner spacetimes. Perturbations of such spacetimes are such that the singularity persists, but the intermediate dynamics – between initial data and the singularity – feature BKL bounces. § INTRODUCTION §.§ The Einstein equations in Gowdy symmetry In this article, we study the structure and (in)stability properties of spacelike singularities arising in solutions of the 1+3-dimensional Einstein vacuum equations. The Einstein vacuum equations are given by the following geometric equation for the Ricci curvature of a Lorentzian metric 𝐠 on a manifold ℳ^1+3: 𝐑𝐢𝐜_μν[𝐠] = 0, . More specifically, we study phenomena related to spacelike singularities in the context of 𝕋^3 Gowdy symmetric solutions to the Einstein equations. A simple characterization of 𝕋^3 Gowdy symmetry[ A more geometric characterization is a spacetime with 𝕋^3 spatial topology and containing two spacetlike Killing vector fields, here given by ∂/∂σ and ∂/∂δ, and whose associated twist constants vanish. See the original paper of Gowdy <cit.>, or <cit.>. ] is a spacetime (ℳ, 𝐠) such that there exists a global coordinate system (t, θ, σ, δ) ∈ (0, + ∞) × (𝕊^1)^3 such that the spacetime metric 𝐠 may be written as: 𝐠 = - t^-1/2e^λ/2 ( - dt^2 + d θ^2) + t [ e^P (d σ + Q d δ)^2 + e^-Pd δ^2], where the functions P, Q and λ depend only on the variables t ∈ (0, + ∞) and θ∈𝕊^1. In the sequel, we view 𝕊^1 as the closed interval [- π, π] with endpoints identified. For a Gowdy symmetric spacetime with metric 𝐠 in the gauge of (<ref>), the Einstein vacuum equations (<ref>) may be written as the following system of PDEs for the variables (P, Q): (t ∂_t)^2 P - (t ∂_θ)^2 P = e^2P (t ∂_t Q)^2 - e^2P (t ∂_θ Q)^2, (t ∂_t)^2 Q - (t ∂_θ)^2 Q = - 2 (t ∂_t P) (t ∂_t Q) + 2 (t ∂_θ P) (t ∂_θ Q) together with two equations for λ. 
t ∂_t λ = (t ∂_t P)^2 + (t ∂_θ P)^2 + e^2P (t ∂_t Q)^2 + e^2P (t ∂_θ Q)^2, ∂_θλ = 2 ( t ∂_t P ∂_θ P + e^2P t ∂_t Q ∂_θ Q ). Note that the former equations (<ref>) and (<ref>) decouple from the latter two equations for λ, and in studying the evolutionary problem one usually solves for P and Q using (<ref>) and (<ref>), then afterwards one integrates (<ref>) to solve for λ. The final equation (<ref>) is a constraint that is propagated by the remaining equations, as long as it is satisfied at some initial time. A further remarkable feature of 𝕋^3 Gowdy symmetric spacetimes is that the equations (<ref>) and (<ref>) have a hidden geometric structure. It turns out these two equations are exactly a system of wave maps (see <cit.> for a general definition of wave maps) between a domain manifold _> 0×𝕋^2 with metric g_0 = - dt^2 + d θ^2 + t^2 d χ^2, and a target manifold ^2 with metric g_R = dP^2 + e^2P dQ^2. More precisely, the equations (<ref>)–(<ref>) describe wave maps from (_>0×𝕋^2, g_0) to (^2, g_R) which are independent of χ. Such wave maps conserve the energy functional: ℰ(t) 1/2∫_𝕊^1[ (∂_t P)^2(t, θ) + (∂_θ P)^2(t, θ) + e^2P (∂_t Q)^2 (t, θ) + e^2P (∂_θ Q)^2(t, θ)] d θ. Note that (^2, g_R) is isometric to hyperbolic space. Indeed, the change of variables x = Q, y = e^-P puts g_R into the familiar upper half plane model g_R = y^-2(dx^2 + dy^2). See <cit.> and references therein for further details regarding the wave map structure. A consequence of conservation of energy (<ref>) is that solutions to the Gowdy symmetric system (<ref>)–(<ref>) arising from regular initial data will persist and remain regular in the entire interval t ∈ (0, + ∞). As a result, 𝕋^3 Gowdy symmetric spacetimes are a popular model for studying the Strong Cosmic Censorship conjecture, with the strategy often to show geodesic completeness in the t → + ∞ direction[Note that geodesic completeness is not necessary to show future inextendibility, see <cit.>.] and singularity formation in the t → 0 direction. The first major breakthrough regarding Strong Cosmic Censorship came in studying the subclass of polarized Gowdy spacetimes, defined as follows: A polarized Gowdy spacetime is a 𝕋^3 Gowdy symmetric solution to the Einstein vacuum equation with Q ≡ 0 everywhere. In particular P(t, θ) solves the linear hyperbolic PDE: (t ∂_t)^2 P - (t ∂_θ)^2 P = 0. By studying (<ref>), in <cit.> the authors prove Strong Cosmic Censorship in the polarized Gowdy class by showing that solutions are geodesically complete in the t →∞ solution and that generically the spacetime metric 𝐠 exhibits curvature blow-up in the t → 0 direction. The essential step, particularly in the context of t → 0, is understanding asymptotics for P, as we explain in Section <ref>. In a series of influential works, Ringström <cit.> then extended the proof of Strong Cosmic Censorship to the full class of Gowdy symmetric spacetimes. Ringström's strategy was again to prove geodesic completeness in the t →∞ direction, and that generically one can show asymptotics for P as t → 0 that imply curvature blow-up to the past. As we see later, the asymptotics are more complicated than in the polarized case, and Ringström importantly allows for a finite number of mild irregularities in the asymptotic profile for P known as “spikes”. Such spike-containing solutions were constructed earlier in <cit.>. 
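Before turning to the asymptotics, we record a quick symbolic check of the isometry with hyperbolic space noted earlier in this subsection. The following sympy sketch (purely illustrative; the differentials dP, dQ, dx, dy are treated as formal symbols) confirms that the substitution x = Q, y = e^-P pulls the upper half-plane metric y^-2(dx^2 + dy^2) back to the wave-map target metric g_R = dP^2 + e^2P dQ^2.

import sympy as sp

P, Q, dP, dQ = sp.symbols('P Q dP dQ')
x, y = Q, sp.exp(-P)                      # proposed change of variables
dx, dy = dQ, -sp.exp(-P) * dP             # the corresponding formal differentials
pullback = (dx**2 + dy**2) / y**2         # upper half-plane metric y^{-2}(dx^2 + dy^2)
target = dP**2 + sp.exp(2 * P) * dQ**2    # the wave-map target metric g_R
print(sp.simplify(pullback - target))     # prints 0, i.e. the two metrics agree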
§.§ Asymptotics towards the singularity As mentioned above, solutions to the Gowdy symmetric system (<ref>)–(<ref>) arising from regular initial data exist globally in time t ∈ (0, + ∞). Armed with this global existence result, a reasonable goal is to understand in furher detail the behaviour of solutions as one approaches the spacelike singularity at t = 0. For instance, one would like to understand the asymptotic behaviour of the functions P(t, θ) and Q(t, θ) in the limit t → 0. By considering a (Fuchsian) series expansion at t = 0, i.e. an asymptotic expansion whose terms are powers of t with coefficients depending on θ, one could conjecture that, e.g. from <cit.>: P(t, θ) = - V(θ) log t + Φ(θ) + o(1), Q(t, θ) = q(θ) + t^2V(θ)[ Ψ(θ) + o(1) ], where V, Φ, q, Ψ: 𝕊^1 → are regular functions of θ. Upon substituting into the equations (<ref>)–(<ref>), one sees that these expansions are self-consistent, at least under the assumption that for each θ∈𝕊^1, one of the following holds: * The coefficient V(θ) satisfies 0 < V(θ) < 1, or * V(θ) > 0, but ∂_θ q(θ) = 0, or * V(θ) < 1, but Ψ(θ) = 0. Note: in the polarized Gowdy case with Q ≡ 0, any value of V(θ) is permitted. Deriving asymptotics of the form (<ref>)–(<ref>) was a crucial step in Ringström's proof of the Strong Cosmic Censorship for Gowdy symmetric spacetimes <cit.>. In particular, for our applications we shall make use of the following, see <cit.> in the polarized case and <cit.> for the unpolarized case but with with the assumption 0 < V(θ) < 1. Let (P, Q) be a solution to the Gowdy symmetric wave map system (<ref>)–(<ref>), arising from regular initial data (P, Q, t ∂_t P, t ∂_t Q)|_t = t_0∈ (C^k+1)^2 × (C^k)^2 for some k ≥ 1. Then for all θ_0 ∈𝕊^1, there exists some V(θ_0) ∈ such that - t ∂_t P (t, θ_0) → V(θ_0) as t → 0, in other words - t ∂_t P(t, θ) → V(θ) pointwise. Suppose moreover that either Q ≡ 0 i.e. the solution lies in the polarized Gowdy subclass, or we assume a priori that 0 < V(θ) < 1 for all θ∈𝕊^1. Then V(θ) is C^k-1 and moreover there exist C^k-1 functions Φ(θ), q(θ) and r(θ) such that the following limits hold uniformly in the C^k-1 norm as t → 0: - t ∂_t P(t, θ) → V(θ), P(t, θ) + V(θ) log t →Φ(θ), e^2P t ∂_t Q (t, θ) → r(θ), e^2P (Q(t, θ) - q(θ)) →r(θ)/2 V(θ). Note that these imply (<ref>) and (<ref>), given Ψ(θ) = r(θ)/2V(θ) e^-2Φ(θ). Under the same assumptions, if λ(t, θ) solves the remaining equations (<ref>)–(<ref>), then λ(t, θ) = V^2 (θ) log t + L(θ) + o(1), where L(θ) obeys the asymptotic momentum constraint dL = - 2 V · d Φ. In a context where (<ref>)–(<ref>) hold, a straightforward change of coordinates from the gauge (<ref>) implies that the metric behaves in a Kasner-like fashion near the t = 0 singularity – see Section <ref> for a discussion of Kasner-like singularities – and moreover one can describe the Kasner exponents by p_1(θ) = V^2(θ) - 1/V^2(θ) + 3, p_2(θ) = 2 - 2 V(θ)/V^2(θ) + 3, p_3(θ) = 2 + 2 V(θ)/V^2(θ) + 3. These exponents depend on θ, but always satisfy the Kasner relations p_1 + p_2 + p_3 = p_1^2 + p_2^2 + p_3^2 = 1. We remark that in <cit.>, Ringström proves much stronger statements than Theorem <ref>. For instance, he shows that for any θ_0 ∈𝕊^1 with 0< V(θ_0) < 1, then there exists a neighborhood I of θ_0 in 𝕊^1 such that V(θ) is C^k-1 in I ⊂𝕊^1 and moreover the limits (<ref>)–(<ref>) hold when restricted to θ∈ I. Furthermore, it is proven that for generic[Here generic means open and dense in the C^∞-topology on initial data sets (P, Q, t ∂_t P, t ∂_Q)|_t = t_0.] 
smooth initial data, 0 < V(θ) < 1 for all θ∈𝕊^1 ∖ S, where S is a finite set of points known as “spikes”. See the review article <cit.> for details. However, understanding Strong Cosmic Censorship in full generality and the issue of spikes is not the main focus of the present article. Our starting point is instead the following two interrelated questions. Can one characterize a wide class of initial Cauchy data (for Gowdy spacetimes) at t = t_0 > 0, including data which is not at first glance completely compatible with the asymptotics of Theorem <ref> (e.g. data such that 0 < - t ∂_t P(t_0, θ) < 1 does not hold everywhere but Q(t_0, θ) ≢0), such that the corresponding spacetime reaches a t = 0 singularity for which one has the asymptotics (<ref>)–(<ref>) with quantitative estimates depending on initial data[This differs from the approach in <cit.>, where the strategy is more akin to showing that the solutions not achieving the appropriate asymptotics are non-generic in some sense; in particular there is little use of the relationship between the asymptotic information and initial data at some t_0 > 0.]? Moreover, for all such data can one fully understand the intermediate dynamics of the spacetime between the initial data at t = t_0 and the asymptotics at t = 0? These questions are interesting since one expects that at t = 0 itself, the asymptotic quantity V(θ) = lim_t → 0(-t∂_t P(t, θ)) must generically obey 0 < V(θ) < 1, thus there must be some nonlinear instability / transition between t = t_0 and t = 0. Later, we identify this nonlinear instability mechanism as a “bounce” akin to those identified by Belinski, Khalatnikov and Lifshitz in <cit.>, see Section <ref>. In this article, we prove both the existence of such a class of initial data (Theorem <ref>) together with a full quantitative understanding of the intermediate dynamics (Theorem <ref>), thus answering the above questions in the affirmative. Moreover this class is open in the C^∞ topology on initial data, thus our main results will represent a qualitative stability result in the sense that in the regime considered, certain aspects such as curvature blow-up, are stable to perturbation. In answering these questions we also uncover an interesting corollary of Theorem <ref>, which we interpret as an instability result (see Corollary <ref> for a precise statement): consider a polarized Gowdy solution given by (P, Q) = (P, 0) arising from suitably regular initial data. From Theorem <ref>, the quantity V_0(θ) = lim_t → 0 (- t ∂_t P(t, θ)) is allowed to take any value. Now consider a perturbation of (P, 0) to (P̃, Q̃) with Q̃≢0. The instability result then says under a “low-velocity” assumption 0 < V(θ) < 2, the new Ṽ(θ) associated to P̃(t, θ) via (<ref>) will always satisfy 0 < Ṽ≤ 1. In fact, we show that modulo spikes, Ṽ(θ) ≈min{ V(θ), 2 - V(θ) }, quantifying the “bounce” instability exactly. Note that neither the unperturbed solution (P, 0), nor the perturbation, are required to be spatially homogeneous, and this result together with our companion article <cit.> can be considered the first rigorous evidence of BKL-type bounces outside of homogeneity. We refer the reader to <cit.> for a detailed comparison between the two results. (BKL bounces of this sort are much better understood in the spatially homogeneous setting, where the dynamics reduce to a system of finite dimensional ODEs <cit.>.) 
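As a quick sanity check of the exponent formulas quoted above, the following short Python snippet (illustrative only, not part of any argument in this paper) verifies numerically that the exponents p_I(V) always satisfy the Kasner relations, and, for V > 1, lists them alongside the exponents obtained from the bounced velocity 2 - V discussed above.

def kasner_exponents(V):
    d = V**2 + 3.0
    return ((V**2 - 1.0) / d, (2.0 - 2.0 * V) / d, (2.0 + 2.0 * V) / d)

for V in (0.3, 1.2, 1.4, 1.9):
    p = kasner_exponents(V)
    assert abs(sum(p) - 1.0) < 1e-12                  # sum of p_I equals 1
    assert abs(sum(q * q for q in p) - 1.0) < 1e-12   # sum of p_I^2 equals 1
    line = f"V = {V:.1f}: p = {tuple(round(q, 4) for q in p)}"
    if V > 1.0:                                       # here p_2 < 0, so a bounce V -> 2 - V is expected
        line += f", after bounce: {tuple(round(q, 4) for q in kasner_exponents(2.0 - V))}"
    print(line)

Note that p_2(V) is negative precisely when V > 1, which is the regime in which the bounce to 2 - V is expected.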
§.§ Our main theorems in rough form Our two main theorems concern solutions (P, Q) to the Gowdy symmetric system (<ref>)–(<ref>); we no longer consider the remaining equations (<ref>)–(<ref>). Local existence for this system yield that for initial data P_D, Q_D, Ṗ_D, Q̇_D: 𝕊^1 → in a Banach space X, e.g. X = (C^k+1)^2 × (C^k)^2 or X = (H^s+1)^2 × (H^s)^2 for k ≥ 1 or s ≥3/2, and any t_0 > 0 then there exists an interval I ⊂ (0, + ∞) containing t_0 for which (P, Q) solves (<ref>)–(<ref>), and (P, Q, t ∂_t P, t ∂_t Q) ∈ C(I, X), (P, Q, t ∂_t P, t ∂_t Q)|_t = t_0 = (P_D, Q_D, Ṗ_D, Q̇_D). For X as above, conservation of energy (<ref>) implies global existence, i.e. one may take I = (0, + ∞) We now introduce our two main theorems. The first, Theorem <ref> is a stability theorem, and characterizes a subset in the moduli space of initial data such that for (P_D, Q_D, Ṗ_D, Q̇_D) and t_0 > 0 in this subset, certain quantities including -t ∂_t P, e^P t ∂_θ Q and e^P t ∂_t Q remain bounded for t ∈ (0,t_0). The second, Theorem <ref>, referred to as the bounce theorem, characterizes the (nonlinear) dynamics arising from initial data as above, including a relationship between the initial data and the eventual asymptotics of (P, Q) as described in (<ref>)–(<ref>). To precisely define this subset of initial data, we introduce two real-valued parameters 0 < < 1 and > 0. Then the initial data is chosen to satisfy three conditions: * (Weak subcriticality) The functions P_D, Q_D, Ṗ_D, Q̇_D satisfy, for all θ∈𝕊^1: - Ṗ_D > , (1 + Ṗ_D)^2 + (e^P_D t ∂_θ Q_D)^2 < (1 - )^2. * (Energy boundedness) For some N = N() ∈, an Nth order L^2 norm of (P_D, Q_D, Ṗ_D, Q̇_D) is bounded by , see already (<ref>) and (<ref>). * (Closeness to singularity) The time t_0 at which initial data is prescribed is chosen to satisfy 0 < t_0 ≤ t_* for some t_* depending on and . Note in the more precise Theorem <ref> and Theorem <ref> we introduce additional parameters ' and ” which N and t_* could depend on. See more details in Section <ref>. We remark also that taking the union of all such initial data as 0 < < 1 and > 0 vary, one characterises the class of initial data to which our results apply as a subset of the “moduli space of initial data” for the Gowdy system (<ref>)–(<ref>) which is moreover open in the C^∞ topology. We defer further discussion of these conditions to after the statements below. Our first theorem, the stability theorem, is as follows: For 0 < < 1 and > 0, let t_0 > 0 and (P_D, Q_D, Ṗ_D, Q̇_D) be initial data obeying the three conditions above. Then the corresponding solution (P, Q) to the Gowdy symmetric system (<ref>)–(<ref>) is such that for all t ∈ (0, t_0), - t ∂_t P(t, θ) ≥, (1 + t ∂_t P(t, θ))^2 + (e^P t ∂_θ Q(t, θ))^2 ≥ (1 - )^2. Furthermore, there exist functions V, q: 𝕊^1 →ℝ, with V bounded and q continuous, such that - t ∂_t P(t, θ) → V(θ) pointwise and Q(t, θ) → q(θ) uniformly as t → 0. If one further assumes > 1/3, then we have that q is C^1 and the convergence Q(t, θ) → q(θ) holds in the C^1 topology. Our second main theorem concerns the dynamical behaviour of certain quantities along causal curves. To fix notation, let γ: I → (0, +∞) ×𝕊^1 be any causal curve parameterised by its t-coordinate. Using the Gowdy metric (<ref>) and assuming γ to not depend on σ and δ, the causal nature means γ(t) = (t, θ(t)) with |θ'(t)| < 1. Then define: 𝒫_γ(t) - t ∂_t P (γ(t)), 𝒬_γ(t) e^P t ∂_θ Q (γ(t)). 
It is key that our theorem holds uniformly in the choice of causal curve γ, meaning the ODEs for 𝒫_γ and 𝒬_ being independent for different choices of γ with distinct endpoints, at least up to error terms which reflect AVTD behaviour, see Section <ref>. This reflects explicitly that our result is spatially inhomogeneous. Let (P, Q) be a solution to the Gowdy symmetric system (<ref>)–(<ref>) arising from initial data as in Theorem <ref>. Then for γ(t) a timelike curve as above, the quantities 𝒫_γ(t) and 𝒬_γ(t) obey the ODEs: t d/dt𝒫_γ = 𝒬_γ^2 + ℰ_𝒫, t d/dt𝒬_γ = (1 - 𝒫) 𝒬_γ + ℰ_𝒬, where the error terms vanish quickly as t → 0. Furthermore, 𝒬_γ(t) converges to 0 as t → 0, while there exists some 𝒫_γ, ∞ such that 𝒫_γ(t) →𝒫_γ, ∞ as t → 0. In fact, for V as in Theorem <ref>, 𝒫_γ, ∞ = V(θ_0) where θ_0 = lim_t → 0θ(t) is the θ-coordinate of the past endpoint of γ. Finally, we estimate 𝒫_γ, ∞ depending on the value of ∂_θ q(θ_0) = lim_t → 0∂_θ Q(γ(t)): * If either the limit ∂_θ q(θ_0) does not exist or ∂_θ q (θ_0) exists and is nonzero, then necessarily V(θ_0) = 𝒫_γ, ∞≤ 1 and moreover in the case that 𝒬_γ(t_0) is small, 𝒫_γ, ∞≈min{𝒫_γ(t_0), 2 - 𝒫_γ(t_0) } + O (𝒬_γ(t_0)). * Let > 1/2. By Theorem <ref>, ∂_θ q(θ_0) exists. If θ_0 is such that ∂_θ q(θ_0) = 0, then one instead has: 𝒫_γ, ∞≈𝒫_γ(t_0) + O (𝒬_γ(t_0)). One interprets Theorem <ref>(i) as an instability result for the quantity 𝒫_γ in the following sense: consider initial data as in Theorem <ref> such that for some θ∈𝕊^1 one sets - Ṗ_D = - t ∂_t P (t_0, θ) > 1 and 𝒬_γ(t_0) small. Then for timelike curves through (t_0, θ), and in the expected-to-be-generic setting where ∂_θ Q(γ(t)) does not converge to 0, (i) suggests that as t → 0, 𝒫_γ(r) will transition from - Ṗ_D > 1 to (approximately) 2 - Ṗ_D < 1. An application of particular interest concerns not just the instability of 𝒫_γ along γ but actually an instability of the global asymptotics of certain polarized Gowdy spacetimes with Q = 0, with the instability is triggered by unpolarized perturbations. In light of Theorem <ref>, in the original polarized spacetime the function V(θ) always exists and is smooth. To be compatible with Theorem <ref>, suppose that 0 < V(θ) < 2, but now suppose also that V exceeds 1 somewhere. Upon adding perturbations with Q ≠ 0, one expects the asymptotics will change; by considering Theorem <ref>(i) with γ a constant θ-curve, a generic θ will now have 𝒫_γ, ∞ = Ṽ(θ_0) ≈min{ V(θ_0), 2 - V(θ_0) }. An illustration of this is seen in Figure <ref> below (see Corollary <ref> for a more general statement). Here, in the polarized Gowdy spacetime with Q = 0, the asymptotic quantity V(θ) remains close to the (inhomogeneous) value of the initial data quantity - t ∂_t P(t_0, θ). However, in the non-polarized case with Q ≠ 0, in order to be compatible with 0 < V(θ) < 1 we instead see a bounce for most θ∈𝕊^1. Note, however, there may certain choices of θ_0 ∈𝕊^1 with ∂_θ q(θ_0) = 0 and where by (ii)[One might hope that the requirement of > 1/2 can be removed.] instead Ṽ(θ_0) ≈ V(θ_0). If such a θ_0 is an isolated point this means that Ṽ(θ) is discontinuous at θ = θ_0, and θ_0 is a “true spike” as defined in <cit.>. As mentioned in <cit.>, true spikes have previously been understood through a series of transformations (related to inversions and the so-called Gowdy–to–Ernst transformation) – but these previous works do not explain how spikes are related to initial data. Our result is therefore a first step in understanding the dynamical formation of spikes from suitable initial data. 
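The content of the rough bounce theorem can be previewed numerically by integrating the model ODEs with the error terms ℰ_𝒫, ℰ_𝒬 dropped. The sketch below (illustrative only; it plays no role in the proof) integrates t d𝒫/dt = 𝒬^2, t d𝒬/dt = (1 - 𝒫)𝒬 in the variable s = log t toward the singularity, starting from data with 𝒫(t_0) > 1 and 𝒬(t_0) small.

from scipy.integrate import solve_ivp

def rhs(s, y):                       # s = log t, y = (P, Q) standing for (script P, script Q)
    P, Q = y
    return [Q**2, (1.0 - P) * Q]     # t dP/dt = Q^2,  t dQ/dt = (1 - P) Q

P0, Q0 = 1.6, 0.05                   # "unstable" data: P(t0) > 1, Q(t0) small
sol = solve_ivp(rhs, (0.0, -40.0), [P0, Q0], rtol=1e-10, atol=1e-12)  # integrate toward t -> 0
P_inf, Q_inf = sol.y[0, -1], sol.y[1, -1]
print(f"P at t0: {P0},  limiting P: {P_inf:.6f},  2 - P(t0): {2.0 - P0:.4f}")
print(f"limiting Q: {Q_inf:.2e}")    # tends to 0

With these sample values the computed limit of 𝒫 is close to 0.3979, to be compared with 2 - 𝒫(t_0) = 0.4; the small discrepancy is controlled by 𝒬_γ(t_0), in line with the quantitative estimate in the precise version of the theorem below, and 𝒬 indeed decays to 0.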
We return briefly to the three conditions above. Our first condition, weak subcriticality, is chosen such that we include spacetimes where the nonlinear bounces above can occur, but such that these bounces are not too wild enough to make quantities such as 𝒫_γ and 𝒬_γ uncontrollable. The second condition of energy boundedness is natural as our proof relies on L^2 energy estimates at top order. Note that our choice of energy, see Section <ref>, will be compatible with the asymptotics of Theorem <ref>. We will, however, allow the initial energy bound, > 0, to be large, meaning we allow large spatial variation. In this case, our final condition, closeness to singularity, is required to prevent this spatial variation from playing a major role in the dynamics. Note that t_* and play a similar role to that of ζ_1 and ζ_0 in <cit.>. §.§ Relationship to BKL and the Kasner map We put our results in the context of the physics and mathematics literature regarding spacelike singularities arising in solutions of the Einstein equations. While celebrated “singularity theorems” of Penrose <cit.> and Hawking <cit.> asser that “singularities” are a robust prediction of Einstein's equations (<ref>), it is a known shortcoming that these theorems do not prove singularity formation in the sense of blow-up, but instead (causal) geodesic incompleteness, and thus provide no qualitative or quantitative description of spacetimes near their singular, or incomplete, boundaries. Our approach is instead to outline the heuristics of Belinski, Khalatnikov and Lifshitz (often abbreviated to BKL) in their investigation of near-singularity solutions to Einstein's equation <cit.>. <cit.> proposes the following ansatz for singular solutions to the Einstein equations: a spacetime ℳ^1+3 = (0, T) ×Σ^3 with singularity located at {0}×Σ has spacetime metric 𝐠 with leading order expansion: 𝐠 = - d τ^2 + ∑_I=1^3 τ^2 p_I(x)ω^I(x) ⊗ω^I(x) + ⋯. The p_I(x) are functions on Σ known as Kasner exponents, while {ω^I(x)} is a frame of 1-forms on Σ. Naïvely inserting these into (<ref>), BKL determine that the Kasner exponents must satisfy the Kasner relations: ∑_I = 1^3 p_I(x) = 1, ∑_I = 1^3 p_I^2(x) = 1. Note (<ref>) implies that aside from the exceptional case where (p_1, p_2, p_3) = (1, 0, 0) and permutations thereof, exactly one of the p_I(x) must be negative. A consistency check using the Einstein equations suggests that for the ansatz (<ref>) to remain valid all the way to t = 0, the 1-form ω^I associated to the negative exponent p_I must be integrable in the sense of Frobenius, meaning ω^I ∧ d ω^I = 0. On top of the Kasner relations (<ref>), this integrability condition provides another obstruction to the validity of (<ref>)[In the original paper of Khalatnikov and Lifshitz <cit.>, this observation together with a naïve function counting argument led them to believe that singularities are in fact non-generic! However in the later paper <cit.> this observation waas instead understood as an instability; this is the modern viewpoint.]. In <cit.> BKL then give heuristics suggesting what happens if this integrability condition fails. In <cit.>, BKL assume Asymptotically Velocity Term Dominated (AVTD) behaviour, meaning that `spatial derivatives' are subdominant in comparison to ∂_τ-derivatives. This has the effect that dynamics in the future light cones emanating from distinct points (0, p) ∈{ 0 }×Σ on the singularity are decoupled. 
With this assumption they then provide a computation suggesting that in such a light cone, the ansatz (<ref>) is valid for τ≫τ_B for τ_B some critical time, but for τ≪τ_B the metric transitions to a form that again resembles (<ref>) but with Kasner exponents p_I(x) replaced by new exponents ṕ_I(x). Between these regimes, the spacetime undergoes a nonlinear transition from one regime to another, often denoted in the literature as a BKL or Kasner bounce. Typically these nonlinear transitions, or bounces, then cascade indefinitely, giving rise to BKL's chaotic and oscillatory approach to singularity. Within this computation, BKL even propose a formula describing how the Kasner exponents change during the transition: if p_2 < 0 and ω^2 ∧ d ω^2 ≠ 0 then ṕ_1 = p_1 + 2 p_2/1 + 2 p_2, ṕ_2 = -p_2/1 + 2 p_2, ṕ_3 = p_3 + 2 p_2/1 + 2 p_2. To relate this to our results in Gowdy symmetry, we need to apply a change of variables from the areal time coordinate t in (<ref>) to the Gaussian time coordinate τ in (<ref>). Using (<ref>), we expect dτ≈ t^-1/4 e^λ/4 dt, while by (<ref>) we have λ(t, θ) ≈ V^2(θ) log t + O(1). So τ∼ t^V^2 + 3/4. We then formally set ω^1 = dθ, ω^2 = d σ + q(θ) d δ and ω^3 = d δ, where q(θ) = lim_t → 0 Q(t, θ). Through Theorem <ref>, this allows one to make a formal analogy between (<ref>) and (<ref>), with Kasner exponents as in (<ref>). While ω^1 = d θ and ω^3 = d δ are closed and thus integrable in the sense of Frobenius, one checks that ω^2 ∧ d ω^2 = q'(θ) d σ∧ d θ∧ d δ. Hence the consistency check fails if q'(θ) ≠ 0 and p_2 < 0. From (<ref>), p_2 = p_2(θ) < 0 if and only if V(θ) > 1. So one expects V(θ) = 1 to be a threshold between stable and unstable behaviour. Indeed, for 0 < V(θ) < 1 we expect stable[The additional requirement of V(θ) > 0 is more due to our choice of gauge.] self-consistent behaviour as in Theorem <ref>, while the bounce result Theorem <ref> indicates that V(θ) = 1 is a threshold. Furthermore, one may check that via the correspondence (<ref>) and the transition map (<ref>), a bounce exactly corresponds to a transition V(θ) ↦V́(θ) = 2 - V(θ). In the “low-velocity” regime of 0 < V(θ) < 2 that we consider, note that there is at most one Kasner bounce before either V(θ) or V́(θ) lies in the interval (0, 1). We review the heuristics regarding how the bounce map (<ref>) was found. BKL's proposal was that assuming AVTD, then in the future light cone emanating from any point on the singularity the metric is well-approximated by something spatially homogeneous. For spatially homogeneous 𝐠, the Einstein equations (<ref>) reduce to a system of finite dimensional nonlinear autonomous ODEs for a correctly chosen time coordinate. The bounce map (<ref>) then arises from understanding orbits of this ODE system, in particular heteroclinic orbits between its unstable fixed points. In the mathematical literature, progress regarding spacelike singularities and the BKL ansatz has largely regarded the “subcritical” setting, where either the integrability condition ω^I ∧ d ω^I = 0 holds due to symmetry, or due to the addition of matter fields e.g. scalar fields or so-called stiff fluids, which allow more general stable regimes to be found. We mention the breakthrough work of Fournodavlos–Rodnianski–Speck <cit.>, as well as the related <cit.>. There are also related results which involve prescribing the asymptotic data i.e. 
p_I(x), ω^I(x) in (<ref>) and “solving from the singularity” to determine a spacetime achieving this near-singularity ansatz at leading order, see in particular Fournodavlos–Luk <cit.>. Regarding the study of nonlinear bounces, to the best of the author's knowledge all previous work concerns only spatially homogeneous spacetimes, where the Einstein equations reduce an exact system of nonlinear ODEs. See for instance studies of solutions for various Einstein–matter systems in Bianchi symmetry <cit.>, as well as a recent work of the author together with Van de Moortel <cit.>. In particular the current article and our companion article <cit.>, which concerns the Einstein–Maxwell–scalar field model in surface symmetry, are the first works to understand BKL bounces, albeit only a single such bounce, outside of the spatially homogeneous setting. See <cit.> for a thorough introduction to the model discussed there and a detailed comparison of the two papers. Before moving to a sketch of the proof, we propose two conjectures which would go beyond a single BKL bounce, but remains in the 1+1-dimensional setting. The first conjecture still concerns Gowdy symmetry but with multiple bounces. Note that this would mean exiting the low-velocity regime with 0 < V(θ) <2. There exists a large, open class of initial data for the Gowdy symmetric system which exhibit any finite number of BKL bounces. One expects that this would require a different choice of gauge from (<ref>), since the results of this paper suggest that ∂_θ is not a good “spatial derivative” outside of the low-velocity regime; see already Section <ref>. To go beyond a finite number of bounces to a possibly infinite number, one needs to move beyond Gowdy symmetry, and into the realm of more general 𝕋^2-symmetric spacetimes, see <cit.> for definitions. One can describe initial data for the Einstein vacuum equations in 𝕋^2 symmetry which exhibit an infinite number of BKL bounces. The reason to study 𝕋^2 symmetry is that the symmetry assumption no longer enforces that any of the ω^I(x) are integrable in the sense of Frobenius, thus giving the potential for infinite bounces. This is expected to significantly more difficult than Conjecture <ref> since the corresponding autonomous ODE system is chaotic. Nevertheless, one would hope the resolution of Conjecture <ref> would be a key initial step towards understanding the heuristics of BKL in vacuum outside of symmetry. §.§ Sketch of the proof The proofs of the stability result Theorem <ref> and the bounce result Theorem <ref> are done in tandem and there are three major steps. Recalling the main evolution equations (<ref>)–(<ref>), we caricature these steps as follows: * Analysis in the spatially homogeneous case: In this step, we ignore certain terms in (<ref>)–(<ref>) involving ∂_θ-derivatives which one expects to become negligible as t → 0. We do, however, keep terms involving the expression e^P t ∂_θ Q. The result is a nonlinear ODE system for the quantities - t ∂_t P and e^P t ∂_θ Q; one then analyzes this system to determine bounds for these quantities. * Linearization of the ODE system: In the second step, we consider the best possible behaviour of the terms involving further ∂_θ-derivatives i.e. the non-spatially homogeneous corrections. We do this by taking commuting ∂_θ with the ODE system in Step 1, resulting in a new linear ODE system for new quantities whose coefficients are given by the solution (i.e. some orbit) of the ODE system in Step 1. 
* Energy estimates: Finally, one must ensure that we can close our argument without loss of derivatives (necessary due to the existence of top order terms such as ∂_θ^2 P in (<ref>).) One achieves this via L^2 energy estimates, which are allowed to blow up but only at a controlled rate as t → 0. We now explain in a little more detail how these steps apply to the Gowdy symmetric system. For Step 1, with 𝒫 = - t ∂_t P and 𝒬 = e^P t ∂_θ Q, the spatially homogeneous ODE system one gets is: t ∂_t 𝒫 = 𝒬^2, t ∂_t 𝒬 = (1 - 𝒫) 𝒬. (Eventually we also include another ODE variable ℛ = e^P t ∂_t Q but we ignore ℛ this for now.) The upshot is that one can study the 2-dimensional ODE system (<ref>) via phase plane analysis, as we see in Figure <ref> below. The direction of the arrows in Figure <ref> is with respect to t ↓ 0. The ODE system (<ref>) contains a line of fixed points at 𝒬 = 0. Each of these fixed points represents an exact Kasner solution[A Kasner solution to the Einstein equations (<ref>) is spatially homogeneous and takes the form of (<ref>) with p_I(x) constant and ω^I equal to the exact differential d x^I for coordinates x^I. See <cit.>, or the original paper of Kasner <cit.>.] with exponents p_I as in (<ref>) where V(θ) is constant and equal to the value of 𝒫 at the fixed point. These fixed points are (orbitally) stable as t ↓ 0 if 𝒫≤ 1 and unstable otherwise. The dynamics of the ODE system in the remaining region 𝒬≠ 0 may be described as a union of heteroclinic orbits linking an unstable fixed point to a stable fixed point. In fact, one may solve the system exactly using a conserved quantity 𝒦 = (𝒫-1)^2 + 𝒬^2. Thus the heteroclinic orbits, which we identify as BKL bounces, link the unstable fixed point (𝒫 = α, 𝒬 = 0) to the stable fixed point (𝒫 = 2 - α, 𝒬 = 0), for any α > 1. We link this to our conditions preceding Theorem <ref>. We note that the condition (<ref>) of weak subcriticality together with the conserved quantity 𝒦 = (𝒫 - 1)^2 + 𝒬^2 means that any (𝒫, 𝒬)-orbit associated to 𝒪_, remains in the bounded region {𝒦≤ (1 - )^2 }. This will be essential in Steps 2 and 3. In Step 2, we commute the ODEs (<ref>) with ∂_θ. This yields: t ∂_t (∂_θ𝒫) = 2 𝒬∂_θ𝒬, t ∂_t (∂_θ𝒬) = - 𝒬∂_θ𝒫 + (1 - 𝒫) ∂_θ𝒬. We see this is a linear ODE system for ∂_θ𝒫 and ∂_θ𝒬 whose coefficients are functions of the dynamical 𝒫 and 𝒬 orbit from Step 1. The fact that the orbit remains in the bounded region {𝒦≤ (1 - )^2 } allows one to prove, using the linear system (<ref>), that |∂_θ𝒫| + |∂_θ𝒬| = O(t^- (1 - )). So these ∂_θ-derivatives are permitted to blow up as t → 0, but at a controlled rate. This blow-up rate is sharp in the following sense: if our (𝒫, 𝒬)-orbit is a constant orbit at an unstable fixed point with 𝒫 = 2 - - ε with ε small then generic solutions of (<ref>) have ∂_θ𝒬 = O(t^-(1 - ) + ε). This choice of (𝒫, 𝒬)-orbit is related to spikes; it is unsurprising that near spikes certain ∂_θ-derivatives will blow up. But in any case, the outcome of Step 2 is that heuristically, each ∂_θ-derivative comes with a loss of t^-(1- ), for instance one expects ∂_θ^2 P = O(t^- 2 (1 - )); this confirms the AVTD expectation that ∂_θ-derivatives cost fewer powers of t than ∂_t-derivatives (note this requires > 0 and is exactly where we require that our spacetimes are “low-velocity” with 0 < 𝒫 < 2). This is good news, since in (<ref>) the term t^2 ∂_θ^2 P will be O(t^2 ), and thus negligible as t → 0. So throwing this term away to derive the first ODE of (<ref>) in Step 1 is justifiable, at least at the heuristic level. 
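The conserved quantity 𝒦 used above can be confirmed with a short symbolic computation (a sympy sketch, purely illustrative):

import sympy as sp

s = sp.symbols('s')                               # s = log t, so that t d/dt = d/ds
P = sp.Function('P')(s)                           # stands for the quantity script-P
Q = sp.Function('Q')(s)                           # stands for the quantity script-Q
flow = {P.diff(s): Q**2, Q.diff(s): (1 - P) * Q}  # the Step 1 model ODEs
K = (P - 1)**2 + Q**2
print(sp.simplify(K.diff(s).subs(flow)))          # prints 0: K is conserved along the flow

Since level sets of 𝒦 are circles centred at (𝒫, 𝒬) = (1, 0), each non-constant orbit is confined to such a circle and joins the unstable fixed point 𝒫 = 1 + √𝒦 to the stable fixed point 𝒫 = 1 - √𝒦, which is exactly the heteroclinic bounce structure described above.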
Similar arguments hold for the other terms we threw away in Steps 1 and 2. Step 3 concerns making the argument of the above paragraph rigorous. In particular, we must overcome the issue of derivative loss due to terms such as ∂_θ^2 P. This is achieved via energy estimates; let ℰ^(K)(t) represent an L^2-based energy which resembles (<ref>) but controlling ∂_θ^K P and ∂_θ^K Q in place of P and Q, see already Definition <ref>. We explain the main energy estimate: commuting (<ref>) with ∂_θ^K yields the following (t ∂_t)^2 ∂_θ^K P - (t ∂_θ)^2 ∂_θ^K P = 2 e^2P( t ∂_t Q · t ∂_t ∂_θ^K Q - t ∂_θ Q · t ∂_θ^K+1 Q )^(I) + 2 e^2P( (t ∂_t Q)^2 - (t ∂_θ Q)^2 ) ∂_θ^K P_(II) + lower order quantities_(III). A standard argument will then yield the differentiated energy estimate: | t d/dtℰ^(K)(t) | ≤√( 2 ℰ^(K)(t) )·( (I) + (II) + (III) _L^2). It remains to estimate each of (I), (II) and (III) in L^2. Here we need to use Step 1 and Step 2, the key observation being that e^P t ∂_θ Q, represented by 𝒬 in Step 1, is bounded by 1 -. It turns out that e^P t ∂_t Q is similarly bounded, and thus for appropriately defined ℰ^(K)(t) the terms (I) and (II) may be bounded by: (I) _L^2 + (II) _L^2≤ A_* √( 2 ℰ_ϕ^(K) (r) ), where A_* is a constant only depending on . For the remaining lower order terms (III), we will also have to use Step 2. To simplify the exposition, in this sketch we replace (III) by just the single term c ·∂_θ (e^P t ∂_θ Q) · e^P t ∂_θ^K Q, where c may depend on K. One can think of this as c ·∂_θ𝒬· e^P t ∂_θ^K Q, and thus via Step 2 is O(t^-(1- )) · e^P t ∂_θ^K Q. This seems alarming at first sight, since this O(t^- (1 - )) prefactor is not integrable with respect to dt/t as t → 0. But e^P t ∂_θ^K Q is not dependent on the Kth order energy ℰ^(K)(t), but instead ℰ^(K-1)(t). So (III) _L^2≤ C_, K t^- (1 - )√( 2 ℰ^(K-1)(t)). Our eventual estimate for (III) will also involve ℰ^(k)(t) for all k < K, see already Propositions <ref> and <ref>. But sticking to our simplified (III), combining all of the above, as well as combining with a similar energy estimate using the Q-wave equation (<ref>), yields | t d/dtℰ^(K)(t) | ≤ 2 A_* ℰ^(K)(t) + 2 C_, K t^- (1 - )√(ℰ^(K) (t) )·√(ℰ^(K-1)(t)). When K = 0, the last term on the right hand side is absent. Because of the 2 A_*, even the 0th order energy ℰ^(0)(t) will blow up as t^-2 A_* as t → 0. Further, the appearance of t^- (1 - ) means that as K increases the rate of blow up also increases. Indeed, one can use (<ref>) to show: ℰ^(K)(t) ≤ D_, K, t^-2 A_* - 2 K (1 - ), where D_, K, depends on and the data, as well as the regularity index K. It is crucial that A_* is independent of K. The reason for this is that one now applies an interpolation argument together with the above, much like what is done in <cit.>, to show e.g. that ∂_θ^2 P = O(t^2 (1 - ) - ), with → 0 as N →∞, where N is the maximum regularity index for which one applies the energy estimate. This means that one may interpret the ignored terms in Step 1 and Step 2 as negligible errors, as claimed. To apply Steps 1 to 3 to the nonlinear problem, one uses a standard bootstrap argument. In our proof, we shall actually perform Step 3 first. That is, one bootstraps L^∞ bounds on 0th order and 1st order quantities, see (<ref>)–(<ref>), and uses these to derive the energy estimate (<ref>). Then one combines this estimate, an interpolation argument, and the ODE analysis of Steps 1 and 2, to improve the bootstrap assumptions. 
This will yield the stability result Theorem <ref>, while the bounce result Theorem <ref> follows from more detailed ODE analysis. §.§ Acknowledgements The author thanks Mihalis Dafermos for valuable advice in the writing of this manuscript. We also thank Igor Rodnianski and Hans Ringström for insightful discussions and suggestions. § PRECISE STATEMENT OF THE MAIN THEOREMS Below we state the detailed versions of our main results, the stability result Theorem <ref> and the bounce result Theorem <ref>. We first introduce various parameters , ', ”, , N, , … that appear throughout. §.§ Setup of the initial data §.§.§ Notation and key parameters * The real number 0 < < 1 captures the size of - t ∂_t P allowable in our results. Many of the subsequent parameters will depend on , as do the constants appearing in our quantitative estimates. We further define ', ” to be related constants that satisfy 0 < /2 < ' < < ” < 1. * The natural number N represents the largest number of ∂_θ-derivatives with which we commute the Gowdy system (<ref>)–(<ref>) in deriving energy estimates. Our choice of N depends on , ', ”, and we expect N →∞ as → 0. We often use K to denote an integer with 0 ≤ K ≤ N. * The real number > 0 represents the maximal admissible size of initial data in the L^2 sense. See already the initial data bounds (<ref>)–(<ref>) below. * The real number C_* > 0 is used in the bootstrap argument, see Section <ref>. We choose C_* depending on , ', ” and , though we do not make this explicit. The number A_* > 0 will depend on C_* and represents the rate at which our energies may blow up. * The real number t_* > 0 represents how close to the t = 0 singularity our initial data is required to be in order to obtain our results. That is, our results apply for initial data at t = t_0 for 0 < t_0 ≤ t_*. Our choice of t_* will depend on and and we expect t_* → 0 as either → 0 or →∞. * Other constants, often denoted C, will be allowed to depend on all of aforementioned parameters. The notation will be used to represent quantities depending on all of these parameters (e.g. , , t_*) such that ↓ 0 as t_* ↓ 0. We often allow abuse of notation such as “+ =” or “C_* =” etc. §.§.§ Initial L_infty and L_2 bounds For the Gowdy symmetric system (<ref>)–(<ref>), initial data is given for (P, Q) and their t ∂_t–derivatives at { t = t_0 } for some 0 < t_0 ≤ t_*. That is, we let P_D, Q_D, Ṗ_D, Q̇_D: 𝕊^1 → with (P_D, Q_D, Ṗ_D, Q̇_D) = (P, Q, t ∂_t P, t ∂_t Q)|_t = t_0 We characterize an open set 𝒪_, (perhaps more precisely 𝒪_, ', ”,) to which our results apply. Initial data (P_D, Q_D, Ṗ_D, Q̇_D) in this open set will obey the following pointwise bounds: - Ṗ_D ≥”, (1 - Ṗ_D)^2 + (e^P_D t_0 ∂_θ Q_D)^2 ≤ (1 - ”)^2 e^P_DQ̇_D ≤ t_0^, e^P_D≤ t_0^, as well as the following L^2 energy bounds at t = t_0, where N depends on and > 0: 1/2∑_K = 0^N ( ∫_𝕊^1 (∂_θ^K Ṗ_D)^2 + (t_0 ∂_θ^K+1 P_D)^2 + (t_0^∂_θ^K P_D)^2 dθ) ≤, 1/2∑_K = 0^N ( ∫_𝕊^1 e^2P_D( ( ∂_θ^K Q̇_D )^2 + (t_0 ∂_θ^K+1 Q_D)^2 ) dθ) ≤. Finally, one assumes 0 < t_0 ≤ t_*, with t_* is chosen small depending on , , ', ”. §.§ Theorem <ref> – Stability Theorem <ref> is the precise version of our stability result, see the rough version Theorem <ref>. Consider initial data (P_D, Q_D, Ṗ_D, Q̇_D) for the Gowdy symmetric system (<ref>)–(<ref>), which obeys (<ref>), (<ref>), (<ref>), (<ref>) with 0 < t_0 ≤ t_*. 
Then for N sufficiently large depending on and for t_* chosen sufficiently small, depending on , N and , for 0 < t ≤ t_0 we have the following L^∞ bounds: ≤ - t ∂_t P ≤ 2 - , |e^P t ∂_θ Q| ≤ 1 - . There exists C = C(, ', ”, ) and A_* = A_*() such that one has the L^2 energy bound: ∑_K = 0^N ∫_𝕊^1 (t ∂_t ∂_θ^K P)^2 + t^2 (∂_θ^K+1 P)^2 + ( ∂_θ^K P)^2 + e^2P (t ∂_t ∂_θ^K Q)^2 + e^2P t^2 (∂_θ^K+1 Q)^2 dθ≤ C t^-2A_* -2K (1 - '). Furthermore, there exists V: 𝕊^1 → bounded so that -t ∂_t P(t, θ) → V(θ) pointwise as t → 0. Finally for k = 1 whenever > 1/3 and k = 0 otherwise, there exists a C^k function q: 𝕊^1 → so that Q(t, θ) → q(θ) as t → 0 in the C^k topology. §.§ Theorem <ref> – BKL bounces In Theorem <ref>, we make precise our bounce result, see the rough version in Theorem <ref>. The idea is that certain quantities will obey a system of nonlinear ODEs, plus error terms, and that one may use properties of the ODEs to understand certain aspects of the dynamics. Let (P, Q) be a solution to the Gowdy symmetric system (<ref>)–(<ref>), arising from initial data (P_D, Q_D, Ṗ_D, Q̇_D) obeying (<ref>)–(<ref>) and 0 < t_0 ≤ t_* as in Theorem <ref>. Let γ: (0, t_0] → (0, t_0] ×𝕊^1 be a timelike curve parameterized by the t-coordinate, and let 𝒫_γ(t) = - t ∂_t P(γ(t)) and 𝒬_γ(t) = e^P t ∂_θ Q (γ(t)). Then there exist error terms ℰ_𝒫(t), ℰ_𝒬(t) depending on γ but with |ℰ_𝒫(t)|, |ℰ_𝒬(t)| ≤ t^' uniformly in the choice of γ, such that t d/dt𝒫_γ = 𝒬_γ^2 + ℰ_𝒫, t d/dt𝒬_γ = (1 - 𝒫_γ) 𝒬_γ + ℰ_𝒬. Since γ is timelike there exists θ_0 ∈𝕊^1 such that γ(t) → (0, θ_0) as t → 0. Further, 𝒬_γ(t) converges to 0 as t → 0, while 𝒫_γ(t) converges to 𝒫_γ, ∞ = V(θ_0). Finally, there exists C = C(, ', ”, ) > 0 such that: * If the lim_t → 0∂_θ Q( γ(t)) does not converge to 0, then necessarily V(θ_0) = 𝒫_γ, ∞≤ 1 and moreover | 𝒫_γ, ∞ - min{𝒫_γ(t_0), 2 - 𝒫_γ(t_0) }| ≤ C · (t_0^' + 𝒬^2_γ(t_0))^1/2 . * Let > 1/2. By Theorem <ref>, ∂_θ q(θ_0) exists, and lim_t → 0∂_θ Q(γ(t)) →∂_θ q(θ_0). If ∂_θ q(θ_0) = 0, then one instead has: | 𝒫_γ, ∞ - 𝒫_γ(t_0) | ≤ C · t_0^2' - 1. where 1/2 < ' < is chosen appropriately. A corollary of Theorem <ref> is the following stability / instability statement regarding nonpolarized perturbations of a class of polarized Gowdy solutions. Let P(t, θ) be a smooth solution to the polarized Gowdy equation (<ref>), arising from initial data given by (P_D, Ṗ_D) at t = t_1 > 0. By Theorem <ref>, there exists smooth V, Φ: 𝕊^1 → such that P(t, θ) = - V(θ) log t + Φ(θ) + o(1) as t → 0. Suppose that 0 < V(θ) < 2 for all θ∈𝕊^1. Then there exists N ∈ℕ such that for (possibly non-polarized) perturbations of this data with (P̃_D, Q̃_D, Ṗ̃_D, Q̃̇̃_D) - (P_D, 0, Ṗ_D, 0) _(H^N+1)^2 × (H^N)^2≤ε, then for ε sufficiently small the perturbed solution still has - t ∂_t P̃(t, θ) →Ṽ(θ) pointwise as t → 0, for Ṽ satisfying 0 < Ṽ(θ) < 2. Next, suppose further that 1/2 < V(θ) < 3/2 for the original polarized solution. Then for perturbations as above, we moreover have that ∂_θQ̃ (t, θ) →∂_θq̃(θ) uniformly for some C^1 function q̃: 𝕊^1 →. Furthermore, for any ε̃ > 0, ε may be chosen small enough depending on ε̃ such that: * |Ṽ(θ) - min{ V(θ), 2 - V(θ) } | ≤ε̃ if ∂_θq̃ (θ) ≠ 0, while * |Ṽ(θ) - V(θ) | ≤ε̃ if ∂_θq̃ (θ) = 0. 
We consider Corollary <ref> a stability / instability result in the sense that while the perturbed spacetime retains the spacelike singularity and curvature blowup, in the case where the original V(θ) has 1 < V(θ_0) < 2 for some θ_0 ∈𝕊^1 a generic unpolarized perturbation will have a corresponding V̂(θ_0) with V̂(θ_0) ≈ 2 - V(θ_0), which is “far away” from V(θ_0). Our result applies in particular when the background unperturbed spacetime is an exact Kasner spacetime with P(θ) = - V log t where 0 < V < 2 (and Kasner exponents given by (<ref>)). When 1 < V < 2, this gives a precise instability mechanism for a certain range of Kasner exponents. Note our methods do not allow us to access this mechanism outside this range, especially regarding spatially inhomogeneous perturbations. Our instability is triggered when ∂_θ q(θ) ≠ 0, while if ∂_θ q (θ) = 0 and 1/2 < V(θ) < 3/2 our result suggests that the instability is suppressed. Note that in light of <cit.> it is true that for an open and dense subset of perturbations, the instability is indeed triggered at all but finitely many θ∈𝕊^1; at the remaining points we leave open the possibility of spikes. § BOOTSTRAP ASSUMPTIONS, ENERGIES AND INTERPOLATION LEMMAS §.§ The L_infty bootstrap assumptions As explained in Section <ref>, let C_* = C_*(, ) > 0 be a large real number to be chosen later in the argument. We often make reference to the following four low order L^∞ bootstrap assumptions: B1 |t ∂_t P| ≤ C_*, |e^P t ∂_θ Q| ≤ C_*, B2 |e^P t ∂_t Q|, |e^-P| ≤ C_* t^', B3 |∂_θ P|, |e^P t ∂_θ^2 Q| ≤ C_* t^-(1 - ), B4 |t ∂_t ∂_θ P |, |e^P t ∂_t ∂_θ Q| ≤ C_* t^-(1 - ). §.§ Energies Let 0 ≤ K ≤ N. Define the following Kth order energies at fixed t: ℰ^(K)_P(t) 1/2∫_𝕊^1( (t ∂_t ∂_θ^K P)^2 + t^2 (∂_θ^K+1 P)^2 + (∂_θ^K P)^2 ) dθ, ℰ^(K)_Q(t) 1/2∫_𝕊^1 e^2P( (t ∂_t ∂_θ^K Q )^2 + t^2 (∂_θ^K+1 Q)^2 ) dθ, ℰ^(K)(t) ℰ^(K)_P(t) + ℰ^(K)_Q(t). To recover asymptotics for Q(t, θ) without the e^P weight we also make use of the following energy: ℰ^(K)_Q,u(t) 1/2∫_𝕊^1( (t ∂_t ∂_θ^K Q)^2 + t^2 (∂_θ^K+1 Q)^2 + (∂_θ^K Q)^2 ) dθ. §.§ Sobolev–type inequalities Let N, K be integers with 0 ≤ N < K, and let f: 𝕊^1 → be such that ∂_x^K f ∈ L^2(𝕊^1). Then the following L^∞–L^2 interpolation inequality holds: ∂_x^N f _L^∞(𝕊^1)≲_N, K f _L^∞(𝕊^1)^1 - α∂_x^K f _L^2(𝕊^1)^α, where α = N/K - 1/2. This is standard, see for instance <cit.>. Let M, N ≥ 0 be integers and let K = M + N. Then for f, g sufficiently regular one has ∂_x^M f ∂_x^N g _L^2(𝕊^1)≲_M, N f _L^∞(𝕊^1) ∂_x^K g _L^2(𝕊^1) + g _L^∞(𝕊^1) ∂_x^K f _L^2(𝕊^1). See Appendix B of our companion paper <cit.>; in that article we also introduce a weight function w: 𝕊^1 →_> 0, which may simply be set identically equal to 1 here. § THE ENERGY ESTIMATE HIERARCHY In this section, we derive energy estimates for ℰ^(K)(t), at orders 0 ≤ K ≤ N, where N = N() is chosen sufficiently large, by commuting the equations (<ref>)–(<ref>) with up to K ∂_θ-derivatives. See Section <ref> for an introduction to the main ideas in our hierarchy of energy estimates. To handle “error” terms in the hierarchy (where the precise coefficients arising in the commuted equations are not crucial), it is useful to introduce the following schematic notation: expressions such as ∑_k_p + k_1 + … + k_i = K∂_θ^k_p f * ∂_θ^k_1 g * ⋯ * ∂_θ^k_i g will represent some linear combination of products of the form ∂_θ^k_p f ·∂_θ^k_1 g ⋯∂_θ^k_i g such that i ≥ 1, k_p ≥ 1 and k_j ≥ 1 for all 1 ≤ j ≤ i and k_p + k_1 + … + k_i = K. 
We emphasize that unless explicitly stated otherwise, in these schematic sums i and j will be positive integers, as are the indices k_p, k_1, k_i etc. In the event that any index e.g. k_p is allowed to be 0, this will be explicitly stated, and similarly if there are further constraints on any index. §.§ Energy estimates for P Let (P, Q) be a solution to the Gowdy symmetric system (<ref>)–(<ref>) obeying the bootstrap assumptions (<ref>)–(<ref>). Then there exists a constant A_* depending only on C_*, as well as a constant C^(K) depending on C_* and the regularity index K ∈{0, 1, …, N }, such that | t d/dtℰ_P^(K) (t) | ≤ 2 A_* ℰ^(K)(t) + ∑_k=0^K-1 C^(K) t^- 2(1 - ) (K - k)ℰ^(k)(t), where it is understood that the final term is absent if K=0. For any K ≥ 1, commuting the wave equation for P (<ref>) with ∂_θ^K yields equation1 (t ∂_t)^2 ∂_θ^K P - t^2 ∂_θ^2 ∂_θ^K P = -48mu 2 ∂_θ^K P e^2P( (t ∂_t Q)^2 - (t ∂_θ Q)^2 ) + 2 e^2P (t ∂_t Q) (t ∂_t ∂_θ^K Q) + 2 e^2P (t ∂_θ Q)(t ∂_θ^K+1 Q) a -36mu + t^2 e^2P∑_i ≥ 0, k_r < K k_1 + ⋯ + k_i + k_r + k_s = K∂_θ^k_1 P * ⋯ * ∂_θ^k_i P * ∂_t ∂_θ^k_r Q * ∂_t ∂_θ^k_s Q b -36mu + t^2 e^2P∑_i ≥ 0, k_r < K k_1 + ⋯ + k_i + k_r + k_s = K∂_θ^k_1 P * ⋯ * ∂_θ^k_i P * ∂_θ^k_r + 1 Q * ∂_θ^k_s + 1 Q c Note that in the uncommuted case K = 0, (<ref>) is replaced by e^2P( (t ∂_t Q)^2 + (t ∂_θ Q)^2 ), while the terms (<ref>)–(<ref>) are absent; this case will be simpler in the subsequent estimate. The first line (<ref>) is the leading order contribution that gives rise to the 2 A_* ℰ^(K)(t) on the RHS of (<ref>). It remains to estimate the remaining lines (<ref>) and (<ref>) in L^2. For this purpose, we will use the product estimate Lemma <ref>. It will be convenient to replace the schematic expression (<ref>) by b'∑_i ≥ 0, 0 ≤ k_r, k_s < K k_1 + ⋯ + k_i + k_r + k_s = K∂_θ^k_1 P * ⋯ * ∂_θ^k_i P * ∂_θ^k_r (e^P t ∂_t Q) * ∂_θ^k_s (e^P t ∂_t Q), noting these two schematic expressions are equivalent since expanding (<ref>) simply means that more derivatives can fall on P. We can apply Lemma <ref> with w = 1 to this expression; since we can use (<ref>) to bound |e^P t ∂_t Q| ≤ C_* if k_r, k_s = 0, repeated use of this lemma yields: (<ref>)_L^2≲ (1 + C_*^2) ∑_k=0^K-1 ( ∂_θ^k P _L^2 + ∂_θ^k (e^P t ∂_t Q) _L^2 ) · ( ∂_θ P _L^∞ + ∂_θ (e^P t ∂_t Q) _L^∞ )^K - k. Note in this expression it is critical that the sum does not include K = k; these top order objects appeared instead in (<ref>). To explain a little further how one arrives at this estimate, note that summands in (<ref>) containing i+1 terms in the product (not including undifferentiated copies of e^P t ∂_t Q which are estimated by (<ref>)), the maximum number of ∂_θ derivatives landing on either P or e^P t ∂_t Q is exactly K - i. Then repeated use of Lemma <ref> allows us to put exactly K-i ∂_θ derivatives on one of these, which we estimate in L^2, while the remaining i terms are estimated by either ∂_θ P _L^∞ or ∂_θ ( e^P t ∂_t Q ) _L^∞. Setting k = K - i ∈{1, …, K-1}, we obtain the desired estimate (<ref>). The next step is to represent the right hand side of (<ref>) in terms of our energies. One issue is that our energy ℰ^(K)_Q(t) in Definition <ref> controls e^P (t ∂_t ∂_θ^K Q)^2 _L^2 as opposed to ∂_θ^K (e^P t ∂_t Q) _L^2. However, this will not be a major issue, since by Lemma <ref>, which we defer to after this proof, will tell us that assuming (<ref>), ∂_θ^k (e^P t ∂_t Q) _L^2≲∑_j=1^k ( ∂_θ^j P _L^2 + e^P (t ∂_t ∂_θ^j Q) _L^2 ) · ( ∂_θ P _L^∞ + ∂_θ (e^P t ∂_t Q) _L^∞)^k-j. 
Thereby combining this with (<ref>) and the expression for ℰ^(K)(t) in Definition <ref> – and using that (<ref>) and (<ref>) are equivalent – and finally using the remaining bootstrap assumptions (<ref>)–(<ref>) to estimate ∂_θ P _L^∞ + ∂_θ (e^P t ∂_t Q) _L^∞, one therefore deduces that (<ref>)_L^2≲_C_*, K∑_k=0^K-1 t^(1 - ) (K - k)√(ℰ^(k)(t) ). The estimate for (<ref>) is essentially the same; we first rewrite (<ref>) in the schematic form c'∑_i ≥ 0, 0 ≤ k_r, k_s < K k_1 + ⋯ + k_i + k_r + k_s = K∂_θ^k_1 P * ⋯ * ∂_θ^k_i P * ∂_θ^k_r (e^P t ∂_θ Q) * ∂_θ^k_s (e^P t ∂_θ Q), then use the product estimate Lemma <ref> along with the forthcoming Lemma <ref> to express the L^2 norm of this in terms of familiar quantities: (<ref>)≲∑_k=0^K-1 ( ∂_θ^k P _L^2 + e^P ( t ∂_θ^k+1 Q ) _L^2) · ( ∂_θ P _L^∞ + ∂_θ (e^P t ∂_θ Q) _L^∞)^K-k We then combine with the definition of the energies ℰ^(K)(t), and use the bootstrap assumption (<ref>) to estimate ∂_θ P _L^∞ + ∂_θ (e^P t ∂_θ Q) _L^∞; one yields (<ref>)_L^2≲_C_*, K∑_k=0^K-1 t^(1 - ) (K - k)√(ℰ^(k)(t) ). Now, to conclude, for 0 ≤ K ≤ N we write t ∂_t ( 1/2( (t ∂_t ∂_θ^K P)^2 + t^2 (∂_θ^K+1 P)^2 + (∂_θ^K P)^2 ) ) = (t ∂_t ∂_θ^K P) [ (t ∂_t)^2 ∂_θ^K P - t^2 ∂_θ^K+2 P + ∂_θ^K P ] + t^2 (∂_θ^K+1 P)^2 + ∂_θ( t ∂_t ∂_θ^K P · t^2 ∂_θ^K+1 P ). Integrating over θ∈𝕊^1 so that the last term vanishes, and inserting the commuted wave equation, one has the following identity for the t d/dt derivative of the energy: t d/dtℰ^(K)_P(t) = ∫_𝕊^1[ (t ∂_t ∂_θ^K P) ( (<ref>) + ∂_θ^K P ) + t^2 (∂_θ^K+1 P)^2 ] dx + ∫_𝕊^1[ (t ∂_t ∂_θ^K P) ( (<ref>) + (<ref>)) ] dx. From the bootstrap assumptions (<ref>)–(<ref>), it is clear that (<ref>)_L^2≤ 10 C_*^2 √(ℰ^(K)(t)). Using Cauchy–Schwarz, the first integral in the above expression can thus be bounded as: | ∫_𝕊^1[ (t ∂_t ∂_θ^K P) ( (<ref>) + ∂_θ^K P ) + t^2 (∂_θ^K+1 P)^2 ] dx | ≤ (10 C_*^2 + 9) ℰ^(K)(t). On the other hand, the latter integral can be estimated using (<ref>) and (<ref>); one yields that for some C^(K) > 0, | ∫_𝕊^1[ (t ∂_t ∂_θ^K ϕ) ( (<ref>) + (<ref>)) ] dx | ≤√(C^(K))∑_k=0^K-1 t^-(1 - )(K - k)√(ℰ^(k)(t))·√(ℰ^(K)(t)). Combining these and applying Young's inequality, Proposition <ref> follows, with 2 A_* = 10 C_*^2 + 10. We end this subsection with the promised Lemma <ref>, used to relate ∂_θ^k (e^P t ∂_t Q) _L^2 to e^P t ∂_t ∂_θ^k Q _L^2 in the estimate (<ref>), and ∂_θ^k (e^P t ∂_θ Q) _L^2 to e^P t ∂_θ^k+1 Q _L^2 in (<ref>). Let f: 𝕊^1 → be a sufficiently regular function, with |e^P f| ≤ C_*. Then for P: 𝕊^1 → regular function, and k ∈, one has ∂_θ^k (e^P f) _L^2≲ (1 + C_*) ∑_j=0^k ( ∂_θ^j P _L^2 + e^P ∂_θ^j f _L^2 ) · ( ∂_θ P _L^∞ + ∂_θ (e^P f) _L^∞ )^k-j. We prove (<ref>) using induction on k ∈. The base case k = 1 follows from | ∂_θ (e^P f) | ≤ | ∂_θ P | · | e^P f | + | e^P ∂_θ f | ≤ C_* · |∂_θ P| + |e^P ∂_θ f|. Now, let us suppose that (<ref>) holds for k ≤ K ∈. To go from k = K to k = K + 1, we use our schematic notation to expand ∂_θ^K+1 (e^P f), and write ∂_θ^K+1 (e^P f) = ∂_θ^K+1 P · e^P f + e^P ∂_θ^K+1 f + ∑_k_1 + ⋯ + k_j + k_f = K + 1∂_θ^k_1 P * ⋯ * ∂_θ^k_j P * ∂_θ^k_f (e^P f), where crucially each of the k_1, …, k_j, k_f will not exceed K. Therefore, one may apply the product estimate Lemma <ref> to find ∑_k_1 + ⋯ + k_j + k_f = K + 1∂_θ^k_1 P * ⋯ * ∂_θ^k_j P * ∂_θ^k_f (e^P f) _L^2 ≲∑_j=0^K ( ∂_θ^j P _L^2 + ∂_θ^j (e^P f) _L^2 ) · ( ∂_θ P _L^∞ + ∂_θ (e^P f) _L^∞ )^K + 1 - j. We now apply the induction hypothesis to change ∂_θ^j (e^P f) _L^2 to e^P ∂_θ^j P _L^2. 
We thus deduce ∑_k_1 + ⋯ + k_j + k_f = K + 1∂_θ^k_1 P * ⋯ * ∂_θ^k_j P * ∂_θ^k_f (e^P f) _L^2 ≲ (1 + C_*) ∑_j=0^K ( ∂_θ^j P _L^2 + e^P ∂_θ^j f _L^2 ) · ( ∂_θ P _L^∞ + ∂_θ (e^P f) _L^∞ )^K + 1 - j. On the other hand, the first two terms on the right hand side of (<ref>) have L^2 norm bounded by C_* ∂_θ^K+1 P _L^2 + e^P ∂_θ^K+1 f _L^2. This proves the estimate (<ref>) for k = K + 1 and completes the proof of the lemma. §.§ Energy estimates for Q We now prove the analogous energy estimate for ℰ^(K)_Q(t). Let (P, Q) be a solution to the Gowdy symmetric system (<ref>)–(<ref>) obeying the bootstrap assumptions (<ref>)–(<ref>). Then there exists a constant A_* depending only on C_*, as well as a constant C^(K) depending on C_* and the regularity index K ∈{0, 1, …, N }, such that | t d/dtℰ_Q^(K) (t) | ≤ (2 A_* + 2 C_* t^) ℰ^(K)(t) + ∑_k=0^K-1 C^(K) t^- 2(1 - ) (K - k)ℰ^(k)(t), where it is understood that the final term is absent if K=0. Commuting the Q wave equation (<ref>) with ∂_θ^K, one yields for 1 ≤ K ≤ N: equation1 (t ∂_t)^2 ∂_θ^K Q - t^2 ∂_θ^2 ∂_θ^K Q = -72mu - 2 t ∂_t ∂_θ^K P · t ∂_t Q - 2 t ∂_t P · t ∂_t ∂_θ^K+1 Q + 2 t ∂_θ^K+1 P · t ∂_θ Q + 2 t ∂_t P · t ∂_θ^K+1 Q a -36mu + ∑_k_p + k_q = K( t ∂_t ∂_θ^k_p P * t ∂_t ∂_θ^k_q Q + t ∂_θ^k_p + 1 P * t ∂_θ^k_q + 1 Q ). b The base case K = 0 is just the equation (<ref>) and will be easier to deal with. Note it again crucial that neither of k_p, k_q is allowed to equal K. In light of the e^2P weight appearing in ℰ^(K)_Q(t) in Definition <ref>, we will be required to estimate e^P times the right hand side of this equation in L^2. For the lower order term (<ref>), we use a similar method to the estimate for (<ref>) in Proposition <ref>. For the first term in (<ref>), it is convenient to write instead e^P ∑_k_p + k_q = K t ∂_t ∂_θ^k_p P * t ∂_t ∂_θ^k_q Q = ∑_i ≥ 0 k_1 + ⋯ + k_i + k_p + k_q = K∂_θ^k_1 P * ⋯ * ∂_θ^k_i P * t ∂_t ∂_θ^k_p P * ∂_θ^k_q (e^P t ∂_t Q) Then one may estimate this in L^2 in the same way as the term (<ref>), eventually getting e^P ∑_k_p + k_q = K t ∂_t ∂_θ^k_p P * t ∂_t ∂_θ^k_q Q ≲_C_*∑_k=0^K-1 ( ∂_θ^k P _L^2 + t ∂_t ∂_θ^k P _L^2 + e^P t ∂_t ∂_θ^k Q _L^2) ( ∂_θ P _L^∞ + t ∂_t ∂_θ P _L^∞ + ∂_θ (e^P t ∂_t Q) _L^∞ )^K - k Thereby using Definition <ref> and the bootstrap assumptions (<ref>)–(<ref>), we get e^P ∑_k_p + k_q = K t ∂_t ∂_θ^k_p P * t ∂_t ∂_θ^k_q Q ≲_C_*, K∑_k=0^K-1 t^- (1 - ) (K - k)√(ℰ^(k)(t) ). The same method allows us to estimate the second term in (<ref>) in the same way, thus e^P (<ref>)_L^2≲_C_*, K∑_k=0^K-1 t^- (1 - ) (K - k)√(ℰ^(k)(t) ). To conclude, we first write the derivative identity; for 0 ≤ K ≤ N: t ∂_t ( 1/2( e^2P (t ∂_t ∂_θ^K Q)^2 + e^2P t^2 (∂_θ^K+1 Q)^2 ) ) = e^P (t ∂_t ∂_θ^K Q) e^P [ (t ∂_t)^2 ∂_θ^K Q - t^2 ∂_θ^K+2 Q ] + ∂_θ( e^2P t ∂_t ∂_θ^K Q · t^2 ∂_θ^K+1 Q ) + t ∂_t P · e^2P (t ∂_t ∂_θ^K Q)^2 + (1 + t ∂_t P) e^2P t^2 (∂_θ^K+1 Q)^2 - 2 t ∂_θ P · e^2P t ∂_t ∂_θ^K Q · t ∂_θ^K+1 Q. Next, integrating this over θ∈𝕊^1, one deduces that t d/dtℰ^(K)_Q(t) = ∫_𝕊^1[ e^P t ∂_t ∂_θ^K Q ( e^P (<ref>) + t ∂_t P · e^P t ∂_t ∂_θ^K Q ) + (1 + t ∂_t P) e^2P t^2 (∂_θ^K+1 Q)^2 ] dx - ∫_𝕊^1 2 t ∂_θ P · e^P t ∂_t ∂_θ^K Q · e^P t ∂_θ^K+1 Q dx + ∫_𝕊^1[ (t ∂_t ∂_θ^K Q) ( e^P (<ref>)) ] dx. Finally, using the expression for (<ref>), the bootstrap assumptions (<ref>) and (<ref>), and the expression for ℰ^(K)(t) in Definition <ref>, one may check that the first line of the right hand side is bounded by say (10 C_*^2 + 9) ℰ^(K)(t). 
On the other hand, by using the bootstrap assumption (<ref>) for the second integral and (<ref>) for the third, one yields: | ∫_𝕊^1 2 t ∂_θ P · e^P t ∂_t ∂_θ^K Q · e^P t ∂_θ^K+1 Q dx | ≤ 2 C_* t^ℰ^(K)(t), | ∫_𝕊^1[ (t ∂_t ∂_θ^K Q) ( e^P (<ref>)) ] dx | ≲√(ℰ^(K)(t) )·∑_k=1^K-1 t^(1 - )(K - k)√(ℰ^(k)(t) ). Therefore applying Young's inequality, one deduces (<ref>) for 2 A_* = 10(C_*^2 + 1). §.§ The energy hierarchy We now use Propositions <ref>, <ref> together with the initial data assumption (<ref>)–(<ref>), to show that the total energy of order K, ℰ^(K)(t), grows at most polynomially in t as t ↓ 0, and moreover that the rate of blow-up depends linearly in K. Let (P, Q) be a solution to the Gowdy symmetric system (<ref>)–(<ref>) in the interval t ∈ [t_b, t_0], such that the solution obeys the bootstrap assumptions (<ref>)–(<ref>). Assuming also the bounds (<ref>)–(<ref>) for the initial data, then there exist a constant A_* depending only on C_*, as well as constants C^(K) depending on C_*, K and , such that for 0 ≤ K ≤ N, the total energy ℰ^(K)(t) satisfies the bound: ℰ^(K)(t) ≤ C^(K) t^- 2 A_* - 2 K (1 - ). Combining Propositions <ref> and <ref>, it is straightforward to show that for some A_* > 0 depending only on C_* and constants C^(K) > 0 (we allow A_* to differ from the previous propositions), one has the following derivative estimate: | t d/dtℰ^(K)(t) | ≤ (2 A_* + 2 C_* t^ ) ℰ^(K)(t) + C^(K)∑_k = 0^K-1 r^- (1 - ) (K-k)ℰ^(k)(t). As we integrate “backwards” i.e. towards r = 0, the derivative estimate we actually use is the following: t d/dtℰ^(K)(t) ≥ - (2 A_* + 2 C_* t^) ℰ^(K)(t) - C^(K)∑_k = 0^K-1 r^- (K-k)ℰ^(k)(t). In fact, using the integrating factor t^2 A_*, which is crucially independent of K, we write: t d/dt( t^2 A_*ℰ^(K)(t) ) ≥ - 2 C_* t^( t^2A_*ℰ^(K)(t) ) - C^(K)∑_k = 0^K-1 t^- 2 (1 - ) (K-k)( t^2 A_*ℰ^(k)(t) ). We will now use (<ref>) and induction on K ∈{0, …, N } that t^2 A_*ℰ^(K)(t) ≲ t^- 2 (1 - ) K, where the implied constant is now allowed to depend on C_*, K and . This is equivalent to (<ref>). Note that the dependence on comes from the fact that the initial data bound (<ref>)–(<ref>) implies that ∑_k=0^N t_0^2 A_*ℰ^(k)(t_0) ≤. For the base case K = 0, one simply applies Grönwall's inequality to (<ref>) for K = 0; then for t ∈ [t_b, t_0] t^2A_*ℰ^(0)(t) ≤exp(F(t_0, t)) · t_0^2 A_*ℰ^(0)(t_0), where F(s_a, s_b) = ∫^s_a_s_b 2 C_* t̃^d t̃/t̃. Since F(s_a, s_b) is uniformly bounded for s_a, s_b ∈ [t_b, t_0], it follows from the initial data bound (<ref>) that (<ref>) holds for K = 0. Moving onto the induction step, assume that (<ref>) holds for 0 ≤ K < K̅≤ N; we wish to prove it also holds for K = K̅. Applying Grönwall's inequality to (<ref>) for K = K̅, we have that t^2 A_*ℰ^(K̅)(t) ≤exp(F(t_0, t)) · t_0^2 A_*ℰ^(K̅)(t_0) + ∫^t_0_t exp( F(t̃, t)) C^(K̅)∑_k=0^K̅-1t̃^-2(1 - )(K̅-k)t̃^2 A_*ℰ^(k)(t̃) d t̃/t̃. It therefore follows from the initial data bound (<ref>) and the inductive hypothesis for t̃^2 A_*ℰ^(k)(t̃) that t^2 A_*ℰ^(K̅)(t) ≲ + ∫_t^t_0∑_k=0^K̅-1t̃^-2(1 - )(K̅ - k)·t̃^- 2 kd t̃/t̃≲ t^-2 (1 -) K̅ as required. This completes the proof of the proposition. §.§ An auxiliary energy estimate In order to recover precise asymptotics for Q(t, θ) as t → 0 we also make use of the following energy estimate for the energy ℰ_Q,u^(K)(t), see Definition <ref>. Let (P, Q) be a solution to the Gowdy symmetric system (<ref>)–(<ref>) obeying the bootstrap assumptions (<ref>)–(<ref>). 
Then there exists a constant A_* depending only on C_*, as well as a constant C^(K) depending on C_* and the regularity index K ∈{0, 1, …, N} such that | t d/dtℰ_Q, u^(K)(t) | ≤ 2 A_* ℰ^(K)_Q, u(t) + C^(K)√(2 ℰ^(K)_Q, u(t))·( t^-(1 - )√(2 ℰ^(K-1)_Q,u(t)) + t^'√(2 ℰ^(K)(t))). With initial data as in Proposition <ref>, one can moreover show that for some C^(K) > 0, ℰ_Q, u^(K)(t) ≤ C^(K) t^-2 A_* - 2K(1 - ). Here C^(K) may also depend on the initial data (Q_D, Q̇_D). For the auxiliary estimate we again commute the wave equation (<ref>) with ∂_θ^K, though we shall group terms in a slightly different way to (<ref>) and (<ref>). We have: equation1 (t ∂_t)^2 ∂_θ^K Q - t^2 ∂_θ^2 ∂_θ^K Q = - 2 t ∂_t P · t ∂_t ∂_θ^K+1 Q + 2 t ∂_θ P · t ∂_θ^K+1 Q a -36mu + ∑_k_p + k_q = K 0 ≤ k_q < K( t ∂_t ∂_θ^k_p P * t ∂_t ∂_θ^k_q Q + t ∂_θ^k_p + 1 P * t ∂_θ^k_q + 1 Q ). b The difference between this and (<ref>)–(<ref>) is that the top order term containing P (i.e. k_p = K) is now included in (<ref>). We shall now estimate (<ref>) in L^2 without the e^P weight. Using Lemma <ref> and the fact that 1 ≤ k_p ≤ K, one finds (<ref>)_L^2≲ t ∂_t ∂_θ^K P _L^2 t ∂_t Q _L^∞ + t ∂_t ∂_θ P _L^∞ t ∂_t ∂_θ^K-1 Q _L^2 + t ∂_θ^K+1 P _L^2 t ∂_θ Q _L^∞ + t ∂_θ^2 P _L^∞ t ∂_θ^K Q _L^2. For the first and third terms on the right hand side, the L^2 norm is controlled by ℰ^(K)_P(t), while the L^∞ norm is controlled via the bootstrap assumptions (<ref>) and (<ref>), which together give t ∂_t Q _L^∞, t ∂_θ Q _L^∞≤ C_*^2 t^. For the second and fourth terms, the L^2 norm is controlled by ℰ^(K - 1)_Q,u(t), while the L^∞ norms are controlled using (<ref>)–(<ref>). Combining all of these will yield that (<ref>)≲ t^'√(ℰ^(K)_P(t)) + t^-(1 - )√(ℰ^(K-1)_Q,u(t)). Continuing, we write down the derivative identity: t ∂_t ( 1/2( (t ∂_t ∂_θ^K Q)^2 + t^2 (∂_θ^K+1 Q)^2 + (∂_θ^K Q)^2 ) ) = (t ∂_t ∂_θ^K Q) [ (t ∂_t)^2 ∂_θ^K Q - t^2 ∂_θ^K+2 Q + ∂_θ^K Q ] + t^2 (∂_θ^K+1 Q)^2 + ∂_θ( t ∂_t ∂_θ^K Q · t^2 ∂_θ^K+1 Q ). Integrating over θ∈𝕊^1, one thus yields that t d/dtℰ^(K)_Q(t) = ∫_𝕊^1[ (t ∂_t ∂_θ^K Q) ( (<ref>) + ∂_θ^K Q ) + t^2 (∂_θ^K+1 Q)^2 ] dx + ∫_𝕊^1[ (t ∂_t ∂_θ^K Q) ·(<ref>)] dx. Using the bootstrap assumptions (<ref>) and (<ref>) to deal with (<ref>), and using (<ref>) to deal with (<ref>), one therefore deduces the estimate (<ref>) in a similar way to say Proposition <ref>. Inserting the bound (<ref>) into (<ref>), we find that: t d/dtℰ^(K)_Q, u(t) ≥ - 2 A_* ℰ^(K)_Q, u(t) - 2 C^(K)√(ℰ^(K)_Q, u(t))·( t^-(1 - )√(ℰ_Q, u^(K-1)(t)) + t^' - A_* - K(1 - )). Equivalently, t d/dt√(t^2A_*ℰ^(K)_Q, u(t))≥ - C^(K)( t^-(1- )√(t^2A_*ℰ_Q, u^(K-1) (t) ) + t^-' - A_* - K(1 - )). The integrated estimate (<ref>) then follows easily using induction on K ∈{0, …, N }. § DERIVATION OF ODES §.§ Low order interpolation estimates The goal in this section is to use the energy estimates of Proposition <ref> together with the Sobolev interpolation in Lemma <ref> to provide L^∞ bounds for low order ∂_θ derivatives of P and Q, allowing us to treat (<ref>)–(<ref>) as ODEs for certain quantities without worrying about losing derivatives. Let (P, Q) be as in Proposition <ref>. Given any 0 < ' <, for N chosen sufficiently large there exists a family of constants = ( C_*, , t_*) > 0 with ↓ 0 as t_* ↓ 0, such that for any 0 ≤ k ≤ 3 and t ∈ [t_b, t_0] one has ∂_θ^k P (t, ·) _L^∞ + t ∂_t ∂_θ^k P (t, ·) _L^∞ + e^P t ∂_t ∂_θ^k Q (t, ·) _L^∞ + e^P t ∂_θ^k+1 Q _L^∞≤ t^-k (1 - '). 
We use the energy estimate (<ref>) for K = N, alongside Lemma <ref> applied to f = t ∂_t Q and f = t ∂_θ Q, to derive the following top order L^2 estimate: ∂_θ^N P _L^2^2 + t ∂_t ∂_θ^N P _L^2^2 + ∂_θ^N (e^P t ∂_t Q) _L^2^2 + ∂_θ^N (e^P t ∂_θ Q) _L^2^2 ≤ 2 C^(N) t^- 2 A_* - 2 N (1 - ). Note that while C^(N) depends on N, the number A_* does not, and we later choose N depending on A_*. We now interpolate between (<ref>) and the low-order L^∞ bootstrap assumptions (<ref>) and (<ref>). Applying Lemma <ref>, with f ∈{P, t ∂_t P, e^P t ∂_t Q, e^P t ∂_θ Q } and 0 ≤ k ≤ 3 one finds: ∂_θ^k f (t, · ) _L^∞≲_N f (t, ·) _L^∞^1-α ∂_θ^N f (t, ·) _L^2^α, where α = k/N - 1/2. Inserting the bound (<ref>) and the bootstrap assumptions (<ref>) and (<ref>), one finds ∂_θ^k f (t, ·) _L^∞≲_N, C_*( t^- 2 A_* - 2 N (1 - ) )^α/2 = t^- k (1 - )·( t^- 2 A_* - (1- ))^α/2. Note that as N →∞, α→ 0. In particular, for N chosen sufficiently large (depending on A_*, and ' <) one can guarantee that the second term in the product (t^-2 A_* - (1- ))^α/2 can be bounded by say t^- - '/2. Thus for this choice of N, we deduce that for f as above, ∂_θ^k f (t, ·) _L^∞≤ C_N, C_* t^-k(1 - ) - - '/2 = C_N, C_* t^-k(1 - ')· t^(k - 1/2)( - '). To go from (<ref>) to (<ref>), there are two more steps required. The first is that we can expand out ∂_θ^k (e^P t ∂_t Q) and ∂_θ^k (e^P t ∂_θ Q) to yield, for some suitably modified constant C_N, C_*, ∂_θ^k P (t, ·) _L^∞ + t ∂_t ∂_θ^k P (t, ·) _L^∞ + e^P t ∂_t ∂_θ^k Q (t, ·) _L^∞ + e^P t ∂_θ^k+1 Q _L^∞≤ t^-k (1 - ')· (C_N, C_* t^1/2( - ')). Note we used here that k - 1/2≥1/2. Then recalling that t ≤ t_0 ≤ t_*, letting = C_N, C_*· t_*^1/2( - ') completes the proof of the lemma. Using Proposition <ref>, we also derive interpolated estimates for Q, without the e^P weight. Let (P, Q) be as in Proposition <ref>. For , ', N as in Lemma <ref> there exists a family of constants = ( C_*, , t_*) > 0 with ↓ 0 as t_* ↓ 0, such that for any 0 ≤ k ≤ 3 and t ∈ [t_b, t_0] one has ∂_θ^k Q (t, ·) _L^∞ + t ∂_t ∂_θ^k Q (t, ·) _L^∞≤ t^-k (1 - '). The proof is identical to that of Lemma <ref>, using instead the energy ℰ_Q,u^(N)(t) and the estimate (<ref>). §.§ The main bounce ODEs Let (P, Q) be as in Proposition <ref>. Then for N chosen sufficiently large there exists a family of constants = (C_*, , t_*) > 0, with ↓ 0 as r_* ↓ 0, such that for all (t, x) ∈ [t_b, r_0] ×𝕊^1 and all a ∈ [-1, 1]: | (t ∂_t + a t ∂_θ) (- t ∂_t P) - ( (e^P t ∂_θ Q)^2 - (e^P t ∂_t Q)^2 ) | (t, x) ≤ t^', | (t ∂_t + a t ∂_θ) (e^P t ∂_θ Q) - (1 + t ∂_t P)(e^P t ∂_θ Q ) | (t, x) ≤ t^', | (t ∂_t + a t ∂_θ) (e^P t ∂_t Q) + (t ∂_t P)(e^P t ∂_t Q) | (t, x) ≤ t^'. Each of these will follow straightforwardly from Lemma <ref> and basic manipulation of the equations (<ref>) and (<ref>). For (<ref>), from (<ref>) one has: (t ∂_t + a t ∂_θ) (- t ∂_t P) - ( (e^P t ∂_θ Q)^2 - (e^P t ∂_t Q)^2 ) = - t^2 ∂_θ^2 P - a t^2 ∂_t ∂_θ P. Then applying Lemma <ref>, the right hand side is bounded by |t^2 ∂_θ^2 P| + |a t^2 ∂_t ∂_θ P| ≤ t^2 - 2(1 - ') + t^1 - (1 - ')≤ 2 t^', and the bound (<ref>) follows upon redefining appropriately. For (<ref>), one simply applies the product rule on ∂_t (e^P t ∂_θ Q) to get: (t ∂_t + a t ∂_θ) (e^P t ∂_θ Q) - (1 + t ∂_t P)(e^P t ∂_θ Q ) = e^P t^2 ∂_t ∂_θ Q + a t ∂_θ (e^P t ∂_θ Q). Applying Lemma <ref> repeatedly, one bounds the right hand side of this by (2 + C_*) t^', so (<ref>) follows upon further redefining . 
For (<ref>), one uses the equation (<ref>), to derive (t ∂_t + a t ∂_θ) (e^P t ∂_t Q) + (t ∂_t P)(e^P t ∂_t Q) = e^P t^2 ∂_θ^2 Q + 2 e^P t ∂_θ Q · t ∂_θ P + a t ∂_θ (e^P t ∂_t Q), and by Lemma <ref> is bounded by (2 + 3 C_*) t^', thus completing the proof upon redefining . §.§ The equations of variation On top of the ODEs for -t ∂_t P, e^P t ∂_θ Q and e^P t ∂_t Q exhibited in Corollary <ref>, we shall also require corresponding ODEs for their ∂_θ-derivatives; that is, the linear system obtained by linearizing the ODEs of Corollary <ref> around a given solution for (- t ∂_t P, e^P t ∂_θ Q, e^P t ∂_t Q). Let (P, Q) be as in Proposition <ref>. Then for N chosen sufficiently large there exists a family of constants = (C_*, , t_*) > 0, with ↓ 0 as r_* ↓ 0, such that for all (t, x) ∈ [t_b, r_0] ×𝕊^1 and all a ∈ [-1, 1]: | (t ∂_t + a t ∂_θ) (- t ∂_t ∂_θ P) - 2 (e^P t ∂_θ Q) ∂_θ (e^P t ∂_θ Q) + 2 (e^P t ∂_t Q) ∂_θ (e^P t ∂_t Q) | (t, x) ≤ t^- 1 + 2', | (t ∂_t + a t ∂_θ) (∂_θ(e^P t ∂_θ Q)) - t ∂_t ∂_θ P e^P t ∂_θ Q - (1 + t ∂_t P) ∂_θ (e^P t ∂_θ Q) | (t, x) ≤ t^- 1 + 2 ', | (t ∂_t + a t ∂_θ) (∂_θ (e^P t ∂_t Q)) + t ∂_t ∂_θ P e^P t ∂_t Q + t ∂_t P ∂_θ (e^P t ∂_t Q) | (t, x) ≤ t^-1 + 2'. Each of (<ref>)–(<ref>) will be derived by commuting the equations (<ref>)–(<ref>) used in the proof of Corollary <ref> with a ∂_θ-derivative, then using Lemma <ref> to bound the right hand side. We demonstrate this by deriving (<ref>); by commuting (<ref>) with ∂_θ, one yields (t ∂_t + a t ∂_θ) (∂_θ (e^P t ∂_t Q)) + t ∂_t ∂_θ P e^P t ∂_t Q + t ∂_t P ∂_θ (e^P t ∂_t Q) = 3 t ∂_θ P t ∂_θ^2 Q + t e^P t ∂_θ^3 Q + t ∂_θ^2 P e^P t ∂_θ Q + a t ∂_θ^2 (e^P t ∂_t Q). By Lemma <ref>, each ∂_θ-derivative on the right hand side, not including the one derivative in e^P t ∂_θ Q, costs t^- 1 + ', thus the additional power of t on the right hand side means | (t ∂_t + a t ∂_θ) (∂_θ (e^P t ∂_t Q)) + t ∂_t ∂_θ P e^P t ∂_t Q + t ∂_t P ∂_θ (e^P t ∂_t Q) | ≲_C_* t^1 - 2 (1 - ') = t^-1 + 2 '. Redefining to absorb the implied constant, one deduces (<ref>). § LOW ORDER ODE ANALYSIS In this section, we apply the ODEs derived in Section <ref> to derive L^∞ bounds for 0th and 1st order quantities. Eventually, these will be used to improve the bootstrap assumptions (<ref>)–(<ref>). §.§ The bounce ODE For t_0 ≤ t_* < 1 sufficiently small, let 𝒫, 𝒬, ℛ: [t_b, t_0] ⊂_>0→ satisfy the following ODEs, where for some 0 < ' < the error terms ℰ_i obey |ℰ_i| ≤ t^' for i = 1, 2, 3: t ∂_t 𝒫 = 𝒬^2 - ℛ^2 + ℰ_1, t ∂_t 𝒬 = (1 - 𝒫) 𝒬 + ℰ_2, t ∂_t ℛ = 𝒫ℛ + ℰ_3. Suppose furthermore that for some ” >, one has (𝒫 - 1)^2(t_0) + 𝒬^2(t_0) ≤ (1 - ”)^2 and |ℛ(t_0)| ≤ t_0^'. Then for chosen sufficiently small (depending on all of , ', ”) the solution obeys the following bounds for t ∈ [t_b, t_0]. (𝒫 - 1)^2(t) + 𝒬^2(t) ≤ (1 - )^2, |ℛ(t)| ≤ t^'. We proceed via using a continuity / bootstrap argument, with bootstrap assumption which is exactly (<ref>). That is, we assume (<ref>) holds on an interval [t̃, t_0] ⊂ [t_b, t_0], and show that we may in turn improve upon (<ref>) in this interval. For the improvement step, we use an approximate monotonicity property of the ODE system; let 𝒦 be defined by 𝒦 (𝒫 - 1)^2 + 𝒬^2 + ℛ^2. Using (<ref>)–(<ref>), one may show that t ∂_t 𝒦 = ℛ^2 + 2 ℰ_1 (𝒫 - 1) + 2 ℰ_2 𝒬 + 2 ℰ_3 ℛ. Thus using ℛ^2 ≥ 0 and taking (<ref>) as a bootstrap assumption, t ∂_t 𝒦 may be bounded by t ∂_t 𝒦≥ - ( 4(1- ) + 2 t^') t^'. Thus for = (, ', ”) chosen sufficiently small, one may guarantee that ∫^t_0_t∂_t 𝒦(t̃) d t̃≤1/2( ( 1 - + ”2 )^2 - (1 - ”)^2 ). 
Similarly using our assumptions on initial data we may choose sufficiently small such that 𝒦(t_0) ≤ (1 - ”)^2 + 1/2( (1 - + ”2 )^2 + (1 - ”)^2 ). Combining (<ref>) and (<ref>), it is clear that one has 𝒦(t) ≤ (1 - + ”2)^2 < (1 - )^2 for t ∈ [t̃, t_0], thereby improving the first inequality in (<ref>). For the second inequality in (<ref>), from (<ref>) and 𝒫≥ (which follows from the first part of (<ref>)), one has the following differential inequality for ℛ^2: t ∂_t ( ℛ^2 ) ≥ 2 ℛ^2 + 2 ℰ_3 ℛ. Using an integrating factor and the assumptions on ℛ and ℰ_3, we may thus write t ∂_t ( t^- 2 ℛ^2 ) ≥ 2 t^- 2 ( - '). Our assumption at t = t_0 means t_0^- 2 ℛ^2(t_0) ≥^2 t_0^-2 ( - '). Thus for chosen sufficiently small, we integrate the above inequality in the direction of decreasing t and yield that t^-2 ℛ^2 < t^-2( - '). Multiplying both sides by t^2, this is a strict improvement of the second inequality in (<ref>). This completes our continuity argument, completing the proof of Lemma <ref>. §.§ The equations of variation For 0 < t_0 ≤ t_* < 1, let 𝒫, 𝒬, ℛ: [t_b, t_0] → satisfy the assumptions of Lemma <ref>. Further, let ℒ, ℳ, 𝒩: [t_b, t_0] → obey the following ODEs, where |ℰ_i| ≤ t^-1 + 2 ' for i=4, 5, 6: t ∂_t ℒ = 2 𝒬ℳ - 2 ℛ𝒩 + ℰ_4, t ∂_t ℳ = - 𝒬ℒ + (1 - 𝒫) ℳ + ℰ_5, t ∂_t 𝒩 = ℛℒ + 𝒫𝒩 + ℰ_6. For some > 0, impose the following conditions at initial data: |ℒ(t_0)| + |ℳ(t_0)| + |𝒩(t_0)| ≤ t_0^- 1 +. We further assume that 2 ' >. Then for chosen sufficiently small depending on , ', ” and , there exists a constant D > 0 depending on the same parameters such that for all t ∈ [t_b, t_0], one has |ℒ (t)| + |ℳ (t)| + |𝒩 (t)| ≤ D t^- 1 + . We shall rewrite the system (<ref>)–(<ref>) in the following matrix form: t ∂_t [ ℒ; ℳ; 𝒩 ] = [ 0 2 𝒬 - 2 ℛ; - 𝒬 1 - 𝒫 0; ℛ 0 𝒫 ]_𝐋[ ℒ; ℳ; 𝒩 ] + [ ℰ_4; ℰ_5; ℰ_6 ]. The goal is to bound the operator norm of the matrix 𝐋 = 𝐋(r), as a function of t ∈ [t_b, t_0], in such a way that allows one to integrate this equation. Note our operator norm will be with respect to the ℓ^2-norm on ^3, i.e. for a 3 × 3 matrix 𝐌 we write 𝐌_opsup_x∈^3 ∖{0}𝐌x_ℓ^2/x_ℓ^2. In fact, we shall actually estimate the operator norm of 𝐋̃ = [ 0 2 𝒬 - 2 ℛ; - 𝒬 1 - 𝒫 0; ℛ 0 0 ] = 𝐋 + 𝐏, where we note 𝐏 = (0, 0, 𝒫) is a positive definite matrix with respect to the ℓ^2 inner product. Our strategy will be to estimate 𝐋̃_op differently depending on the size of 𝒬(t). The idea is that when 𝒬(t) is small, the largest matrix element of 𝐋̃ will be 1 - 𝒫, which is bounded by 1 - by Lemma <ref>. On the other hand, when 𝒬(t) is not small, we will only have a weaker quantitative bound 𝐋̃_op≤ 10; this will be mitigated using an estimate on the size of the interval for which 𝒬 small is not small. Case 1: |𝒬(t)| ≤1/4 (” - ): Recall from the proof of Lemma <ref>, particularly (<ref>) and (<ref>), that for t ∈ [t_b, t_0] one has (𝒫 - 1)^2 + 𝒬^2 ≤( 1 - + ”/2)^2. In particular, the Hilbert–Schmidt[Recall the Hilbert–Schmidt norm of a 3 × 3 matrix 𝐀 is given by the ℓ^2-norm of all its matrix elements: 𝐀_HS = ( ∑_i, j = 1^3 𝐀_ij^2 )^1/2. ] norm of 𝐋̃ is given by: 𝐋̃_HS^2 = (𝒫-1)^2 + 𝒬^2 + 4 𝒬^2 + 5 ℛ^2 ≤( 1 - + ”/2)^2 + 1/4 (” - )^2 + 5 ℛ^2. Without the 5 ℛ^2 term, the right hand side is strictly less than (1 - )^2. From Lemma <ref>, we have ℛ^2 ≤ t^2 ', and in particular for t < t_* chosen small enough one has 𝐋̃_HS≤ 1 -. Since for any 3 × 3 matrix 𝐌 one has 𝐌_op≤𝐍_HS one concludes that: 𝐋̃(t) _op≤ 1 - whenever |𝒬(t)| ≤1/4 (” - ). 
Case 2: |𝒬(t)| > 1/4(” - ): If one assumes no additional smallness for |𝒬(t)| then one may only use Lemma <ref> to bound the individual matrix elements of 𝐋̃(t). Using Lemma <ref>, we simply crudely bound each nonzero matrix element of 𝐋̃(t) by 4, then proceeding via the Hilbert–Schmidt norm as above one can show 𝐋̃_op≤√(80)≤ 10. As mentioned previously, this will be mitigated using control on the size of the set B = { t ∈ [t_b, t_0] : |𝒬(t)| > 1/4 (” - ) }, at least upon assuming sufficient smallness on and t_*. To justify this, suppose that < 1/64 (” - )^2 and that ℛ^2 ≤ t_*^2 '≤1/64(” - )^2. Then using (<ref>) and the equation (<ref>) for t ∂_t 𝒫, t ∂_t 𝒫≥1/16 (” - )^2 - ℛ^2 - ≥1/32 (” - )^2 while t ∈ B. Furthermore, for t ∉B, from Lemma <ref> one still has t ∂_t 𝒫≥ - C t^' for some C > 0. But from (<ref>), 𝒫(r) is certainly bounded between 0 and 2. So: 2 ≥∫_t^t_0 t ∂_t P (t̃) dt̃/t̃ = ∫_t̃∈ B t ∂_t P(t̃) d t̃/t̃ + ∫_t ∉B t ∂_t P (t̃) d t̃/t̃≥μ(B) ·1/32(” - )^2 - C '^-1 t_*^'. Here μ(B) is the measure of the set B with respect to the measure dt/t. Let us choose t_* small enough so that C '^-1 t_*^'≤ 2. Then collecting all this information, 𝐋̃(t) _op≤ 10, whenever |𝒬| > 1/4(” - ) where B = { t ∈ [t_b, t_0] : |𝒬(t)| > 1/4 (” - ) } has μ(B) ≤ 128 (” - )^-2. We now use (<ref>) and (<ref>) to complete the proof of the lemma. We introduce the further notation: 𝐱 = [ ℒ; ℳ; 𝒩 ], 𝐞 = [ ℰ_4; ℰ_5; ℰ_5 ]. So that we further rewrite the ODE system (<ref>) as t ∂_t 𝐱 = ( 𝐋̃ + 𝐏 ) 𝐱 + 𝐞. Using the positivity of 𝐏, one then shows that 𝐱_ℓ^2^2 = 𝐱^T 𝐱 satisfies t ∂_t 𝐱_ℓ^2^2 ≥ 2 𝐱^T 𝐋̃𝐱 + 2 𝐱^T 𝐞≥ - 2 𝐋̃_op 𝐱_ℓ^2^2 - 2 √(3) t^-1 + 2 '𝐱_ℓ^2. Note the final inequality follows from the definition of the operator norm and using Cauchy–Schwarz on 𝐱^T 𝐞, together with the assumed bounds on the error terms ℰ_i. Therefore t ∂_t 𝐱_ℓ^2≥ - 𝐋̃_op·𝐱_ℓ^2 - √(3) t^-1 + 2 '. The idea is now simply to apply Grönwall's inequality to this differential inequality. Recalling that we always integrate in the direction of decreasing t, the crucial bound will be the following, which follows from (<ref>) and (<ref>): ∫^t_1_t_2𝐋̃(t̃) _op d t̃/t̃ ≤∫_t ∈ [t_2, t_1] ∖ B𝐋̃_op d t̃/t̃ + ∫_t ∈ [t_2, t_1] ∩ B𝐋̃_op d t̃/t̃ ≤∫_t_2^t_1 (1 - ) d t̃/t̃ + ∫_t ∈ B 10 dt̃/t̃ ≤ (1 - ) log( t_1/t_2 ) + 10 μ(B) = (1 - ) log( t_1/t_2 ) + 1280 (” - )^-2. To conclude, integrating (<ref>) and inserting our initial data bounds yields the following integral inequality, where β(t) = √(3) ( t_0^-1 + + (1 - 2 ')^-1 t^-1 + 2 '): 𝐱(t) _ℓ^2≤∫^t_0_t 𝐋̃(t̃) _op·𝐱 (t̃) _ℓ^2 dt̃/t̃ + β(t). Thus Grönwall's inequality in integral form implies that 𝐱(t) _ℓ^2≤β(t) + ∫^t_0_t β(t̃) ·𝐋̃(t̃) _op·exp( ∫^t̃_t𝐋̃(t̃̃̃) d t̃̃̃/t̃̃̃ ) d t̃/t̃. By (<ref>) and the fact 𝐋̃(t) _op≤ 10 holds everywhere, we can bound the integrand here by: β(t̃) ·𝐋̃(t̃) _op·exp( ∫^t̃_t𝐋̃(t̃̃̃) d t̃̃̃/t̃̃̃ ) ≲β(t̃) ·( t̃/t)^1 - ≲( t̃/t_0 t)^1 - + t̃^2 ' - t^-1 + . Therefore, using that 2 ' >, inserting this into the above yields that 𝐱(t) _ℓ^2≲ t^-1 +. By the definition of 𝐱, and since the ℓ^2 and ℓ^∞ norms on ℝ^3 are uniformly equivalent, the lemma follows. § THE STABILITY RESULT §.§ Proof of Theorem <ref> The proof of our stability result Theorem <ref> follows from a boostrap argument. By local existence for the Gowdy wave map system, for initial data as given there exists some t_b ∈ (0, t_0) such that a solution to the Gowdy system (<ref>)–(<ref>) exists in the interval t ∈ [t_b, t_0], which moreover satisfies the bootstrap assumptions (<ref>)–(<ref>). 
We show that assuming this, one may then improve upon the bootstrap assumptions, for instance showing the same (<ref>)–(<ref>) hold with C_* replaced by say C_*/2. By a standard continuity argument, t_b may then be any number in the interval (0, t_0), and the corresponding solution thus obeys (improved) bootstrap assumptions in the whole of t ∈ (0, t_0). We then apply the results of Section <ref> and Section <ref> to conclude. §.§.§ Improving the bootstrap assumptions Let (P, Q) be a solution of the Gowdy symmetric system in our bootstrap region t ∈ [t_b, t_0] with assumptions on initial data as given. Then the results of Section <ref> and Section <ref> all apply, in particular Proposition <ref>, Lemma <ref>, Corollary <ref> and Corollary <ref>. We now proceed in the following steps: Step 1: ODE analysis on timelike curves The first step will be to use Corollary <ref>, Corollary <ref> and the results of Section <ref> to provide L^∞ bounds for the following 6 key quantities: - t ∂_t P, e^P t ∂_θ Q, e^P t ∂_t Q, - t ∂_t ∂_θ P, e^P t ∂_θ^2 Q, e^P t ∂_t ∂_θ Q. To do set, let γ: [t_b, t_0] → (0, +∞) ×𝕊^1 with γ(t) = (t, θ(t)) be a C^1 past-directed timelike curve. One example is a curve of constant θ i.e. with θ'(t) = 0. In view of the metric (<ref>), the timelike character is equivalent to |θ'(t)| < 1. For such a curve, γ, define 𝒫(t) - t ∂_t P(γ(t)), 𝒬(t) e^P t ∂_θ Q(γ(t)), ℛ(t) e^P t ∂_t Q(γ(t)), ℒ(t) - t ∂_t ∂_θ P(γ(t)), ℳ(t) ∂_θ(e^P ∂_θ Q) (γ(t)), 𝒩(t) ∂_θ ( e^P t ∂_t Q) (γ(t)). The initial data assumption (<ref>) means that (𝒫(t_0) - 1)^2 + 𝒬^2(t_0) ≤ ( 1 - ”)^2 and the second assumption (<ref>) means that for ≤ t_*^ - ', one also has ℛ(t_0) ≤ t_0^'. Combining these initial data bounds with Corollary <ref>, it is clear that along the curve γ, 𝒫, 𝒬 and ℛ satisfy the assumptions of Lemma <ref>, with a = dθ/dt. Applying this lemma yields that: (1 - 𝒫(t))^2 + 𝒬^2(t) ≤ (1 - )^2, ℛ(t) ≤ t^'. We also provide bounds for ℒ, ℳ and 𝒩. To do this, we use the ODEs in Corollary <ref>. With error terms bounded as in this corollary, it is evident that the quantities ℒ, ℳ and 𝒩 satisfy the system (<ref>)–(<ref>) in Lemma <ref>. We also need to verify the conditions on initial data for the ODEs. To do so, we use the initial L^2 bounds (<ref>) and (<ref>). It follows from these, Sobolev embedding in 𝕊^1, and also (<ref>)–(<ref>), that one has |∂_θṖ_D(θ)|, |∂_θ (e^P_D t_0 ∂_θ Q_D )(θ)|, |∂_θ (e^P_DQ̇_D)(θ)| ≤ C_𝕊^1, where C_𝕊^1 is a Sobolev constant that is independent of and . Thus one has |ℒ(t_0)|, |ℳ(t_0)|, |𝒩(t_0)| ≤ C_𝕊^1≤ t_0^-1 +, so long as t_0 ≤ t_* is chosen small enough to absorb the C_𝕊^1. Thus applying Lemma <ref>, we deduce that for some constant D > 0 depending on , etc but not C_*, one bounds |ℒ(t)| + |ℳ(t)| + |𝒩(t)| ≤ D t^-1 + . Step 2: Improving the bootstrap assumptions (<ref>) and (<ref>) Using (<ref>), and allowing γ to vary over all timelike curves, in particular all curves with constant θ-coordinate, we find that |t ∂_t P(t, x)| ≤ 2 - and |e^P t ∂_θ Q(t, x)| ≤ 1 - for all (t, x) ∈ [t_b, t_0] ×𝕊^1. Thus choosing C_* ≥ 2(2 - ), one easily improves upon (<ref>). Note that since 𝒫 cannot change sign, (<ref>) actually yields - t ∂_t P(t, x) ≥. Similarly using (<ref>), |e^P t ∂_t Q(t, x)| ≤ t^', so the first part of (<ref>) is improved for C_* ≥ 2. For the second part of (<ref>), we simply use: e^-P(t, x) ≤ e^-P(t_0, θ) ·exp ( - ∫^t_0_t (- t ∂_t P)(t̃, θ) d t̃/t̃ ) ≤ e^-P_D(θ) exp( log( t/t_0 )), where in the second step we used that - t ∂_t P(t, x) ≥. 
Using the initial data assumption (<ref>), we thus have e^P (t, x) ≤ t^. Since ' < and t ≤ 1, choosing C_* ≥ 2 also improves the second part of (<ref>). Step 3: Improving the bootstrap assumptions (<ref>) and (<ref>) We shall improve these using (<ref>). Allowing γ to vary over all timelike curves, for all (t, θ) ∈ [t_b, t_0] ×𝕊^1: |t ∂_t ∂_θ P(t, θ)| ≤ D t^-(1 - ), |∂_θ(e^P t ∂_θ Q)(t, θ)| ≤ D t^- (1 - ), |∂_θ( e^P t ∂_t Q)(t, θ)| ≤ D t^ - (1 - ). We now integrate the first inequality in (<ref>). Using (<ref>) and Sobolev embedding to bound |∂_θ P(t_0, θ)|, we deduce that |∂_θ P(t, θ)| ≤ D t^-(1 - ) also, where D is modified appropriately. We now expand the second and third inequalities in (<ref>), yielding that |e^P t ∂_θ^2 Q| ≤ D t^- (1 - ) + |∂_θ P| · |e^P t ∂_θ Q|, |e^P t ∂_t ∂_θ Q| ≤ D t^- (1 - ) + | ∂_θ P | · |e^P t ∂_t Q|. Now using the above bound for |∂_θ P| and the bounds from Step 2, by modifying D appropriately we also have |e^P t ∂_θ^2 Q|, |e^P t ∂_t ∂_θ Q| ≤ D t^- (1 - ). Choosing C_* ≥ 2D, we thereby improve the remaining bootstrap assumptions. §.§.§ Completion of the proof To complete the proof, it remains to verify the bounds (<ref>) and (<ref>), and finally prove the statements contained in the final paragraph of Theorem <ref>. The former two bounds follow immediately from (<ref>) and Proposition <ref> respectively. To show that -t ∂_t P(t, θ) converges pointwise as t → 0, note that this is equivalent to showing that 𝒫(t), as defined in (<ref>), has a limit as t → 0, for curves γ of constant θ-coordinate. From Step 1, we know that 𝒫 solves an ODE as in Lemma <ref>, namely t ∂_t 𝒫 = 𝒬^2 - ℛ^2 + ℰ_1, where |ℰ_1| ≲ t^'. By (<ref>), one also has ℛ^2 ≲ t^2 '. We can integrate the above and write: 𝒫(t) - 𝒫(t_0) + ∫^t_0_t (ℰ_1 - ℛ^2) dt̃/t̃ = - ∫^t_0_t𝒬^2 dt̃/t̃. By the lower bound for 𝒫(t) in (<ref>) and the aforementioned bounds for ℰ_1 and ℛ implying a lower bound for the LHS, it is clear that the right hand side attains a limit as t → 0. Thus the quantity 𝒫(t) also attains a limit as t → 0. Finally, we show that Q(t, θ) and ∂_θ Q (t, θ) converge uniformly as t → 0, where for the latter statement we require > 1/3. Starting with Q(t, θ), we multiply (<ref>) by t^-, for some > 0 to be determined: t ∂_t (t^- t ∂_t Q) = (- 2 t ∂_t P - ) · t^- t ∂_t Q + t^2 - ∂_θ^2 Q + 2 t^2 - ∂_θ P ∂_θ Q. We shall choose < '. Therefore, by (<ref>), we have - 2 ∂_t P - > 0, while by Lemma <ref> and Lemma <ref>, we can bound the final two terms on the right hand side by |t^2 - ∂_θ^2 Q| + |t^2 - ∂_θ P ∂_θ Q| ≲ t^2 - - 2(1 - ') = t^2 ' - . Since 2 ' - ≥' > 0, this is therefore integrable with respect to dt/t as t → 0. Therefore integrating (<ref>) yields that t^- t ∂_t Q is uniformly bounded in the region 0 < t ≤ t_0. This means that: Q(t, θ) = Q(t_0, θ) - ∫_t^t_0 (t^- t ∂_t Q(t̃, θ)) t̃^-1 + d t̃. indeed converges uniformly as t → 0 to some q(θ), and |Q(t, θ) - q(θ)| ≲ t^. To prove a similar statement for ∂_θ Q, we need to differentiate (<ref>) in θ, t ∂_t (t^- t ∂_t ∂_θ Q) = (- 2 t ∂_t P - ) · t^- t ∂_t ∂_θ Q - 2 t ∂_t ∂_θ P · t^- t ∂_t Q + t^2 - ∂_θ^3 Q + 2 t^2 - ∂_θ P ∂_θ^2 Q + 2 t^2 - ∂_θ^2 P ∂_θ Q. To bound the last term on the first line, we use Step 2. In other words, we use the estimates (<ref>), which together yield t ∂_t Q ≲ t^2 '. Therefore, using also Lemma <ref> we get that: |t ∂_t ∂_θ P · t^- t ∂_t Q| ≲ t^-(1 - ')· t^2 ' - = t^3 ' - 1 - . 
For the second line of (<ref>), we again combine Lemma <ref> and Lemma <ref>, to get |t^2 - ∂_θ^3 Q| + |t^2 - ∂_θ P ∂_θ^2 Q| + |t^2 - ∂_θ^2 P ∂_θ Q| ≲ t^2 - - 3(1- ') = t^3' - 1 - . For > 1/3, we may choose ' > 1/3 also and then choose < 3' - 1. For this choice, - 2 t ∂_t P - > 2 ' - (3 ' - 1) = 1 - ' ≥ 0, so the first term on the right hand side of (<ref>) can be ignored, and by integrability of t^3 ' - 1 - with respect to dt/t one yields that t^- t ∂_t ∂_θ Q is uniformly bounded in the region 0 < t ≤ t_0. One now concludes as in the previous case, and one yields: |∂_θ Q(t, θ) - ∂_θ q (θ) | ≲_ t^, for any < 3 ' - 1. § BKL BOUNCES §.§ Proof of Theorem <ref> Under the assumptions of Theorem <ref>, we may apply the stability result Theorem <ref>; in particular Corollary <ref> applies, and the ODEs (<ref>) follow exactly as in the proof of Theorem <ref> – note that the term involving ℛ can now be ignored since we have ℛ(t) ≲ t^' from (<ref>). It remains to prove the various convergence results stated in Theorem <ref>. The convergence of 𝒫_γ(t) to some 𝒫_γ, ∞ = V(θ_0) follows in the same manner to the pointwise convergence of -t ∂_t P(t, θ) in the proof of Theorem <ref>. To show the convergence of 𝒬_γ(t) to 0, we first use that from the first ODE in (<ref>) and the convergence of 𝒫_γ, the integral ∫_0^t_0𝒬_^2(t̃) d t̃/t̃ is finite, or in other words 𝒬_^2(t) converges to 0 in an averaged sense. In particular there is a sequence {t_k} with t_k → 0 such that 𝒬_γ^2(t_k) → 0 as k →∞. To upgrade this sequential convergence to convergence of 𝒬_γ(t), we use the second equation in (<ref>), or more precisely the ODE t ∂_t 𝒬_^2 = 2 𝒬_γ^2 (1 - 𝒫_γ) + 2 ℰ_𝒬𝒬_. Integrating this gives that for any 0 < t ≤ t_k ≤ t_0, we have: |𝒬_γ^2(t) - 𝒬_γ^2(t_k)| ≤ C ∫^t_k_0 𝒬_γ^2(t̃) d t̃/t̃ + C ∫^t_k_0 t̃^'dt̃/t̃. By the finiteness of the integral (<ref>), the right hand side is finite, and in fact converges to 0 as k →∞. Since we already know 𝒬_γ^2(t_k) → 0, this implies that 𝒬_γ^2(t), and therefore also 𝒬_γ(t), converges to 0 as t → 0. Moving onto (i), it will be necessary to use the conserved quantity 𝒦 encountered in the proof of Lemma <ref>. Recall from there that defining 𝒦_γ = (𝒫_γ - 1)^2 + 𝒬_γ^2, one can show | t d/dt𝒦_γ| ≲ t^'. Using the convergence of 𝒫_γ and 𝒬_γ, it holds that 𝒦_γ(t) →𝒦_γ, ∞ (𝒫_γ, ∞ - 1)^2 as t → 0, and moreover from the above that |𝒦_γ(t) - 𝒦_γ, ∞| ≲ t^'. Inserting t = t_0, one has: | ( 𝒫_γ(t_0) - 1)^2 - (𝒫_γ, ∞ - 1)^2 | ≲ t_0^' + 𝒬^2_γ(t_0). In (i), we assumed that ∂_θ Q(γ(t)) did not converge to 0 as t → 0. However, since 𝒬_γ(t) = e^P t ∂_θ Q(γ(t)) does converge to 0, this implies that e^P(γ(t)) t → 0 as t → 0. Now simply note that t d/dtlog( e^P(γ(t)) t) = 1 - 𝒫_γ(t) → 1 - 𝒫_γ,∞ as t → 0, and thus in order for e^P(γ(t)) t → 0 we must have 𝒫_γ, ∞≤ 1. Combining this with (<ref>) yields the estimate (<ref>). (For instance, one uses that if |x^2 - y^2| ≤ Z with x, y ≥ 0 then |x - y| ≤ Z^1/2.) For (ii), if > 1/2 and ∂_θ Q(γ(t)) converges to 0, it will be helpful to use (<ref>) from the proof of Theorem <ref>, or rather its generalization to |∂_θ Q(γ(t)) - ∂_θ q (θ_0) | ≲_ t^, for any < 3 ' - 1. Note that this generalization, where γ is allowed to be any timelike curve rather than only a constant θ-curve, is proved easily using previous methods. In (ii), we may insert ∂_θ q(θ_0) = 0, and therefore |∂_θ Q(γ(t))| ≲ t^. Furthermore, from -t ∂_t P ≤ 2 - it holds that e^P ≤ t^-2 +, therefore one has |e^P t ∂_θ Q(γ(t))| ≲ t^- 1 + +. 
But by the definition of 𝒬_γ(t), this means that |𝒬_γ(t)| ≲ t^- 1 + +. Note that so long as > 1/2, ' < and < 3 ' - 1 may be chosen such that - 1 + + > 0. In the case of (ii), we therefore have from (<ref>) that |t ∂_t 𝒫_γ(t) | ≲ t^2(- 1 + + ) + t^'. For convenience, let use choose σ = ' - + 1/2, where > ' > 3/2 -. Then t^2(-1 + + ) = t^2 ' - 1, and integrating the above immediately yields (<ref>). §.§ Proof of Corollary <ref> Lastly, we shall apply our Theorems <ref> and <ref> to prove the stability / instability corollary. Let P(t, θ) be as stated. By Theorem <ref>, for all k ∈ℕ the convergence - t ∂_t P(t, θ) → V(θ) holds in the C^k norm. Since we are assuming 0 < V(θ) < 2, since θ∈𝕊^1 it holds that there exists ∈ (0, 1) such that < V(θ) < 2 -. In fact, by the above convergence we can find , ', ” and t_0 > 0 such that for N chosen (depending on ) as in Theorem <ref>, we have that for 0 < t ≤ t_0 and all θ∈𝕊^1: 0 < ' < < ” < - t ∂_t P (t, θ) < 2 - ” < 2 - < 2 - ' < 2, and there exists some > 0 so that for all 0 < t ≤ t_0: ℰ(t) = 1/2∑_K = 0^N ∫_𝕊^1( (t ∂_t ∂_θ^K P)^2 + t^2 (∂_θ^K+1 P)^2 + t^2 (∂_θ^K P)^2 ) dθ≤/2. (This follows because in fact ℰ(t) →1/2∑_K=0^N ∫ (∂_θ^K V)^2 dθ as t → 0.) We note that t_0 above may differ from the initial data time t_1 > 0 in the statement of Corollary <ref>. This is mitigated a standard Cauchy stability argument, for any ε_0 > 0 there exists ε > 0 such that a perturbation of size ε at t = t_1 implies a perturbation of size ε_0 at t = t_0 i.e. (P̃_D, Q̃_D, Ṗ̃_D, Q̃̇̃_D) - (P_D, 0, Ṗ_D, 0) _(H^N+1)^2 × (H^N)^2≤ε implies (P̃(t_0, θ), Q̃(t_0, θ), t ∂_t P̃(t_0, θ), t ∂_t Q̃(t_0, θ) ) - (P(t_0, θ), 0, t ∂_t P(t_0, θ), 0)_(H^N+1)^2 × (H^N)^2 ≤ε_0. Due to this we may instead consider perturbations of initial data as perturbations at time t = t_0. Thereby by choosing ε (and thus ε_0) small enough one can guarantee that (1 + t ∂_t P̃)^2 (t_0, θ) + (e^P̃ t_0 ∂_θP̃)^2 (t_0, θ) ≤ (1 - ”)^2, |e^P̃ t ∂_t Q̃(t_0, θ)| ≤ t_0^, e^P̃(t_0, θ) ≤ t_0^ as well as the energy bound 1/2∑_K=0^N∫_𝕊^1( (t ∂_t ∂_θ^K P̃)^2 + t^2 (∂_θ^K+1P̃)^2 + t^2 (∂_θ^K P̃)^2 + e^2P̃ (t ∂_t ∂_θ^K Q̃)^2 + e^2 P̃ (t ∂_θ^K+1Q̃)^2 ) d θ≤. That is, the perturbed data at t = t_0 satisfies the assumptions (<ref>)–(<ref>). Moreover, t_0 can be chosen small enough so that 0 < t_0 ≤ t_* = t_*(, ), so that one may apply Theorems <ref> and <ref>. The remainder of the argument is then a direct application of these theorems. Applying Theorem <ref> to the perturbed data, it is clear that -t ∂_t P̃(t, θ) converges to some Ṽ(θ) pointwise, with ≤Ṽ(θ) ≤ 2 -. In the setting where 1/2 < V(θ) < 3/2, we are allowed to in fact choose > ' > 1/2. By the final statement of Theorem <ref>, ∂_θQ̃→∂_θq̃(θ) uniformly as claimed. We next apply Theorem <ref>. In the case that ∂_θq̃(θ_0) ≠ 0, Theorem <ref>(i) implies that for γ the timelike curve with constant θ = θ_0, Ṽ(θ_0) = 𝒫̃_γ, ∞ = min{ - t ∂_t P̃(t_0, θ_0), 2 + t ∂_t P̃(t_0, θ_0) } + O(t_0^'/2) + O( e^P̃ t_0 ∂_θQ̃(t_0, θ_0)) = min{ - t ∂_t P(t_0, θ_0), 2 + t ∂_t P(t_0, θ_0)} + O(t_0^'/2) + O(ε_0) = min{ V(θ_0), 2 - V(θ_0) } + O(t_0^'/2) + O(ε_0). Since we have license to choose t_0 and ε_0 small depending on ε̃, Corollary <ref>(i) follows. Similarly, in the case that ∂_θq̃(θ_0) = 0, Theorem <ref>(ii)implies that for the same timelike curve γ, Ṽ(θ_0) = 𝒫̃_γ, ∞ = - t ∂_t P̃(t_0, θ_0) + O(t_0^2 ' - 1) = - t ∂_t P(t_0, θ_0) + O(t_0^2 ' - 1) + O(ε_0) = V(θ_0) + O(t_0^2 ' - 1) + O(ε_0). Again choosing t_0 and ε_0 appropriately small yields Corollary <ref>(ii). abbrvnat_mod
http://arxiv.org/abs/2408.11249v1
20240820235951
The Dilemma of Uncertainty Estimation for General Purpose AI in the EU AI Act
[ "Matias Valdenegro-Toro", "Radina Stoykova" ]
cs.AI
[ "cs.AI", "cs.CY" ]
[ The Dilemma of Uncertainty Estimation for General Purpose AI in the European Union Artificial Intelligence Act equal* Matias Valdenegro-Toroequal,rugfse Radina Stoykovaequal,ruglaw rugfseDepartment of AI, University of Groningen ruglawTransboundary Legal Studies, University of Groningen Matias Valdenegro-Torom.a.valdenegro.toro@rug.nl Radina Stoykovar.stoykova@rug.nl Machine Learning, ICML, Artificial Intelligence Act (AI act), Accuracy, Uncertainty Estimation, General Purpose AI Models, Systemic Risk 0.3in ] § ABSTRACT The AI act is the European Union-wide regulation of AI systems. It includes specific provisions for general-purpose AI models which however need to be further interpreted in terms of technical standards and state-of-art studies to ensure practical compliance solutions. This paper examines the AI act requirements for providers and deployers of general-purpose AI and further proposes uncertainty estimation as a suitable measure for legal compliance and quality assurance in training of such models. We argue that uncertainty estimation should be a required component for deploying models in the real world, and under the EU AI Act, it could fulfill several requirements for transparency, accuracy, and trustworthiness. However, generally using uncertainty estimation methods increases the amount of computation, producing a dilemma, as computation might go over the threshold (10^25 FLOPS) to classify the model as a systemic risk system which bears more regulatory burden. § INTRODUCTION The AI act (AIA) is the first comprehensive regulation of AI systems in the European Union, that was formally signed in June 2024 <cit.>. It is expected to enter into force in 2025. Specific attention in the negotiations for AIA was given to transparency and model evaluation obligations for providers and deployers of general-purpose AI models (GPAI). The legislator considered that GPAI can have significant risks to society and fundamental rights. When such models under perform this can lead to negative consequences for individuals which vary from enforcing stereotypes in society to triggering legal consequences and public safety concerns. For example, Chat GPT was often discussed in terms of gender and racial bias, <cit.> as well as its inability to filter potentially dangerous prompts for mixing poisons or explosives. A notorious case in US involved an attorney who used Chat GPT to prepare a filing for a civil case and ended up citing non-existing case law due to hallucinations in the model. <cit.> In another example, a Canadian airline company was forced to honor a refund policy which was hallucinated by the company`s chat bot. <cit.> Therefore, the AI act specifies concrete transparency documentation and model evaluation requirements for GPAI with particular focus on metrics to evaluate the model such as accuracy and performance metrics, quality of datasets assurance, and robustness against errors. Apart from these general accountability requirements, the AIA relies on multi-stakeholder cooperation between industry, academia, and standardisation bodies to establish concrete standards, technical specifications, and best practices for testing and evaluation of AI systems which will support the implementation of the AIA. In this paper, we propose uncertainty estimation as a standard measure for GPAI. 
We argue that to enable general-purpose AI (GPAI) models evaluation and human oversight and to ensure legal compliance, it would be useful for providers and deployers of AI systems to be informed on the model confidence in output. For a human it is natural that she can express full confidence, partial confidence, or reply with a simple "I don't know" <cit.>. Similarly, it should be expected that AI models can perform the task of confidence estimation themselves as well, as this information is useful for developers and deployers to give a weight to AI model responses, and as a proxy measure about really trusting the prediction, same as with other human-generated opinions and documents. As development of trustworthy AI models is a core principle in AI act, this paper proposes and studies the feasibility of uncertainty estimation as a mandatory component of training and evaluation of AI models, as it is currently not widely adopted and AI model developers do not generally build models with these advanced capabilities. However, GPAI providers might be reluctant to implement such measure as its use during training will increase the amount of computation for the model development which will require compliance with the more stringent regime of GPAI with systemic risk. Therefore, we further examine the benefits, computational costs, and limitations of the measure as a GPAI quality benchmark. This paper is structured as follows: Section 1 introduced the topic, while Section 2 focuses on background and methods for uncertainty estimation in AI. Section 3 summarises the most important legal provisions for regulation of GPAI. In Section 4 we discuss the feasibility of uncertainty estimation as a benchmark for quality assurance and legal compliance with the AIA regulation, while Section 5 provides discussion and conclusions. § UNCERTAINTY IN MACHINE LEARNING The power of GPAI models is that they can make predictions of many kinds, but these predictions are only beneficial if they are approximately correct <cit.>. Incorrect predictions, or popularly known as "hallucinations" are defined as "generated content that is nonsensical or unfaithful to the provided source content" <cit.> indicating that they are simply wrong outputs. <cit.> Hallucinations result from data or modelling problems and are not useful predictions to a human given some context or prompt. Determining truth in AI models is very difficult as these models are not trained to produce an objective "truth", but to reproduce tokens from the training set, which make more or less meaningful answers, but there are no guarantees for correctness. The overall concept of estimating AI confidence is the field of uncertainty estimation in machine learning, and there are many techniques for this purpose, relying on different assumptions <cit.>. The overall issue with this field is that estimating AI model confidence usually requires additional computational resources, and it needs to be explicitly considered during the training process. Large AI models like Large Language Models and Vision-Language Models often do not have proper confidence estimation capabilities <cit.>, by outputting confidences that are not a reflection of true confidence, as correct and incorrect answers have similar high confidences, and this prevents discrimination of correct and incorrect predictions <cit.> <cit.>. 
The overall concept of confidence estimation requires that incorrect predictions have lower confidence than correct predictions, ideally with incorrect predictions having 0% confidence, and correct predictions having 100% confidence. §.§ Methods for Uncertainty Estimation Methods to estimate uncertainty for machine learning models can be broadly divided into two categories: direct methods like ensembles that directly provide uncertainty estimates, and sampling methods like MC-Dropout, where forward passes of the model correspond to samples of a posterior probability distribution. In both kinds of methods, samples or forward passes are combined to build an output probability distribution. μ(x) = M^-1∑^M_i model_i(x) σ^2(x) = M^-1∑^M_i [model_i(x) - μ(x)]^2 Where M is the number of forward passes or models in the ensemble, and model_i represents the predictions of the i-th model in the ensemble or the i-th forward pass sample. The variance of the predicted probability distribution σ^2(x) is a measure of uncertainty, the larger the variance, the more uncertain the prediction is, and more likely to be incorrect. The mean of the predicted probability distribution μ(x) corresponds to the combined prediction that is given to the end user. Direct Methods. The most popular method is Ensembles, where any neural network is trained M times on the same dataset, and due to random weight initialization, the model converges to different weights. At inference time, each model in the ensemble (the model_i) makes a prediction and they are combined using Eq <ref> and <ref>. A typical value is M = 5. Sampling Methods. Monte Carlo Dropout is a popular sampling technique, where Dropout layers are inserted in the neural network architecture, but these layers are active both during training and inference, and the random neuron dropping effect of Dropout is enabled when making predictions, producing stochastic outputs. The output distribution is reconstructed using Eq <ref> and <ref> via M forward passes of the network, with a typical value M ∈ [10, 50]. Other Methods. There are methods that use a single network architecture, avoiding the need for ensembles of multiple networks or costly sampling. Deterministic Uncertainty Quantification (DUQ) uses a radial basis function output layer to encode per-class centroid <cit.>, while Deep Deterministic Uncertainty uses an lipschitz regularized ResNet that preserves distances to enable feature space density estimation <cit.>. The disadvantages of these methods is that they are not general and make assumptions, for example only being defined for classification tasks, and still they require a slight increase (∼ 10%) in computation. §.§ Computational Requirements In the previous section, we argued that using uncertainty estimation methods requires changes to the training process of the model, but more fundamentally, it also changes the prediction process. Additional computation in the form of ensemble models or multiple forward passes are often required, increasing the computational costs of applying uncertainty estimation methods to machine learning models, in comparison with not applying these techniques. To make predictions with uncertainty, multiple forward passes or multiple models are required, which increases their computational cost linearly as a function of M, compared over the original single model. A typical value for ensemble models is M = 5. The selection of M provides a trade-off between uncertainty quality and computational requirements. 
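To make this trade-off concrete, the following is a minimal, framework-agnostic sketch (the function and variable names are illustrative, not part of any existing API) of how the predictive mean μ(x) and variance σ^2(x) of the equations above can be computed from M ensemble members or M stochastic forward passes. The loop over members makes the linear dependence of inference cost on M explicit.

```python
import numpy as np

def predict_with_uncertainty(members, x):
    """Combine M models (ensemble) or M stochastic forward passes (MC-Dropout).

    members: list of M callables, each mapping an input to an output array
             (e.g. class probabilities).
    Returns (mu, var): the combined prediction mu(x) shown to the user and the
    per-output variance sigma^2(x) used as the uncertainty estimate.
    """
    preds = np.stack([m(x) for m in members])  # M forward passes -> cost grows linearly in M
    mu = preds.mean(axis=0)                    # mu(x): combined prediction
    var = preds.var(axis=0)                    # sigma^2(x): spread across members
    return mu, var

# Toy usage with M = 5 randomly perturbed predictors standing in for ensemble members.
rng = np.random.default_rng(0)
members = [lambda x, b=rng.normal(scale=0.1): x.mean() + b for _ in range(5)]
mu, var = predict_with_uncertainty(members, np.ones(4))
print(mu, var)  # a large variance flags a prediction that should not be trusted
```

Doubling M doubles the number of forward passes in the list comprehension above; this is the linear cost increase referred to earlier and the source of the trade-off discussed next.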
More computation allows for more forward passes or models (larger M) and better uncertainty quality, but this can become computationally expensive to compute. Figure <ref> shows this concept, where uncertainty quality indicates the ability to separate correct from incorrect predictions in various settings. Another disadvantage of uncertainty estimation methods is, since they change the training process, sometimes depending on the method, performance on the task itself (classification or regression) can change, either decreasing or increasing. § GPAI IN THE CONTEXT OF THE EUROPEAN ARTIFICIAL INTELLIGENCE ACT According to Art 3 (63) AIA, an AI model is defined as general purpose if: (1) it is trained with a large amount of data (2) uses self-supervision at scale (3) displays significant generality and (4) is capable of competently performing a wide range of distinct tasks. Currently this definition encompasses two types of AI models: generative AI and foundation models. Generative AI refers to deep learning that generates content like text, video, images or code depending on the provided input. Foundation models on the other hand are general purpose or widely applicable models for many tasks, which does not necessarily imply generating data. In addition, if GPAI is used in specific sectors or for the tasks listed in Art. 6 and Annex III e.g., education, law enforcement, employment, the GPAI models may be classified as high-risk AI by themselves or as component of other high-risk AI system. The use of GPAI in high-risk systems is a separate compliance issue that needs to be discussed in detail. Nevertheless, we examine the stringent regime for GPAI classified or part of high-risk AI system only in the context of uncertainty estimation and its feasibility as legal compliance measure. This is desirable also because the AIA recommends voluntary application of some or all of the mandatory requirements applicable to high-risk AI systems. Further the AI act classifies two groups of GPAI models based on compute threshold: GPAI and GPAI with systemic risks. The GPAI is considered with systemic risk or high impact capabilities if the cumulative amount of compute used for model`s training is greater than 10^25 floating point operations (FLOPs) (Art. 51 (2) AIA). In this paper we focus on interpreting the new regulatory requirements for GPAI and specifically on large language models like ChatGPT v.4 or multi-modal models (audio, video, text, etc) which fit the definition of GPAI with systemic risk. The AI act specifies concrete transparency documentation and model evaluation requirements for providers and deployers of general-purpose AI models (GPAI) in Art. 53 with specific focus on metrics to evaluate the GPAI model such as accuracy and performance evaluation metrics, quality of datasets assurance, and robustness against errors (explicitly listed in Annex XI AIA). Moreover, the AIA requires human oversight measures, which enable humans to interpret the AI system output and if needed to intervene in order to avoid negative consequences or risks, or stop the system if it does not perform as intended. This can significantly improve their integration in AI systems for specific tasks as it requires also close cooperation with downstream providers (those who implement the GPAI model in their own AI systems). GPAI with systemic risk is a model that has high-impact capabilities in the sense that it is characterised with unpredictability, emerging capabilities, and continuous learning. 
Therefore, the legislator considers that GPAI with systemic risk can have actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, which can be propagated at scale (Art. 3(65) AIA). Providers of such models must follow a stringent regime with additional obligations to perform mandatory model evaluation and adversarial testing according to specific standards, which ensure assessment and mitigation of systemic risks (Art. 55-56 AIA). The newly established AI Office is a body that will facilitate the development of standards and codes of practice. Therefore, research and discussion on what practical measures are necessary to assess the risks of GPAI is of crucial importance in this regulatory initiative. Despite the strict legal regime, existing guidelines on risk management in GPAI report issues with high uncertainty and a lack of standards to mitigate it <cit.>, while an initial assessment by Stanford of the most common generative AI models shows that they suffer from data quality and governance issues, lack of transparency, and low robustness. <cit.>
§ FEASIBILITY OF UNCERTAINTY ESTIMATION AS A MEASURE FOR AIA COMPLIANCE
The AIA is a framework law, as it provides for a general accountability regime for AI systems but relies on industry, researchers and other stakeholders to develop further best practices and standards for operationalization of the AIA in the concrete domain and type of AI system. In the following, we examine firstly how uncertainty estimation can benefit compliance with the high-risk AIA requirements, and secondly its feasibility as a quality assurance measure for GPAI and GPAI with systemic risk.
§.§ Uncertainty Estimation to Support High-risk GPAI Assessment
GPAI models that are implemented in AI systems for domains and tasks that present a high risk to the safety and fundamental rights of individuals (see Art. 6 and Annex III) are obliged to comply with the high-risk requirements in Chapter II of the act, summarized in Table <ref>.
§.§.§ Supporting Risk Management
The first requirement for high-risk GPAI is to be accompanied by a risk management system that is maintained and updated throughout the entire GPAI life cycle. Such a system should encompass two types of measures: (i) for the identification, analysis, and mitigation of foreseeable risks (Art. 9 (2-5)); and (ii) for the testing of the most appropriate risk management measures (Art. 9 (6-8)). Providers and deployers of GPAI should always consider as a foreseeable risk that the GPAI model can make incorrect predictions. GPAI uncertainty estimation can support this objective, as it provides a way to identify and record hallucinations and incorrect predictions by providing a threshold on model confidence to separate correct from incorrect predictions. Further, the predictions that are below this threshold can be analysed to identify the origin of the errors and possible mitigation strategies. As the measure shows the probability of the prediction being correct, it also makes it possible to detect possible misuse of the system. For example, while currently ChatGPT can be tricked into generating fake news, with uncertainty estimation the model would provide evidence that the output might be incorrect. Interestingly, the legislator explicitly stated in recital 65 that addressing foreseeable misuse of an AI system should not require specific additional training.
To the contrary, to the best of our knowledge additional training is always required when misuse or risk mitigation strategies are employed. §.§.§ Improvement of Dataset Governance Current practices of adding just more data to train GPAI models were efficient in improving performance of the model to a certain extent, but eventually, if such data is of poor quality, the model degrades over time. Therefore, the AIA considers that the performance of GPAI models depends on the quality of the datasets used for training, validation, and testing. Art. 10 AIA defines concrete data management practices and stringent requirements for quality and relevance of datasets. The origin of data, relevance, representativeness, and data preparation techniques must be clearly stated, while mitigation of errors or biases in the data should be demonstrated. However, a preliminary EU study concluded that compliance with those requirements might be challenging in practice, as currently there are no universally agreed standards for dataset quality assurance, while the quality of data is domain and AI system specific. <cit.> Uncertainty measures for GPAI can assist in partly fulfilling data governance requirements. They improve the quality of the training process by allowing the system to detect and report inaccurate predictions (low confidence) by itself. In this sense, the uncertainty estimation measure makes it possible to minimise the negative effect of low-quality data on the system's output, to trace the reasons for high uncertainty, and to curate the datasets further. Data uncertainty (also known as aleatoric uncertainty) in labels can be estimated by a model if trained with an appropriate setup, and then the model can report data (aleatoric) and model (epistemic) uncertainty separately <cit.>, which have different meanings. High model uncertainty reports gaps in the training set and inputs far from the training set distribution, while high data uncertainty reports problems with labels, such as ambiguous or incorrect labels. §.§.§ Enabling Documentation, Transparency, and Human Oversight Art. 11 and 12 of the AIA require sufficient technical documentation and record keeping, which are both measures to enable more transparency and human oversight in high-risk GPAI. Annex IV AIA specifies concrete requirements for technical documentation, where an uncertainty estimation measure can be used to satisfy several of them as follows: * (1)(b) how the AI system can be used to interact with * (1)(c) the computational resources used to develop, train, test and validate the AI system * (1)(e) the technical measures needed to facilitate the interpretation of the outputs of AI systems * (1)(f) the technical solutions adopted to ensure continuous compliance of the AI system with the AIA Confidence level estimation for GPAI can assist providers and deployers in understanding and assessing the reliability of the GPAI output (Art. 14 AIA). In particular, such a measure allows finding low-confidence answers that are likely to be incorrect or that present inputs that were unexpected during training, which can then be logged and used to improve the system in the next iteration.
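To make this logging and review workflow concrete, the following minimal Python sketch (illustrative only; the threshold value, the two-class example and the function name are our own placeholders, not part of the AIA or of any particular GPAI system) shows how per-sample confidence can be used to route low-confidence outputs to logging and human review:

```python
import numpy as np

def flag_low_confidence(probs, threshold=0.7):
    """Split predictions into 'auto-accept' and 'human review' buckets.

    probs: array of shape (n_samples, n_classes) with predictive
    probabilities, e.g. averaged over M stochastic forward passes
    (deep ensemble or MC dropout). The threshold is a hypothetical
    value that would have to be calibrated per task and domain."""
    confidence = probs.max(axis=1)                            # per-sample confidence proxy
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)    # alternative uncertainty proxy
    review = confidence < threshold                           # low confidence -> log + human oversight
    return confidence, entropy, review

# Example: three samples, probabilities already averaged over M forward passes.
probs = np.array([[0.95, 0.05], [0.55, 0.45], [0.80, 0.20]])
conf, ent, review = flag_low_confidence(probs)
print(review)   # [False  True False] -> the second sample is logged for review
```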
§.§.§ A Measure to Assess GPAI Accuracy and Robustness Against Errors The AIA requires accuracy and robustness measurements as well as continuous performance evaluation of the AI system in order to ensure resilience against limitations of the AI system such as errors, faults or inconsistencies, and sufficient transparency information for deployers of AI systems regarding such limitations (see Art. 15 (3) and Art. 13 (3)(b)(ii)). The desired level of accuracy depends on the domain and the level of error tolerance for the specific task. For example, medical AI applications need to have high accuracy across multiple population groups and proper confidence estimation for physicians to trust predictions and take a deeper look at low-confidence predictions, while leisure applications need not have high accuracy, as they operate in a low-stakes setting. A limitation of the currently used performance metrics for GPAI is that they report overall model accuracy, but do not account for the nature, origin, or severity of the reported error rates. One very clear example is in face recognition algorithms <cit.>, where performance in terms of accuracy decreases significantly for darker skin tones, as they are less prevalent in the training set. Without proper validation, biases in a model can go unchecked. Uncertainty estimation can be used as a per-sample proxy for the accuracy metric for GPAI as well as a source to examine the nature and severity of errors. For example, in the case of face recognition, when the model makes incorrect predictions, these should have a high uncertainty or low confidence, and this should be examined by a human, by setting a threshold on uncertainty. An important case is hallucinations, which should also be predictions with low confidence, which can then be logged, and a GPAI system can even refuse to produce an answer instead of showing a hallucination to the end user. §.§ Limitations The use of uncertainty estimation methods comes with many limitations. In general, there are no guarantees on the quality of uncertainty estimates <cit.>, meaning that incorrect predictions can still have high certainty, and there is much research on improving the calibration of machine learning models. Part of our main argument is that GPAI systems with uncertainty estimation require more energy use, during both training and at prediction time, and this is a major limitation, as the public and regulators would like to reduce the energy consumption of GPAI systems. Finally, uncertainty estimation methods do not directly address bias in datasets or GPAI systems, which is introduced indirectly by humans, and for this purpose other methods must be used, coming from the literature on fairness of machine learning algorithms. § DISCUSSION AND CONCLUSIONS The new AI Act is a brave first step towards a comprehensive accountability regime for AI systems in general, and GPAI in particular. However, the act is a framework law that requires interpretation with respect to each AI model or system on a case-by-case basis. The AIA also relies on the development of common standards and technical specifications to establish best practices for compliance with the regulation. One standard proposed and examined in this paper for its feasibility is uncertainty estimation. Providers and deployers of GPAI should know whether the output they obtain from the system is correct and whether they should trust the prediction, but current GPAI systems do not give confidence estimates.
This paper provided arguments that GPAI models should be trained with proper uncertainty estimation methods, and should provide confidence estimates to the end user. We demonstrated that uncertainty estimation is a practical measure for compliance with the AIA requirements for transparency, technical documentation, robustness, and human oversight, as it allows providers and developers to disregard erroneous output and to further examine and curate the model's data to mitigate hallucination problems. A summary of our proposed use cases is presented in Table <ref>. Some controversies emerge in the field of GPAI, since the integration of legal compliance measures like uncertainty estimation also increases the FLOPs for model training. The legislator's approach of deciding whether GPAI poses systemic risk based on the amount of compute is a good starting point, but it is a simplistic view, as computation can be used for different purposes that might not imply emerging or unexpected properties of a model. This presents a dilemma under the AIA. It seems that the legislator considers more computation for GPAI training as an indicator of increased risk of the system; to the contrary, we demonstrated that measures which ensure model evaluation and legal compliance can increase the computation in order to reduce the risks. It is questionable whether methods that improve legal compliance but also increase the computational and energy consumption of the AI system should be encouraged and, if so, whether those computations should be excluded from the FLOPs count in order to avoid the classification of the system as posing systemic risk. This presents a legal dilemma that might discourage developers from implementing advanced model performance methods like uncertainty estimation.
http://arxiv.org/abs/2408.12457v1
20240822145506
Data-driven MPC with terminal conditions in the Koopman framework
[ "Karl Worthmann", "Robin Strässer", "Manuel Schaller", "Julian Berberich", "Frank Allgöwer" ]
eess.SY
[ "eess.SY", "cs.SY", "math.OC" ]
§ ABSTRACT We investigate nonlinear model predictive control (MPC) with terminal conditions in the Koopman framework using extended dynamic mode decomposition (EDMD) to generate a data-based surrogate model for prediction and optimization. We rigorously show recursive feasibility and prove practical asymptotic stability w.r.t. the approximation accuracy. To this end, finite-data error bounds are employed. The construction of the terminal conditions is based on recently derived proportional error bounds to ensure the required Lyapunov decrease. Finally, we illustrate the effectiveness of the proposed data-driven predictive controller including the design procedure to construct the terminal region and controller. Data-driven control, error bounds, nonlinear model predictive control, Koopman operator, SafEDMD § INTRODUCTION Model predictive control (MPC) is a well-established advanced control methodology. The underlying idea is to solve, at each time instant, a finite-horizon optimal control problem based on the most-recent state measurement to evaluate the controller, see, e.g., <cit.>. While MPC is attractive due to the simplicity of the underlying idea and its ability to handle constrained nonlinear multi-input, multi-output systems, some care is in order to ensure proper functioning, see, e.g., <cit.>. To this end, often terminal conditions <cit.> are used to ensure recursive feasibility and asymptotic stability. A key requirement to apply MPC is a reliable model to accurately predict the system behavior in dependence of the control. In this regard, data-driven methods have recently gained popularity, see, e.g., <cit.> and the references therein. In this paper, we focus on methods based on extended dynamic mode decomposition (EDMD; <cit.>), whose theoretical underpinning is the Koopman framework <cit.>, see also the recent review article <cit.>. The Koopman operator replaces the original highly-nonlinear dynamics by linear dynamics in the infinite-dimensional space of observable functions. Then, a finite-dimensional data-driven surrogate model is generated by EDMD using linear regression. This approach was extended to systems with inputs (EDMDc: EDMD with control; <cit.>). An alternative approach exploits control affinity to construct a bilinear surrogate model <cit.>, which exhibits a superior performance if direct state-control couplings are present, see, e.g., <cit.> and <cit.> for an application and a discussion within the scope of MPC. Whereas convergence of EDMD in the infinite-data limit was shown in <cit.>, an essential tool for a thorough controller design with guarantees is finite-data error bounds. Here, to the best of our knowledge, Igor Mezić was the first to rigorously establish bounds on the estimation error <cit.> for deterministic systems and ergodic sampling. Then, the authors in <cit.> provided error bounds based on i.i.d. sampling before the first finite-data bounds on the approximation error for control systems were derived in <cit.>. For deterministic and stochastic continuous- and discrete-time systems in Polish spaces with i.i.d. and ergodic sampling, error estimates under non-restrictive assumptions were derived in <cit.>.
EDMD has been successfully applied in MPC <cit.>, see also <cit.> for a tube-based approach. These approaches have been shown to perform well in applications, e.g., autonomous driving <cit.>. However, the first rigorous closed-loop analysis of MPC using EDMD in the prediction step was only recently provided in <cit.>. The key step in controller design with closed-loop guarantees was to adapt the regression problem in EDMD and, then, deduce error bounds, which are proportional to the control and the (lifted) state <cit.>. In this article, we present an EDMD-based MPC scheme with terminal conditions with closed-loop stability guarantees and verified domain of attraction. In particular, we rigorously show recursive feasibility and practical asymptotic stability. Here, the term practical results from an accumulation of the error along the predicted trajectories within the optimization step of the MPC algorithm. The design of the terminal conditions relies on our recently proposed controller design framework SafEDMD <cit.>. Contrary to established approaches, e.g., based on the linear-quadratic regulator and EDMDc <cit.>, we provide rigorous closed-loop guarantees based on tailored finite-data error bounds and numerically demonstrate a significantly improved closed-loop performance. The outline is as follows: In Section <ref>, we briefly recap SafEDMD in the Koopman framework. In Section <ref>, we present our EDMD-based MPC scheme. Then, the respective analysis (recursive feasibility, practical asymptotic stability) and the constructive design of the terminal conditions is presented in Section <ref>. Finally, the results are numerically validated before conclusions are drawn in Section <ref>. Notation: For non-negative integers a,b, we use the notation [a:b] := { i ∈ℕ_0 | a ≤ i ≤ b}. For two sets A, B ⊂ℝ^n, A ⊕ B = {𝐳∈ℝ^n |∃ 𝐱∈ A, 𝐲∈ B: 𝐳 = 𝐱 + 𝐲} is the Pontryagin sum. A continuous function α: ℝ_≥ 0→ℝ_≥ 0 is called of class 𝒦 if it is strictly increasing and zero at zero. If α∈𝒦 is, in addition, unbounded, α is of class 𝒦_∞. A continuous function β: ℝ_≥ 0×ℕ_0 →ℝ_≥ 0 is said to be of class 𝒦ℒ if β(·,k) ∈𝒦_∞ holds and β(r,·) is strictly monotonically decreasing with lim_t →∞β(r,t) = 0. The closed ε-ball around 𝐱∈ℝ^n is denoted by ℬ_ε(𝐱). § THE KOOPMAN OPERATOR AND SAFEDMD We consider continuous-time dynamical control systems governed by the control-affine ordinary differential equation ẋ(t) = g_0(x(t)) + ∑_i=1^m g_i(x(t)) _i(t) with maps g_i ∈𝒞^1(ℝ^n,ℝ^n), i ∈ [0:m]. For the locally integrable control function ∈ L^∞_loc(ℝ_≥ 0,ℝ^m), we have (local) existence and uniqueness of the respective (Carathéodory) solution x(·;𝐱̂,) emanating from 𝐱̂∈ℝ^n. Typically, control functions  are implemented in a sampled-data fashion with zero-order hold, i.e., (t) ≡𝐮_k ∈ℝ^m on [k Δ t, (k+1) Δ t), k ∈ℕ_0, for sampling period Δ t > 0. Invoking autonomy of the maps g_i, i ∈ [0:m], we define the discrete-time system dynamics 𝐱^+ = f(𝐱,𝐮) := 𝐱 + ∫_0^Δ t g_0(x(t;𝐱,u)) + G(x(t;𝐱,u)) 𝐮 dt with G(x(t;𝐱,u)) 𝐮 = ∑_i=1^m g_i(x(t;𝐱,u)) u_i for the constant control function u(t) ≡𝐮. Hence, x_𝔲(k+1;x̂) = f(x_𝔲(k;x̂),𝐮_k) = x((k+1) Δ t; 𝐱̂,u) holds with the control function u defined by (<ref>) and the state x_𝔲(k+1;x̂) generated by the discrete-time system (<ref>) using the control-input sequence 𝔲 = (u_i)_i=0^k. In particular, the discrete time k ∈ℕ_0 corresponds to the continuous time k Δ t. Since we consider the stabilization task, we assume that g_0 vanishes at the origin, i.e., g_0(0) = 0. 
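As an illustration of the sampled-data map f(x, u) defined above, the following Python sketch (assuming SciPy is available; the pendulum-type vector field and its parameters are placeholders anticipating the simulation example later in the paper) integrates the control-affine dynamics over one sampling period with the input held constant:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Control-affine vector field xdot = g0(x) + g1(x) * u; here an inverted
# pendulum as a stand-in example (the concrete model is immaterial).
def g0(x, g=9.81, l=1.0, b=0.5, m=1.0):
    return np.array([x[1], g / l * np.sin(x[0]) - b / (m * l**2) * x[1]])

def g1(x, l=1.0, m=1.0):
    return np.array([0.0, 1.0 / (m * l**2)])

def f(x, u, dt=0.01):
    """Discrete-time map x^+ = f(x, u): integrate over one sampling period
    with the control held constant (zero-order hold)."""
    rhs = lambda t, xt: g0(xt) + g1(xt) * u
    sol = solve_ivp(rhs, (0.0, dt), x, rtol=1e-8, atol=1e-10)
    return sol.y[:, -1]

x_plus = f(np.array([0.1, 0.0]), u=0.0)
```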
Then, the origin is an equilibrium for u = 0 in the dynamics (<ref>). For given control function u(t) ≡𝐮, the Koopman identity (𝒦^t_𝐮ψ)(x̂) = ψ(x(t;x̂,u)) holds for all observables ψ∈ L^2(ℝ^n,ℝ), t ≥ 0, 𝐱̂∈ℝ^n. Here, 𝒦^t_𝐮 represents the Koopman operator of the respective semigroup (𝒦^t_𝐮)_t ≥ 0 of bounded linear operators. It is noteworthy that the Koopman operator 𝒦^t_𝐮 maps an observable (function) ψ to another observable 𝒦^t_𝐮ψ. To derive a data-driven surrogate model of the Koopman operator, we collect finitely many, linearly independent observables in the dictionary 𝒟 := {ψ_k, k ∈ [0:M] }, whose span, 𝕍 := span( 𝒟 ), forms an (M+1)-dimensional subspace. On the convex and compact set 𝕏⊂ℝ^n containing the origin in its interior, the compression P_𝕍𝒦^t_𝐮|_𝕍 is approximated by linear regression using d ∈ℕ samples (ψ_k(𝐱̂_j),ψ_k(x(t;𝐱̂_j,u))), j ∈ [1:d], see, e.g., <cit.>. Further, we set ψ_0 ≡ 1, ψ_k(𝐱) = x_k for k ∈ [1:n], and ψ_k(0) = 0 with ψ_k ∈𝒞^2(ℝ^n,ℝ) for k ∈ [n+1:M], resulting in Φ(𝐱) = [ 1 x_1 ⋯ x_n ψ_n+1(𝐱) ⋯ ψ_M(𝐱) ]^⊤, 𝐱 = [ 0_n I_n 0_n × M-n ]Φ(𝐱), and some constant L_Φ > 0 such that 𝐱≤Φ(𝐱) - Φ(0) ≤ L_Φ𝐱 holds on 𝕏. We require the following assumption originally proposed in <cit.>, which states that the compression of the Koopman operator coincides with the restriction 𝒦^t_u|_𝕍, see also <cit.> for sufficient conditions and <cit.> for a detailed discussion. [Invariance of 𝕍] For any ψ∈𝕍 and u(t) ≡𝐮∈ℝ^m, let ψ(x(t;·,u)) ∈𝕍 hold for all t ≥ 0. As motivated in <cit.> and rigorously shown in <cit.>, the Koopman operator approximately inherits control affinity: 𝒦^t_u≈𝒦^t_0 + ∑_i=1^m u_i (𝒦^t_e_i - 𝒦^t_0), where e_i stands for the ith unit vector, i∈[1:m]. Here, we apply SafEDMD as proposed in <cit.>, i.e., we learn a data-driven bilinear surrogate model of the form K_u^Δ t = K_0^Δ t + ∑_i=1^m u_i (K_e_i^Δ t - K_0^Δ t) using d i.i.d. data samples for u = 0 and u = e_i, i ∈ [1:m]. Due to the constant observable ψ_0(x) ≡ 1, ψ_k(0) = 0 for k ∈ [1:M], and f(0,0) = 0, we impose the following structure on the surrogate of the Koopman operator K_0^Δ t = [ 1 0^⊤; 0 A ], K_e_i^Δ t = [ 1 0^⊤; b_i B_i ], i∈[1:m]. The unknown matrices A, B_i and the vector b_i result from solving the linear regression problems A = argmin_A∈ℝ^M× MY_0 - A X_0_F, [ b_i B_i ] = argmin_b_i∈ℝ^M, B_i∈ℝ^M× MY_e_i - [ b_i B_i ] X_e_i_F for i ∈ [1:m], where ·_F is the Frobenius norm. The data matrices are X_0 = [ 0_M I_M ][ Φ(x_0,1) ⋯ Φ(x_0,d) ], X_e_i = [ Φ(x_e_i,1) ⋯ Φ(x_e_i,d) ], i ∈ [1:m], Y_u = [ 0_M I_M ][ Φ(x(Δ t;x_u,1,u)) ⋯ Φ(x(Δ t;x_u,d,u)) ]. As shown in <cit.>, we have the following proportional bound: for any probabilistic tolerance δ∈ (0,1), amount of data d_0 ∈ℕ, and sampling rate Δ t, there are constants c_x,c_u ∈𝒪(1/√(δ d_0)+Δ t^2) such that (𝒦_u^Δ tΦ)(x) - K_u^Δ tΦ(x)≤ c_x Φ(x) - Φ(0) + c_u u holds for all x∈𝕏 and u∈𝕌 with probability 1-δ provided d ≥ d_0, where 𝕌⊂ℝ^m is compact. In particular, the bound on the estimation error (<ref>) can be guaranteed for arbitrarily small constants c_x,c_u>0 for sufficiently many data points d_0 and a small enough sampling rate Δ t. For the construction of the terminal ingredients in the proposed MPC approach, it is crucial that (<ref>) formulates a proportional error bound, i.e., the right-hand side vanishes for (x,u)=(0,0). 
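The two regression problems can be solved by ordinary least squares. The following Python sketch is a minimal illustration under our own assumptions: the dictionary Phi and the way the sample pairs are generated are merely exemplary, and the helper name safedmd_fit is ours. It assembles the data matrices and recovers A as well as (b_i, B_i) for each input direction:

```python
import numpy as np

# Dictionary Phi(x) = [1, x_1, ..., x_n, psi_{n+1}(x), ..., psi_M(x)]^T with
# psi_k(0) = 0; here a small hypothetical choice for a 2-d state.
def Phi(x):
    return np.array([1.0, x[0], x[1], np.sin(x[0])])

def safedmd_fit(X0_pairs, Xe_pairs_per_input):
    """Fit the SafEDMD surrogate from i.i.d. sample pairs.

    X0_pairs: list of (x, x_plus) generated with u = 0.
    Xe_pairs_per_input: one list of (x, x_plus) per input direction e_i.
    Returns A (MxM) and, per input, the pair (b_i, B_i)."""
    # Drift part: A = argmin ||Y_0 - A X_0||_F, constant observable removed.
    X0 = np.column_stack([Phi(x)[1:] for x, _ in X0_pairs])       # M x d
    Y0 = np.column_stack([Phi(xp)[1:] for _, xp in X0_pairs])     # M x d
    A = np.linalg.lstsq(X0.T, Y0.T, rcond=None)[0].T              # M x M

    bBs = []
    for pairs in Xe_pairs_per_input:
        Xe = np.column_stack([Phi(x) for x, _ in pairs])          # (M+1) x d, incl. constant
        Ye = np.column_stack([Phi(xp)[1:] for _, xp in pairs])    # M x d
        bB = np.linalg.lstsq(Xe.T, Ye.T, rcond=None)[0].T         # M x (M+1)
        bBs.append((bB[:, :1], bB[:, 1:]))                        # split into b_i and B_i
    return A, bBs
```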
By compactness of 𝕏 and 𝕌, the bound (<ref>) on the estimation error implies that, for any ε>0 and probabilistic tolerance δ∈ (0,1), there is an amount of data d_0∈ℕ and a maximal time step Δ t_0> 0 such that, for all Δ t ≤Δ t_0 and d≥ d_0, the SafEDMD surrogate model (<ref>) satisfies the following bound with probability least 1-δ: (𝒦_u^Δ tΦ)(x) - K_u^Δ tΦ(x) ≤ε ∀ x∈𝕏, u∈𝕌. For a sequence of control values 𝔲 = (u_κ)_κ=0^N-1⊆𝕌, the κ-step prediction, κ∈[1:N], resulting from the surrogate model (<ref>), denoted by x_𝔲(κ;x̂), reads [ 0_n I_n 0_n × M-n ] K^Δ t_u_κ-1⋯ K^Δ t_u_0Φ(x̂). Then, using Inequality (<ref>) and the triangle inequality yields (𝒦_u_κ-1^Δ t⋯𝒦_u_0^Δ tΦ)(x) - K^Δ t_u_κ-1⋯ K^Δ t_u_0Φ(x) ≤ε∑_i=0^κ-1 L_𝒦^i for all x∈𝕏 and u∈𝕌 with L_𝒦 := max_u∈𝕌𝒦_u^Δ t. In conclusion, SafEDMD allows us to derive a data-driven surrogate model capable of making multi-step predictions with arbitrary accuracy supposing that sufficient data is available and the sampling period Δ t is small enough. The latter can be mitigated by collecting data for smaller Δ t and, then, construct the predictors by applying the derived models multiple times. This explains why the sampling period Δ t does not explicitly occur for generator-based surrogate models, see <cit.>. However, from a practical viewpoint, operator-based models are desirable as they do not rely on derivative data, see <cit.> for a detailed discussion. § EDMD-BASED MPC WITH TERMINAL CONDITIONS We propose a SafEDMD-based MPC controller with terminal conditions and present a notion of practical asymptotic stability, which will be verified in the subsequent section. A key feature is that the formulation of the terminal conditions is carried out in the lifted space, where we leverage the bilinear structure of the surrogate model for an explicit construction using SafEDMD in Subsection <ref>. In view of the constant observable function contained in Φ, cf. (<ref>), we set Φ(x) := [ 0_M I_M ]Φ(x). Then, Φ(0) = 0 holds. Let the compact sets 𝕏⊆ℝ^n and 𝕌⊆ℝ^m represent the state and control constraints, respectively. We consider a terminal region 𝕏_f⊆𝕏 with 𝕏_f := {x∈ℝ^n|Φ(x)^⊤ P^-1Φ(x) ≤ c } parametrized by an (M × M)-matrix P = P^⊤≻ 0 and c>0. Further, we define the terminal cost V_f:𝕏_f →ℝ by V_f(x):=Φ(x)^⊤ P^-1Φ(x). We define admissibility of a control-input sequence. 𝔲 = (u_κ)_κ=0^N-1⊂𝕌 is said to be an admissible control sequence for x̂∈𝕏⊆ℝ^n and horizon N ∈ℕ, denoted by 𝔲∈𝒰_N(x̂), if x_𝔲(κ;x̂) ∈𝕏, κ∈ [1:N], and x_𝔲(N;x̂) ∈𝕏_f hold. Next, we suitably adapt the notion of admissibility to deal with approximation errors in the optimization step of the MPC algorithm, where predictions are conducted using the SafEDMD-based surrogate model. To this end, we use the following definition in dependence of the approximation accuracy ε, ε∈ (0,ε_0], based on the κ-step error bound (<ref>). For ε∈ (0,ε_0], 𝔲 = (u_κ)_κ=0^N-1⊂𝕌 is said to be an EDMD-admissible control sequence for x̂∈𝕏 and horizon N ∈ℕ for the SafEDMD-based surrogate, denoted by 𝔲∈𝒰_N(x̂), if the set inclusion [0_n I_n 0_n× M-n] K^Δ t_u_κ-1⋯ K^Δ t_u_0Φ(x̂) ⊕ℬ_c̅(κ) ε(0) ⊆𝕏 holds for all κ∈ [1:N] with c̅(κ) := ∑_i=0^κ-1 L_𝒦^i, where 𝕏 is replaced by 𝕏_f for κ = N. Based on this definition, we propose our EDMD-based MPC scheme with terminal conditions for the stage costs ℓ(x,u) := x_Q^2 + u_R^2 := x^⊤ Q x + u^⊤ R u with Q = Q^⊤≻ 0 and R = R^⊤≻ 0 resembling ideas from robust MPC <cit.>. 
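For completeness, the κ-step predictor used in the EDMD-admissibility definition can be written down in a few lines. The sketch below uses hypothetical helper names; the surrogate operators K_0^Δt and K_{e_i}^Δt (including the constant observable) are assumed to be already identified, e.g., via SafEDMD:

```python
import numpy as np

def K_of_u(K0, Ke_list, u):
    """Bilinear surrogate K_u = K_0 + sum_i u_i (K_{e_i} - K_0)."""
    K = K0.copy()
    for ui, Kei in zip(u, Ke_list):
        K += ui * (Kei - K0)
    return K

def predict(K0, Ke_list, Phi, x_hat, u_seq, n):
    """kappa-step prediction [0_n I_n 0] K_{u_{kappa-1}} ... K_{u_0} Phi(x_hat).

    n is the state dimension; the state coordinates are the first n
    non-constant observables of the lifted vector."""
    z = Phi(x_hat)                      # lifted state of length M+1 (incl. constant)
    traj = []
    for u in u_seq:
        z = K_of_u(K0, Ke_list, u) @ z  # propagate in the lifted space
        traj.append(z[1:1 + n])         # project back: skip the constant observable
    return np.array(traj)
```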
Our goal is to show recursive feasibility and practical asymptotic stability of EDMD-based MPC as formulated in Algorithm <ref>. To this end, we recall <cit.> – practical asymptotic stability w.r.t. the approximation error ε, i.e., that the behavior of the closed-loop (dynamical) system resembles an asymptotically-stable one until an arbitrarily small neighborhood of the origin is reached, where the size of the neighborhood depends on the approximation accuracy ε. The origin is said to be practically asymptotically stable (PAS) w.r.t. the approximation error on the set A ⊆𝕏 containing the origin in its interior if there exists β∈𝒦ℒ such that: for each r>0, there is ε_0 > 0 such that, for all ε∈ (0,ε_0] satisfying condition (<ref>), the solution x_μ_N^ε(·,x̂) of x_μ_N^ε (k+1) = f(x_μ_N^ε(k),μ_N^ε(x_μ_N^ε(k))), x_μ_N^ε(0) = x̂∈ A, with f from (<ref>) fulfills x_μ_N^ε(k;x̂) ∈ A and x_μ_N^ε(k;x̂) ≤max{β(x̂,k),r} ∀ k∈ℕ_0 for the feedback law μ_N^ε defined in Step 3) of Algorithm <ref>. § ANALYSIS OF EDMD-BASED MPC AND CONSTRUCTION OF THE TERMINAL CONDITIONS We first show recursive feasibility and practical asymptotic stability of the origin w.r.t. the MPC closed loop resulting from Algorithm <ref>. The proof is based on the following assumption, whose (constructive) verification is discussed in Subsection <ref>. To this end, we construct a suitable terminal region and terminal cost using SafEDMD <cit.> in combination with the proportional error bound (<ref>). [Terminal conditions] Let a continuous sampled-data controller μ: 𝕏_f→𝕌 with μ(0) = 0 be given such that 𝕏_f is rendered invariant w.r.t. the discrete-time dynamics (<ref>) and the Lyapunov decrease V_f(f(x,μ(x))) ≤ V_f(x) - ℓ(x,μ(x)) holds for all x∈𝕏_f with stage cost ℓ(x,u) = x_Q^2 + u_R^2. §.§ MPC closed-loop analysis In this part, we provide our main theoretical result. We show that, assuming initial feasibility, the MPC closed loop is well defined (recursively feasible) and exhibits practical asymptotic stability of the origin w.r.t. the MPC closed loop. Let Assumptions <ref> and <ref> hold. Then, the MPC closed loop is recursively feasible and the origin is practically asymptotically stable w.r.t. the approximation error on the set 𝕏 in the sense of Definition <ref>. Let r > 0 be given. W.l.o.g., we assume ℬ_r = ℬ_r(0) ⊆𝕏_f; otherwise, we reduce r until this inclusion holds while the norm bound in Definition <ref> follows also for the original r. In view of (<ref>), we choose a sufficiently small, but fixed estimation error ε>0 such that the following construction can be conducted. Let η > 0 be such that f(x,u) ∈ℬ_r holds for all x∈ S := V_f^-1[0,η] ⊕c̅(N) ε⊆ℬ_r and all control values u∈μ(S) := {v∈ℝ^m|∃ x∈ S: μ(x) = v}, where V_f^-1[0,η] is the sub-level set {x| V_f(x) ≤η}. Furthermore, for x̂∈ A, let the robust EDMD admissible control-input sequence 𝔲^⋆ = (u_κ^⋆)_κ=0^N-1∈𝒰_N(x̂) be optimal in Step 2) of Algorithm <ref> meaning, in particular, that x̂ is initially feasible. Then, we show that the (shifted and prolonged) control sequence 𝔲^+ := [ u_1^⋆ ⋯ u_N-1^⋆ μ(x_𝔲^⋆(N;x̂)) ] is feasible for the successor state x^+ = f(x̂,μ^ε_N(x̂)) of the discrete-time dynamics (<ref>) and yields a Lyapunov decrease w.r.t. the respective (optimal) value function V_N if x̂∉ S. Otherwise x^+ is contained in ℬ_r per construction. Feasibility of x^+ and x_𝔲^+(κ;x^+), κ∈ [1:N-1], directly follows from Assumption <ref> and the constraint tightening in Definition <ref>. 
Since, in the compact set 𝕏_f ∖ V_f^-1[0,η], min_x∈𝕏_f ∖ V_f^-1[0,η] V_f(x) - V_f(f(x,μ(x))) > 0 holds for the minimal decrease, continuity of μ implies ξ := min_x∈𝕏_f ∖ V_f^-1[0,η] max_y∈ Y V_f(x) - V_f(f(x,μ(y))) > 0, Y = (𝕏_f ∩ℬ_c̅(N) ε(x)) ∖ V_f^-1[0,η], for sufficiently small ε. In conclusion, we get a decrease by applying u^+_N-1 in the last prediction step, whenever the predicted penultimate state is not contained in the η-ball centered at the origin, which completes the proof of recursive feasibility in view of the prelude before defining 𝔲^⋆. In the following, we follow the line of reasoning in <cit.> and verify a suitable decay of the optimal value function V_N, which serves as a Lyapunov function. To this end, for the kth state x̂ = x^MPC_μ^ε_N(k) of the MPC closed-loop trajectory, let 𝔲^⋆ = 𝔲^⋆(x̂) = (u^⋆_κ)_κ=0^N-1 again denote the optimal control-input sequence. Then, for the successor state x^+ = f(x̂,u^⋆(0)), we define the candidate control-input sequence at time k+1 by (<ref>). This yields V_N(x^+) - V_N(x̂) + ℓ(x_𝔲^⋆(0),u^⋆_0) ≤∑_κ=0^N-2( ℓ(x_𝔲^+(κ),u^+_κ) - ℓ(x_𝔲^⋆(κ+1),u^⋆_κ+1) + ℓ(x_𝔲^+(N-1),u^+_N-1) + V_f(x_𝔲^+(N))_≤ V_f(x_𝔲^+(N-1)) + P^-1c̃ L_Φε - V_f(x_𝔲^⋆(N)) with x_𝔲^+(κ) = x_𝔲^+(κ;x^+) and x_𝔲^⋆(κ) = x_𝔲^⋆(κ;x̂) for κ∈ [0:N] in view of the assumed Lyapunov decrease (<ref>), where the term P^-1c̃ L_Φε results from the difference of V_f(x_𝔲^+(N)) - V_f(f(x_𝔲^+(N-1),u_N-1^+)) analogously to the following calculations: For vectors a,b∈ℝ^n and a matrix M ∈ℝ^n × n, the estimate a_M^2 -b_M^2 = (a+b)^⊤M (a-b) ≤Ma+ba-b holds. Hence, every difference in the stage cost in the sum (κ∈ [0:N-2]) can be estimated by Qx_𝔲^+(κ) + x_𝔲^⋆(κ+1) x_𝔲^+(κ) - x_𝔲^⋆(κ+1) , where the second factor can be uniformly estimated on the compact set 𝕏 by c̃, while the third summand is uniformly bounded by c̅(κ+1) ε. Next, we estimate the term V_f(x_𝔲^+(N-1)) - V_f(x_𝔲^⋆(N)) by using the definition V_f(x)=Φ(x)^⊤ P^-1Φ(x) and the Lipschitz continuity of Φ by c̃P^-1 L_Φx_𝔲^+(N-1) - x_𝔲^⋆(N) ≤c̃P^-1 L_Φc̅(N) ε. Combining both estimates and using ℓ(x_𝔲^⋆(0),u^⋆_0) ≥x̂_Q^2, we get V_N(x^+) - V_N(x̂) ≤ c̃ε[ P^-1 L_Φ(1+c̅(N)) + Q ∑_κ=0^N-2c̅(κ+1) ] - x̂_Q^2, i.e., the desired Lyapunov decrease outside the sub-level set V_f^-1[0,η] contained in ℬ_r. Using standard arguments <cit.>, this completes the proof of practical asymptotic stability. We emphasize that the used constraint tightening ensuring recursive feasibility relies on a simple propagation of a one-step error bound to bound the deviation between nominal and true state along the prediction horizon <cit.>. Whereas the present article serves as a starting point, more advanced robust MPC techniques may be applied to reduce conservatism, see, e.g., <cit.> or <cit.> and the references therein. Naturally, the question arises whether the proposed MPC controller may indeed render the closed loop asymptotically stable. Based on the above line of reasoning, this cannot be easily shown. To see this, one may trace back (estimate from above similarly to the derived Inequality (<ref>)) the term x_𝔲^+(κ) - x_𝔲^⋆(κ+1), κ∈ [1:N-1], to x_𝔲^+(0) - x_𝔲^⋆(1;x̂) and, then, invoke the proportional error bound (<ref>). However, this yields a linear term in norm, which cannot be bounded by the quadratic decrease established in our proof arbitrarily close to the origin. In conclusion, we cannot directly compensate the linear error bound locally around the origin, preventing to conclude asymptotic stability. 
§.§ Construction of the terminal conditions using SafEDMD Next, we show how to design the terminal conditions of the proposed MPC Algorithm <ref> such that Assumption <ref> holds. In particular, we exploit the proportional error bound (<ref>) of SafEDMD to design a locally stabilizing control law for all x∈𝕏_f. To this end, we leverage <cit.> to infer the sampled-data controller μ(x) = (I-L_w(Λ^-1⊗Φ(x)))^-1 L P^-1_μΦ(x), where ⊗ stands for the Kronecker product and the matrices L∈ℝ^m× M, L_w∈ℝ^m× Mm, 0≺Λ=Λ^⊤∈ℝ^m× m, and 0≺ P_μ=P_μ^⊤∈ℝ^M× M are chosen such that two linear matrix inequalities are satisfied, see <cit.>. More precisely, μ renders the terminal region 𝕏_f invariant w.r.t. the discrete-time system (<ref>) and guarantees Φ(f(x,μ(x)))^2_P^-1_μ < Φ(x)^2_P^-1_μ for any x∈𝕏_f ∖{0} if (<ref>) holds. To satisfy (<ref>), we exploit the prior strict inequality, i.e., there exists ϵ_x,ϵ_u>0 such that Φ(f(x,μ(x)))^2_P^-1_μ ≤Φ(x)^2_P^-1_μ - ϵ_xx^2-ϵ_uμ(x)^2 ≤Φ(x)^2_P^-1_μ - ϵℓ(x,μ(x)) with ϵ=min{ϵ_x/Q,ϵ_u/R} for all x∈𝕏_f. Hence, defining P = ϵ P_μ, c=ϵ^-1 with the constructed terminal conditions using SafEDMD ensures the required Assumption <ref>. We conclude this section highlighting an advantage of dual-mode MPC combining Algorithm <ref> and SafEDMD. Besides providing a suitable data-driven way to construct the terminal ingredients, SafEDMD can be also used to obtain exponential stability <cit.>. Once the practically-stable region is reached (which is guaranteed by Theorem <ref>), one may switch to the stabilizing terminal controller. The underlying reason is that, in the region of practical stability, the uncertainty attached to the SafEDMD predictions outweighs the advantages of its prediction capability. § SIMULATION In the following, we evaluate our proposed MPC scheme. We consider an undamped inverted pendulum ẋ_1 = x_2, ẋ_2 = g/lsin(x_1) - b/ml^2x_2 + 1/ml^2u and apply SafEDMD to obtain a data-driven bilinear surrogate model. Algorithm <ref> is implemented in Matlab using MPCTools <cit.> with its interface to the nonlinear optimization software CasADi <cit.>. For the simulation, we choose b=0.5, l=1, m=1, and g=9.81, and collect d=6000 data samples for each constant control input u(t)≡u̅ with u̅∈{0,1}, where we sample uniformly from 𝕏:=[-15,15]^2 with the sampling rate Δ t = 0.01. We impose the control constraint 𝕌:=[-25,25]. Further, we choose Q=I and R=0.1I in the stage cost (<ref>). We set Φ(x) = [ 1 x_1 x_2 sin(x_1) ]^⊤ and design the terminal ingredients based on <cit.> with c_x=c_u=3e-4 for the learning error bound (<ref>) with ε in (<ref>), S_z=0, R_z=2.5, and Q_z according to  <cit.>. We deploy the MPC scheme with a horizon N=1.1/Δ t=110 to stabilize the unstable equilibrium at the origin. Fig. <ref> depicts the closed-loop behavior under the proposed controller. As expected due to Theorem <ref>, the state is practically stabilized, i.e., converges to a set-point close to the origin within the designed terminal region. Notably, by following Remark <ref>, i.e., switching to the stabilizing terminal control law after reaching the (invariant) terminal region at t≈ 10, we can remove the offset and obtain a dual-mode MPC which asymptotically stabilizes the origin. For comparison, we apply MPC based on a linear Koopman model (L-MPC) based on EDMDc <cit.>, which is a commonly used Koopman-based control technique <cit.>. Here, L-MPC stabilizes the nonlinear system with a remaining offset to the origin, but offers no guarantees w.r.t. closed-loop stability of the nonlinear system. 
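Returning to the terminal ingredients constructed above, a possible numerical realization of the terminal controller and of the terminal-set membership test is sketched below; the matrices L, L_w, Λ and P_μ are assumed to be obtained from the LMIs of the cited SafEDMD design, Phi_bar denotes the lifted state without the constant observable, and the helper names are ours:

```python
import numpy as np

def mu(x, L, Lw, Lam, P_mu, Phi_bar):
    """Terminal controller mu(x) = (I - L_w (Lam^{-1} kron Phi(x)))^{-1} L P_mu^{-1} Phi(x)."""
    z = Phi_bar(x)                                   # lifted state without constant, length M
    m = Lam.shape[0]
    kron = np.kron(np.linalg.inv(Lam), z[:, None])   # (m*M) x m
    lhs = np.eye(m) - Lw @ kron                      # m x m
    return np.linalg.solve(lhs, L @ np.linalg.solve(P_mu, z))

def in_terminal_set(x, P, c, Phi_bar):
    """Check Phi(x)^T P^{-1} Phi(x) <= c, i.e. whether x lies in X_f."""
    z = Phi_bar(x)
    return z @ np.linalg.solve(P, z) <= c
```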
Although we only have shown that the proposed MPC scheme yields asymptotic stability when operated in dual-mode with SafEDMD, we briefly illustrate that the MPC control law may also be asymptotically stabilizing without switching. Applying SafEDMD for the observables Φ_1(x)= [ 1 x_1 x_2 sin(x_1) x_2 cos(x_1) ]^⊤, the MPC scheme asymptotically stabilizes the origin (see Fig. <ref>). Here, L-MPC still results in an offset and oscillating closed-loop behavior when applied to the nonlinear system. § CONCLUSIONS AND OUTLOOK We proposed a data-driven MPC scheme with terminal conditions, where a variant of EDMD is used to generate a bilinear surrogate of the nonlinear system. The terminal region and costs are constructed using the recently proposed SafEDMD learning architecture. We rigorously showed practical asymptotic stability w.r.t. the MPC closed loop. Further, employing a dual-mode MPC approach based on SafEDMD yields exponential stability. The results are illustrated by a numerical example showing the efficacy in comparison with MPC based on linear models obtained from EDMDc. Future work will be devoted to the removal of Assumption <ref> by using the uniform bounds on the approximation error recently proposed for kernel EDMD in <cit.>.
http://arxiv.org/abs/2408.11743v1
20240821161041
MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models
[ "Elias Frantar", "Roberto L. Castro", "Jiale Chen", "Torsten Hoefler", "Dan Alistarh" ]
cs.LG
[ "cs.LG" ]
MARLIN: Mixed-Precision Auto-Regressive Parallel Inference on Large Language Models Elias Frantar (ISTA), Roberto L. Castro (CITIC, Universidade da Coruña), Jiale Chen (ISTA), Torsten Hoefler (D-INFK, ETH Zurich), Dan Alistarh (ISTA and Neural Magic, Inc.) Affiliations: Institute of Science and Technology Austria (ISTA), Klosterneuburg, Austria; CITIC, Universidade da Coruña, A Coruña, Spain; D-INFK, ETH Zurich, Zurich, Switzerland; Neural Magic, Inc., Somerville, United States. Correspondence: Jiale Chen <jiale.chen@ist.ac.at>. Keywords: Large language model (LLM) inference, GPU programming, batch parallelism. § ABSTRACT As inference on Large Language Models (LLMs) emerges as an important workload in machine learning applications, weight quantization has become a standard technique for efficient GPU deployment. Quantization not only reduces model size, but has also been shown to yield substantial speedups for single-user inference, due to reduced memory movement, with low accuracy impact. Yet, it remains open whether speedups are achievable also in batched settings with multiple parallel clients, which are highly relevant for practical serving. It is unclear whether GPU kernels can be designed to remain practically memory-bound, while supporting the substantially increased compute requirements of batched workloads. This paper resolves this question positively by describing the design of Mixed-precision Auto-Regressive LINear kernels, called MARLIN. Concretely, given a model whose weights are compressed via quantization to, e.g., 4 bits per element, MARLIN shows that batchsizes up to 16-32 can be supported with close to maximum (4×) quantization speedup, and larger batchsizes up to 64-128 with gradually decreasing, but still significant, acceleration. MARLIN accomplishes this via a combination of techniques, such as asynchronous memory access, complex task scheduling and pipelining, and bespoke quantization support. Our experiments show that MARLIN's near-optimal performance on individual LLM layers across different scenarios can also lead to end-to-end LLM inference speedups (of up to 2.8×) when integrated with the popular vLLM serving engine. Finally, MARLIN is extensible to further compression techniques, like NVIDIA 2:4 sparsity, leading to additional speedups. § INTRODUCTION The capabilities of large language models (LLMs) <cit.> have led to significant research and industrial interest. Consequently, a lot of effort has been dedicated to reducing their computational costs, and notably their inference costs <cit.>. A large fraction of this work starts from the observation that generative workloads—in which a model produces a next token (often a word) based on a cached context—can be heavily memory-bound when executed on GPUs or CPUs, as the cost of reading the LLM weights dwarfs that of the arithmetic operations, and their footprint greatly exceeds the cache size. Reducing memory movement leads to substantial practical speedups by compressing the network weights, as shown by various recent works <cit.>, in particular in the context of quantization. Specifically, during inference, weights can often be loaded from GPU memory in compressed form—reducing movement costs—and then dynamically decompressed in registers before multiplication. A key limitation of existing such mixed-precision inference implementations is that they cease to provide significant speedups in the batched inference case, that is, when multiple tokens must be generated in parallel.
Intuitively, this is because this case has significantly higher arithmetic intensity, making it much harder to fully hide all required computations behind the reduced memory movement. Yet, the batched scenario is key in large-scale LLM applications: for instance, OpenAI is claimed to produce 100 billion words a day <cit.>–that is, more than 1 million words a second–providing ample opportunities for parallelism, and in fact the necessity for grouping these requests to achieve highest GPU utilization. Contribution. In this work, we investigate software support for LLM inference acceleration via mixed-precision in the general batched case. We observe that, across GPU types, quantized LLM generative inference remains memory-bound even for fairly large input sizes: in practice, one could still obtain close to the full speedup from reduced memory movement even when 16-32 tokens are generated in parallel. Concretely, this is because modern GPUs, such as the ones from NVIDIA's Ampere family, typically have a FLOP-to-byte ratio in the range of 100 to 200 for FP16 operations <cit.>. Thus, if one would be able to reduce weight precision to 4 bits while maintaining a proportional number of multiply-accumulate operations per quantized weight (in this case, in the range of 25-50), one could theoretically still obtain close to the optimal 4× speedup. Yet, realizing this in practice is complex. In this paper, we present the design and implementation of a family of mixed-precision inference kernels called MARLIN, which achieve near-optimal batched inference speedups due to reduced memory movement on modern, widely available, NVIDIA Ampere GPUs. MARLIN kernels combine various techniques, ranging from advanced task scheduling, partitioning, and pipeplining techniques to quantization-specific layout and compute optimizations. We validate our design both via individual per-layer benchmarks, and end-to-end through an integration with vLLM <cit.>, a popular open-source LLM serving engine. Specifically, for 4bit-weight inference, MARLIN obtains speedups of approximately 3.9× relative to FP16 on an inference-optimized NVIDIA A10 GPU and large matrices, for batch sizes of up to 16-32. (See Figure <ref>). Speedups gradually reduce, towards 1.5× at batch size 128, as the problem becomes compute-bound. Our analysis shows that this is close to optimal. In addition to the base design, we present Sparse-MARLIN, an extension of MARLIN to the 2:4-sparse Tensor Core format, which provides additional speedups of up to 65% relative to the original (dense) variant. We also extend our benchmarks to end-to-end (full model) results in an industrial inference setting, via a vLLM <cit.> integration, on top of leading open LLMs such as Llama <cit.> and Falcon <cit.>, for which accurate 4bit quantization and 2:4 sparsification is possible. According to our end-to-end measurements, the MARLIN kernel dramatically increases the speed of multi-user token generation, achieving up to a 2.8× speedup compared to vLLM’s standard precision kernel, at batchsize 16. Sparse-MARLIN further improves performance, providing speedups of up to 3.2×. Overall, we show that near optimal weight-quantized LLM inference speedups can be achieved also at batchsizes significantly larger than 1. This is done via a new kind of GPU kernel design, which takes full advantage of hardware capabilities specifically mixed-precision, and should be extensible to other compression formats. 
The code for MARLIN[https://github.com/IST-DASLab/marlin] and its Sparse-MARLIN variant [https://github.com/IST-DASLab/Sparse-Marlin] are available openly, as well as the vLLM integration [https://github.com/vllm-project/vllm]. § BACKGROUND We continue with an overview of GPU architecture, and the CUDA programming and execution model. We focus on the Tensor Core improvements introduced by NVIDIA Ampere, which we utilize extensively. Finally, we provide some background on mixed-precision inference in LLMs. §.§ Graphics Processing Units NVIDIA GPUs comprise an array of Streaming Multiprocessor (SM) elements that share a DRAM memory, known as Global MEMory (GMEM) and an L2 cache. Each SM is divided into partitions, which contain various processing blocks. Each processing block includes a warp scheduler, a Register File (RF), and an L0 instruction cache. The processing blocks within an SM share an L1 cache, which can be partially reconfigured as a fast scratch pad memory referred to as Shared Memory (SMEM). Within each processing block, there are four types of units: Integer Units, Special Function Units, Floating-Point Units (FPU) / CUDA Cores, and Tensor-Core Units (TCU). TCUs, first introduced in the Volta architecture, primarily target ML workloads by enabling one matrix multiply-and-accumulate (MMA) operation per cycle. This reduces the cost of fetching and decoding multiple instructions needed for such computations. In the Ampere architecture, TCUs can deliver up to 16× more performance on FP16 than fused multiply-add (FMA) operations running on FPUs. The CUDA programming and execution model is closely related to the architecture specifics described. It defines three granularity levels, encompassing thread blocks, warps, and threads. The warp is the basic scheduling unit in CUDA, consisting of 32 threads that are executed concurrently. Thread blocks are a collection of warps, scheduled for execution on the same SM. The number of warps, and the number of thread-blocks running simultaneously on each SM is contingent upon hardware limitations, such as the number of warp schedulers, registers per thread, or the SMEM available. §.§.§ Modern Tensor Core Units Ampere GPUs extended their TCUs with respect to previous generations to handle both 1) fine-grained structured sparsity, resulting in Sparse Tensor-Core Units (SPTCs), and 2) asynchronous copy operations. First, structured sparsity is supported through a new 2:4 format, promising a 2× speedup over the original TCUs, and up to 32× over FPUs. The 2:4 format divides the LHS matrix into vectors of length four, and for each vector it zeros-out two elements, resulting in a 50% sparse but structured matrix. Figure <ref> shows a simplified representation of an SPTC. Two data structures will represent the sparsified matrix: (1) a values structure, depicted in blue, containing the non-zero values, (2) a metadata structure, depicted in purple, containing the position of each non-zero value within each group of 4 elements. The metadata structure will be used by the new hardware components on SPTCs to select just the elements of the RHS matrix that are needed in the computation, skipping the zeroed-out values. NVIDIA's Ampere microarchitecture introduces data fetching improvements for enhanced Tensor Core performance. This involves a new asynchronous copy instruction that allows loading data directly from GMEM into SMEM. 
As shown in Figure <ref> 1, in previous generations it was necessary to first load through L1 cache into RF with global load instructions. Then, the data was transferred to SMEM with shared store instructions, and finally moved into RF with shared load instructions. Ampere's new asynchronous copy saves SM internal bandwidth by avoiding intermediate RF access. There are two variants of this instruction, “access” that saves data into L1 for subsequent accesses and reuse (Figure <ref> 2), and “bypass”, which also skips L1 cache (Figure <ref> 3). §.§ Mixed-Precision Inference on LLMs Mixed-precision LLM inference offers the potential to reduce a model's large memory footprint, and correspondingly accelerate memory-bound workloads by statically compressing pretrained model weights while decompressing them on-the-fly during inference as needed. Weight Quantization. A standard LLM compression approach is weight-only quantization, which reduces the precision in which the weights W are stored, while leaving the layer inputs X untouched. This is extremely popular, e.g., <cit.>, as it has shown remarkable accuracy robustness even at relatively high compression rates. Broadly, weight quantization lossily compresses floating-point weights by mapping them to a limited set of integer levels. We focus on uniform quantization, meaning that given a vector v ∈ ℝ^n and a bit-width b, we define Q(v, b) = ⌊ (v - min(v))/(max(v) - min(v)) · (2^b - 1) ⌉ = ⌊ (v - z)/s ⌉, where ⌊·⌉ rounds to nearest, z = z(v) = min(v) is the zero-point that maps to zero and s = s(v) = (max(v) - min(v))/(2^b - 1) is the scale. The reconstruction error can be computed as ε_r = ‖v - Q(v, b)‖_2. We can improve the error by partitioning v into groups and quantizing each group separately, thus storing s and z values for each group, e.g., of 128 contiguous values. In this paper, we perform the actual weight quantization via a variant of GPTQ <cit.>, which takes advantage of second-order information to compensate for quantization errors, allowing for only minor accuracy degradation. However, we emphasize that our kernel techniques are independent of any particular quantization algorithm. § THE MARLIN KERNEL §.§ Motivation LLM weight quantization is motivated by the fact that modern GPUs have large FLOPs/Bytes ratios, meaning that they can execute floating point operations much faster than they can read from memory. As an example, an A10 GPU has a FLOPs/Bytes ratio of ≈ 200 <cit.>. In the context of a single layer matrix multiplication, processing one input token takes 2 FLOPs per weight and the GPU can execute 100 FLOPs in the time it takes to load one 4-bit weight. Hence, memory loading will dominate runtime as long as the input batchsize is less than b_opt ≈ 50. In fact, b_opt is the batchsize where latency is neither bound by memory nor by compute, i.e., where we achieve the lowest latency at maximum throughput. In principle, this is precisely the batchsize that we would like to operate at in practice: any smaller does not yield further speedup and any larger does not improve throughput. (This analysis is further detailed in Section <ref>.) However, actually implementing such a mixed-precision (FP16-INT4) matrix multiplication (matmul) kernel which fully maximizes essentially all GPU resources, i.e., compute and memory, simultaneously, is a major challenge. In the following, we will try to come as close as possible to this goal by designing MARLIN, an extremely optimized Mixed-precision Auto-Regressive LINear kernel.
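The back-of-the-envelope computation behind b_opt can be reproduced in a few lines; in the sketch below the peak throughput and bandwidth figures are rough, assumed values in the ballpark of an A10-class GPU, not official specifications:

```python
# Back-of-the-envelope version of the argument above (approximate A10-class figures).
peak_flops = 125e12          # FP16 tensor-core throughput, FLOP/s (assumed)
mem_bw     = 600e9           # GMEM bandwidth, byte/s (assumed)
bits_per_weight = 4

flop_per_byte = peak_flops / mem_bw                           # roughly 200 FLOPs per loaded byte
bytes_per_weight = bits_per_weight / 8                        # 0.5 byte per INT4 weight
flops_hidden_per_weight = flop_per_byte * bytes_per_weight    # roughly 100 FLOPs "for free"

# Each weight contributes 2 FLOPs (multiply + add) per token in the batch, so
# loading stays the bottleneck while 2 * batchsize < flops_hidden_per_weight.
b_opt = flops_hidden_per_weight / 2
print(round(b_opt))   # ~50: below this batchsize the kernel remains memory-bound
```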
§.§ Ampere Matrix Multiplication We begin by describing the general concepts used to implement peak performing (uniform precision) matrix multiplication kernels on GPUs, in particular on Ampere class devices. We closely follow the CUTLASS hierarchical parallelization model <cit.>. Concretely, we consider the problem of multiplying an M × K matrix 𝐀 with a K × N matrix 𝐁 to produce an M × N output matrix 𝐂. SM Level. As a first step, 𝐀 is partitioned into M_sm× K_sm blocks 𝐀_sm[i_sm, k_sm], 𝐁 into K_sm× N_sm blocks 𝐁_sm[k_sm, j_sm] and 𝐂 into M_sm× N_sm blocks 𝐂_sm[i_sm, j_sm]. Due to the nature of a matrix multiplication, all 𝐂_sm[i_sm, j_sm] can be computed independently by accumulating the results of 𝐀_sm[i_sm, k_sm] 𝐁_sm[k_sm, j_sm] over all k_sm. Consequently, computation can be easily parallelized by distributing those 𝐂_sm[i_sm, j_sm] sub-problems across the GPU's independent compute units, its SMs. At this stage, 𝐀_sm[i_sm, k_sm] and 𝐁_sm[k_sm, j_sm] blocks must be loaded from global GPU memory. Similarly, 𝐂_sm[i_sm, j_sm] must eventually be written back to global storage, but intermediate accumulation can happen directly in registers, as we will discuss next. Warp Level. Within the sub-problem considered by a single SM, another equivalent partitioning, this time with parameters M_wa, K_wa, and N_wa, is performed. This is in order to assign independent 𝐂_wa[i_sm, j_sm][i_wa, j_wa] output accumulation tasks to different warps. Crucially, the SM blocks 𝐀_sm[i_sm, k_sm] and 𝐁_sm[k_sm, j_sm] can be temporarily stored in shared memory, so that the repeated loading of 𝐀_wa[i_sm, k_sm][i_wa, k_wa] and 𝐁_wa[k_sm, j_sm][k_wa, j_wa] by different warps is much faster. Meanwhile, outputs 𝐂_wa[i_sm, j_sm][i_wa, j_wa] are kept in the corresponding warp's registers, eliminating any additional memory access costs during accumulation. Tensor Core Level. Eventually, each warp will repeatedly multiply M_wa× K_wa and K_wa× N_wa matrices. While the corresponding matrix dimensions are small at this level, they still typically exceed the fundamental Tensor Core (M_tc, K_tc, N_tc) shape. Consequently, one final partitioning step is required. However, unlike before, 𝐂_tc[i_sm, j_sm][i_wa, j_wa][i_tc, j_tc] are accumulated sequentially by a single warp. While all data is in registers at this point and there is thus no memory access cost, it is still important to perform the loop over k_tc outermost. This is to remove sequential dependencies between Tensor Core operations as much as possible to maximize throughput. It should be noted that actually utilizing Tensor Cores requires further distribution of matrix elements across threads in very specific patterns. However, this is a technical detail mandated by the microarchitecture rather than another flexible opportunity for parallelization. §.§ Mixed-Precision Challenges Adapting the above uniform precision matmul to the mixed-precision case while maintaining peak performance, in particular for medium M where the operation is (close to) memory-bound, is challenging for the following reasons: * The various parallelization levels must be very carefully configured to ensure that the loading of the quantized operand 𝐁 actually is the kernel's main runtime bottleneck; and not, e.g., repeated reloading of full precision 𝐀_sm blocks. * As runtime is dominated by memory loading, this aspect must hit peak efficiency, despite the significantly compressed representation of 𝐁. 
* For medium M, the cost of matmul computations can get close to the overall the memory loading cost, hence requiring extremely careful overlapping to stay close to theoretical performance. Additionally, we also need to manage quantization metadata, making this part even more tricky. * Partitioning constraints forced by Challenge 1, together with the fact that M is not very large, significantly limit parallelization options. This makes it hard to achieve peak memory loading and compute on both the SM and warp level, respectively. This effect is amplified further by existing model matrix shapes which can be unfavorable for specific GPUs. Our MARLIN kernel specifically addresses all of the above challenges, eventually allowing it to achieve close to peak performance in many practical settings. §.§ Kernel Design In what follows, we assume that the matrix 𝐀 is in full FP16 precision, while the K× N matrix 𝐁 has been (symmetrically) quantized to INT4, either with one FP16 scale for each of the N columns or one scale per G consecutive weights in each column, for ⌈ K/G ⌉ N scales in total. Bound By Weight Loading. Executing our target matmul requires, in theory, touching exactly 16MK + 4KN + 16MN bits of memory (reading both operands and writing the results) while executing exactly MKN multiply-accumulate operations, each counted as 2 FLOPs. If M is relatively small, our problem has low arithmetic intensity. Consequently, it should be bound by the cost of reading the quantized weights 𝐁 from global GPU memory. This holds in theory, but we need to organize computation carefully for this to remain true in practice. In contrast to the previously studied <cit.> M = 1 case, where both 𝐀 and 𝐂 are tiny, inputs and outputs now actually have non-negligible size, especially since those operands have 4× higher bit-width than our weights. Hence, we need to pick a sufficiently large N_SM to minimize costly reloading of 𝐀_sm blocks. At the same time, this reduces the number of 𝐂_sm[i_sm, j_sm] sub-problems, making it hard to fully utilize all SMs. The key to working around these problems is exploiting the GPU's L2 cache, which is usually significantly faster than global memory. Additionally, a GPU can load from L2 to L1 and from global to L2 simultaneously. Thus, we can pipeline these loads and essentially hide the bandwidth cost of the 𝐀_sm block loads completely, as long as the overall required memory traffic does not exceed the L2 bandwidth. Consequently, we will proceed by partitioning 𝐂 into blocks of size M × N_sm with N_sm∈{64, 128, 256}, i.e., moderately wide tiles of full input batchsize, and then assigning each corresponding independent matmul sub-problem to one SM. At N_sm = 256, even batchsize M = 64 remains bound by global weight loading. More precisely, global loading of 𝐀_sm blocks remains the bottleneck as long as reading both 𝐀_sm and 𝐁_sm blocks from L2 is faster, i.e.: (2 M K_sm + 0.5 K_sm N_sm) / B_l2 < (0.5 K_sm N_sm) / B_gl, where B_l2 and B_gl denote the L2 and global bandwidth, respectively. Maximizing Loading Bandwidth. In order to maximize practical loading bandwidth, we aim to utilize the widest loads possible; on current GPUs 16 bytes (128 bits) per thread. This means one warp can load 32 × 32 = 1024 INT4 weights with a single instruction. To reach peak efficiency, we need to have 8 threads each in a warp read 128 bytes of contiguous chunks from GMEM (assuming 128-byte-aligned addresses), a full cache line. 
Achieving this for 𝐀 blocks of shape M × K_sm mandates a K_sm of at least 64. Since the weights are static during inference and can thus be preprocessed offline, we simplify things by reshuffling 16 × 64 tiles so that they are laid out contiguously in memory and are thus loaded optimally, which also simplifies corresponding indexing. While we continuously reload 𝐀_sm blocks from L2 cache, each element of 𝐁 is accessed exactly once. Nevertheless, every read will always be put into the L2 cache, potentially evicting parts of 𝐀 that are still needed by some SMs. To avoid such cache pollution, we use the cp.async instruction with an evict_first cache-hint, ensuring that unnecessarily stored 𝐁 data is dropped before any other cache line. Shared Memory Layouts. Overall, we always load asynchronously via Ampere's cp.async instruction from global (or L2) to shared memory; this requires no temporary registers and also makes overlapping these loads with computation much easier. Due to our offline preprocessing of 𝐁, we can simply copy to shared memory in contiguous fashion, avoiding bank conflicts. In contrast, handling the 𝐀 fragments requires a lot more care: specifically, we need to ensure that the 16-byte vectors corresponding to indices ij, (i + 8)j, i(j + 1) and (i + 8)(j + 1) of each 16× 16 FP16 𝐀 block are stored in different memory banks. Only then can ldmatrix.sync instructions execute in conflict-free manner. (Those load 𝐀 operand data and distribute it across warp threads to prepare for Tensor Core use.) This can be achieved by storing 16-byte element ij in an activation tile at location i(i ⊕ j) in the corresponding shared memory tile, where ⊕ denotes the XOR operation <cit.>. Another key aspect of this index transformation is that if a warp reads a contiguous sub-tile of the global 𝐀 tile (e.g., the first 4 rows), then it will be written permuted but still overall contiguously into shared memory. Although undocumented, this appears to be necessary in order to avoid bank conflicts on writing, as we observed when analyzing outputs of the NVIDIA profiler. These index calculations are somewhat complex and potentially slow to take care of dynamically; however, as they only affect a relatively small number of shared memory locations, which remain static throughout the main loop, we can precompute them in registers, accompanied by appropriate unrolling, described below. Memory Load Pipelining. The key to simultaneously reaching close to maximum bandwidth and close to maximum compute is to fully overlap memory loading and Tensor Core math. For global to shared memory loads, this can be achieved via cp.async operations, in every iteration prefetching the 𝐀_sm and 𝐁_sm blocks which will be used P - 1 steps in the future, where P is the pipeline depth (we need one more buffer for the current tile). Additionally, we can prefetch the next sub-tile from shared memory (most GPU operations do not block until they hit a dependency) before accumulating the current partial matmul, for which the operands were already fetched to registers in the previous iteration—this technique is also called double buffering <cit.>. We pick a pipeline depth of P = 4 for two reasons: (a) this seemed sufficient in all of our tests to completely hide latency while fitting into shared memory even for M = 64, and (b) because it is evenly divisible by 2. 
The latter is crucial as it allows us to smoothly unroll across the full pipeline since after P iterations both the pipeline and the register buffer index will always have the same value of 0. This unrolling makes all shared memory addressing completely static, avoiding slow transformed index calculations (see above) by using some of the extra registers that we have available. Finally, we would like to note that this also seemed to be the most reliable way to make the CUDA compiler correctly order instructions to enable actual double buffering. Figure <ref> visualizes the several layers of pipelining used by the MARLIN kernel. Warp Layout. The computation of 𝐂_sm on a single SM must further be subdivided across warps: if done in a direct fashion, each warp would compute an M × (N_sm / #warps) tile of the output. In order to reach peak compute throughput, we would like to use at least four (as Ampere GPUs have four warp schedulers) and ideally eight warps <cit.>, to have additional latency hiding. However, this leads to small tile sizes, especially at smaller N_sm. This is not only problematic for our memory reshuffling discussed above but also hinders Tensor Core throughput since a small tile-width brings more sequential dependencies (as those consecutive operations must use the same accumulators) into Tensor Core operations, which can cause stalls. Instead, we fix the sub-tile width of each warp to 64 and further split the computation across K_sm; Figure <ref> illustrates such a warp layout and Algorithm <ref> provides corresponding pseudo-code. Consequently, multiple warps will accumulate partial results of the same 𝐂_wa[i_sm, j_sm][i_wa, j_wa] in registers. These must then eventually be reduced in shared memory before the final write-out. Yet, this can be done via a logarithmic parallel reduction <cit.>, which typically causes minimal overhead. Dequantization and Tensor Cores. Doing naive type-casts from INT4 to FP16 is slow; instead, we follow a modified version of the binary manipulations of <cit.>. We now illustrate this procedure in the simplest case: converting the INT4 located at positions 12 - 15 in an INT16 to a signed FP16 value. First, we extract just the bits corresponding to our INT4 (via an AND with a mask) and turn bits 1-7 of the result into 0110010 (with an OR); this can be accomplished in a single lop3 instruction, which, however, we seemingly need to emit explicitly. Now, we have an FP16 number with an exponent of 50 and the last 4 mantissa bits corresponding to our conversion target. Consequently, subtracting the FP16 value with exponent 50 and mantissa 0 will give us the floating point representation of exactly our 4 target bits, unsigned. To make this value signed, we further have to subtract 8, which we can however fuse directly into the last 3 bits of the total value we subtract. A similar strategy works for different bit positions. Modern GPUs can simultaneously compute with two separate 16-bit operands packed into a single 32-bit register. Hence, we can efficiently dequantize two INT4s in an INT32 at the same time, using the procedure just described. Finally, we want to dequantize directly into the right register layout for subsequent Tensor Core calls. To do this, we again take advantage of the fact that 𝐁 can be preprocessed offline and reorganize weights such that the 16-byte vector read by each thread contains precisely its necessary 8 quantized weights of 4 separate 16×16 Tensor Core blocks. 
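To make the bit-level idea more concrete, the following is a small host-side Python/NumPy emulation of the simplest variant of this trick, applied to a nibble stored in the low 4 bits of a 16-bit word; the function name and the low-nibble placement are illustrative assumptions, and the kernel itself performs the equivalent on packed 32-bit registers with a single lop3 followed by one FP16 subtraction, as described above.

```python
import numpy as np

def dequant_int4_low_nibble(packed: np.ndarray) -> np.ndarray:
    """Sketch of the mask/or/subtract INT4 -> FP16 conversion (low-nibble case).

    `packed` holds unsigned 16-bit words whose 4 lowest bits are the quantized
    weight q in 0..15, representing the signed value q - 8.
    """
    # Keep only the 4 target bits and set the exponent field so that the bit
    # pattern, reinterpreted as FP16, equals 1024 + q.
    bits = (packed & np.uint16(0x000F)) | np.uint16(0x6400)
    as_fp16 = bits.view(np.float16)
    # Subtracting the FP16 constant 1024 + 8 removes the exponent offset and
    # the zero-point in one fused step, yielding signed values in [-8, 7].
    return as_fp16 - np.float16(1032.0)

packed = np.arange(16, dtype=np.uint16)   # all possible nibbles 0..15
print(dequant_int4_low_nibble(packed))    # -8, -7, ..., 6, 7
```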
Additionally, within an INT32, weights are stored interleaved, according to the pattern 64207531, to power the previously mentioned parallel decoding. At the innermost level, we accumulate the results of an M × 16 times 16 × 64 matmul. We execute this accumulation column-wise, emitting 16×16 times 16×8 Tensor Core mma.sync instructions. This has the advantage over row-wise execution that we can pipeline the dequantization of the next 𝐁 operand with the Tensor Core math of the current column. Groups and Instruction Ordering. For per-output quantization, we can simply scale the final output once before the global write-out. An interesting observation is that despite these loads not being asynchronous to any computation, it is still critical to perform them via cp.async followed by an immediate wait_group instruction, to avoid unfavorable main loop instruction reordering by the compiler. With grouped quantization, which is crucial for maintaining good accuracy, we have to load and apply scaling during the main loop. First, we reorganize scale storage in a similar way as the quantized weights (see above), such that the scales required by the same type of thread, for different 16 × 16 blocks, are packed together and can be loaded from shared memory as a single 16-byte vector. In principle, for group-size 128 and a 𝐁_sm tile shape of 64 × 256, we only need to load new scales from global and shared memory once every other tile (and here only once during the first sub-tile). However, it appears that the compiler is rather brittle with respect to such irregularities in perhaps the most critical section of the code, leading to unfavorable instruction orderings with a 10-20% overall slow-down in some shape settings. Instead, we find that reloading scales from shared memory for every sub-tile maintains peak performance. Doing this adds some technically unnecessary shared memory loads, but there is sufficient extra bandwidth to support this at no overhead, while it otherwise preserves the compiler's well-pipelined instruction ordering for non-grouped quantization. Striped Partitioning. With all the techniques described so far, we can reach near-optimal compute and bandwidth performance, provided matrices are large and can be perfectly partitioned across all SMs over the N axis. In practice, this is rarely the case. The standard remedy in such a situation is to also partition across the K dimension, but for many popular layer shapes and GPU combinations we would need a lot of additional splits to reach an even distribution without significant wave quantization. This in turn adds many global reduction steps, with additional overhead. Instead, we opt for a striped partitioning scheme where the “stripes” of 𝐁 processed by an SM may span across multiple 𝐂_sm tiles (see also Figure <ref>). Concretely, we first determine the number of 𝐁_sm tiles to be processed by each SM, T = ⌈#tiles / #SMs⌉, and then assign (up to) T tiles column-wise starting top-left. Crucially, if we reach the bottom of a tile column but the current SM does not yet own T tiles, we proceed by assigning tiles from the top of the next tile column; in other words, stripes can span across multiple columns. This ensures a roughly uniform distribution of tiles across all SMs, while minimizing the number of required global reduction steps. This strategy is similar to stream-k partitioning <cit.>. We implement the global reduction between stripes of the same tile column serially, from bottom to top. 
The latter approach is most efficient since the bottom-most SM will have its results ready first and the top-most last in the presence of any column spill-over. We perform the reduction in FP16 directly in the output buffer to maximize L2 cache hits and thus minimize any global read overheads. This also keeps the operation essentially in-place, requiring only a small extra lock buffer for synchronization. Finally, we note that for batchsizes ≫ 64, we can virtually replicate 𝐁 for the striped index calculations, followed by a modulo operation to move back into the original matrix, and advance the 𝐀 pointer to the corresponding size-64 input batch segment. This results in significantly fewer global reductions for large input batchsizes (as occur during LLM prefills) and improves compute throughput in this setting. §.§ GPTQ Modifications The quantization format used by MARLIN, designed for peak inference efficiency, is slightly different from that of the original GPTQ implementation <cit.>, yet still produces highly accurate models. We also integrate two small improvements into GPTQ: (a) picking group scales by searching for optimal group-wise clipping thresholds, similarly to <cit.>, and (b) supporting calibration sequences of variable length. We have found these modifications to yield small but consistent accuracy improvements over standard GPTQ, while having the advantage of higher performance. (We also provide a simple conversion script between model formats.) Figure <ref> illustrates perplexity (lower is better) versus model size in bits for our variant of GPTQ versus the original uncompressed models. This shows that MARLIN-quantized models are ≈ 3.33 × smaller at the same perplexity as the uncompressed baseline. While this is not lossless (the ideal gain would be 3.87× at this bit-width and group size), it is a significant improvement, especially given MARLIN's high inference efficiency. § THE SPARSE-MARLIN KERNEL To further improve FLOPS/Byte ratios, we can integrate a 2:4 sparsity scheme on top of the 4-bit quantized weight representation. For background, the Sparse Tensor Cores (SPTCs) in the NVIDIA Ampere architecture provide an effective means to execute 50% sparse matrices on specialized hardware units designed for sparse computation. Yet, to harness SPTCs, certain modifications and extensions to the previously described MARLIN kernel are required. First, to accommodate the constraints of the mma.sp instruction, which enables the utilization of the SPTCs and requires sparse matrices as the Left-Hand-Side (LHS) operand in the tensor operation <cit.>, we have designed new specific data layouts. Specifically, the problem of multiplying 𝐀 with 𝐁 is now reformulated under-the-hood as solving ( 𝐁^⊤𝐀^⊤)^⊤ to produce 𝐂.[Continuing with the notation in Section <ref>, this is multiplying an N× K matrix (weights) with a K× M matrix (activations) and transposing the result to produce the M× N output.] However, this reformulation retains all the techniques and optimizations from the dense MARLIN kernel design previously described. Note that 𝐁 can be preprocessed offline as needed, while 𝐀 can be transposed on-the-fly in SMEM with native support via the ldmatrix instruction and its .trans optional qualifier, without incurring performance degradation. Next, we describe the two new data structures necessary for encoding 2:4 sparse matrices in Sparse-MARLIN, along with their adaptations tailored to address this particular problem: (1) the non-zero values structure, and (2) the metadata indices structure. Quantized non-zero values. 
Figure <ref>, left side, illustrates a 4-bit quantized matrix 𝐁 of size N× K which has been pruned to 2:4 sparsity. The compressed representation of this matrix, depicted in 1_a, will have half the size of the original one in the inner dimension, that is, N× K/2. However, as each value is a 4-bit element, we can apply the dense MARLIN compression approach on top of this to further pack 8 elements into a 32-bit value, as depicted in 1_b, with a final size of N/8× K/2. To maximize memory efficiency, since the weights remain static during inference, each 64× 16 tile is reshuffled so that each thread loads and stores elements in contiguous memory positions, similarly to dense MARLIN. Continuing with Figure <ref>, the colored elements in the non-zero values structure represent an example of the elements supplied by thread T_0 for one 64× 16 block of 𝐁. The paired colors denote elements processed within the same mma.sp instruction, necessitating 4 iterations to compute all elements. This layout ensures that the widest, 128-bit load instructions (4 consecutive 32-bit elements per thread, as shown in 1_b) can be used when loading the non-zero values structure from GMEM. Furthermore, due to the redefinition of the product and since the output 𝐂 will be an FP16 matrix of shape M× N, this layout also ensures 128-bit instructions (e.g., the first eight consecutive output elements in column 0 are stored in T_0's registers) when storing the results transposed from RF to GMEM. Thus, this reformulation of the product not only stores the results transposed without incurring performance degradation, but further improves the efficiency of output writing compared to the baseline dense design. Metadata indices. In order to select the elements from the Right-Hand-Side (RHS) operand 𝐀 that will be necessary in the sparse computation, a metadata structure containing the indices of non-zero elements in the original matrix is required. Figure <ref>, left side, illustrates the metadata indices structure of 𝐁. Since this is 2:4 sparsity, indices will be in the range 0–3, encoded with 2 bits. Based on the data layout described in Figure <ref>, and considering the sparsity selector constraints of the mma.sp instruction, we propose a new data layout for the metadata structure. The sparsity selector indicates the thread(s) within a group of four consecutive threads that will contribute the metadata for the entire group. In the example depicted in Figure <ref>, the sparsity selector can be either 0 (threads T_0, T_1) or 1 (threads T_2, T_3). First, we have to reorder the rows based on the non-zero values structure previously described, as shown in 2_a. This allows us to use 128-bit load instructions from GMEM to SMEM. Then, in order to load the data from SMEM to RF bank-conflict free, we perform a single ldmatrix instruction, which will contain all the information for the next four mma.sp operations to be executed, and which will distribute the information across threads T_0∼ T_3 as required. However, a prior data reshuffling is needed, as 2_b shows. This way, threads T_0, T_1 will contain the information for the first two iterations, and threads T_2, T_3 will have the information for the two remaining ones. Note that all this pre-processing is done offline once, without runtime overhead. 
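As a reference for the two structures just described, the following Python sketch splits one 2:4-sparse row into its non-zero values and the corresponding 2-bit metadata indices; it assumes a simple left-to-right packing for illustration, whereas the kernel additionally applies the offline reshuffling and 4-bit packing discussed above.

```python
import numpy as np

def compress_2_4(row: np.ndarray):
    """Split one 2:4-sparse row into (non-zero values, 2-bit metadata indices).

    For every group of 4 consecutive entries, at most two are non-zero; their
    positions within the group (0..3) are packed as two 2-bit indices per group.
    """
    assert row.size % 4 == 0
    values, meta = [], []
    for g in row.reshape(-1, 4):
        idx = np.flatnonzero(g)[:2]              # positions of the kept weights
        idx = np.pad(idx, (0, 2 - idx.size))     # groups with <2 non-zeros reuse position 0
        values.extend(g[idx])
        meta.append(int(idx[0]) | (int(idx[1]) << 2))   # pack two 2-bit indices
    return np.array(values, dtype=row.dtype), np.array(meta, dtype=np.uint8)

row = np.array([0, 5, 0, -3, 7, 0, 2, 0], dtype=np.int8)   # already 2:4 sparse
vals, meta = compress_2_4(row)
print(vals)   # [ 5 -3  7  2]  -> half the original inner dimension
print(meta)   # [13  8]        -> index pairs (1, 3) and (0, 2)
```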
§ EXPERIMENTAL RESULTS §.§ Kernel Benchmarks In our first set of experiments, we examine the efficiency of MARLIN relative to an ideal kernel, and compare its performance with other popular 4-bit inference kernels, notably the well-optimized PyTorch kernel <cit.>, the AWQ kernel <cit.>, the open-source ExLlamaV2 kernel <cit.>, and the bits-and-bytes kernel <cit.>, on a large matrix that can be ideally partitioned on a target GPU. For this, we choose the NVIDIA A10 GPU, which is popular for inference workloads. This allows all kernels to reach close to their best possible performance. All kernels are executed at 4-bit and groupsize 128. (However, scale formats are not 100% identical, due to small differences between the methods.) Figure <ref> shows our results for a large 72k × 18k matrix. We observe that, while existing kernels achieve relatively close to the optimal 3.87× speedup at batchsize 1 (note the 0.125 bits per weight of storage overhead from the group scales), their performance degrades quickly as the number of inputs is increased. In contrast, MARLIN delivers close to ideal speedups at all batchsizes, enabling the maximum possible 3.87× speedup up to batchsizes around 16-32, and tailing off as we transition from the memory- to the compute-bound matmul regime. Due to its striped partitioning scheme, MARLIN also brings strong performance on real (smaller) matrices and various GPUs. This is demonstrated in Figure <ref>, where we benchmark, at batchsize 16, the overall speedup (relative to FP16) across some individual linear layers in popular open-source models <cit.>, showing similar trends. We observe better speedups on commodity GPUs such as the NVIDIA GeForce RTX 3090, and lower speedups on the flagship NVIDIA A100 GPU; this is because the latter has significantly higher GMEM bandwidth and compute, which makes overheads such as pipeline startup latency or suboptimal partitioning relatively larger, especially on small matrices. Next, we also study what performance can be sustained over longer periods of time, at a locked base GPU clock, as this is a likely scenario in a production setting. Interestingly, we find that reduced clock speeds significantly harm the relative speedups of prior kernels, but have no effect on MARLIN's virtually optimal performance (relative to the lower clock setting). This can be observed in Figure <ref>. Finally, we also tested how MARLIN performed on very large batch sizes, corresponding to the initial prompt-processing “prefill” inference step while running on a powerful GPU like the A100. We observed that, even in this case, MARLIN is nearly identical to an uncompressed compute-bound matmul up to batch size 1024, with only ≈ 10% slow-down at even larger input shapes. We leave optimizations in this particular scenario for future work. Roofline Analysis. To gain a deeper understanding of the computational efficiency of MARLIN, we perform a roofline analysis, which is a widely accepted methodology for evaluating performance potential. Figure <ref> shows the roofline analysis of the matrix multiplication operation performed by the MARLIN kernel on an NVIDIA A10 GPU. Several typical weight matrix sizes (2^12, 2^13, 2^14, 2^15) are tested during the analysis. The markers on the curves are profiling results for different input batch sizes (2^0, 2^1, ..., 2^16). First, note that the GPU itself offers different performance levels, depending on whether the boost clock is enabled and can be sustained (see horizontal lines). 
Generally, we first observe that batchsizes smaller than 64 are memory-bound, while larger batchsizes are compute-bound, confirming our prior intuition. Further, the MARLIN kernel achieves strong hardware utilization across matrix sizes and arithmetic intensities, with the best results for larger matrices. Interestingly, we observe that for time-intensive computations (large matrices and batchsizes), the GPU's clock rate is automatically throttled and FLOP/s correspondingly drop towards the base clock limit. Performance of Sparse-MARLIN. We now examine improvements due to 2:4 sparsity. Figure <ref> shows peak performance of Sparse-MARLIN compared to ideal lines, dense variants, and popular open-source kernels, while Figure <ref> shows sustained performance. These figures again demonstrate strong performance of this implementation, thus validating the extensibility of our design to other formats. §.§ End-to-End Experiments Next, we validate our approach end-to-end (i.e., on full models) in a realistic LLM serving setting. For this, we examine the performance of MARLIN and its sparse variant when integrated into the popular open-source vLLM serving engine <cit.>. Accuracy. In Table <ref> we briefly examine the accuracy difference between the baseline and the sparse and sparse-quantized versions of Llama2. While recovering model accuracy is not our focus in this paper, we note that these results show that accuracy can be well recovered under compression. Integration with vLLM. We first compare the end-to-end performance of MARLIN and Sparse-MARLIN with the default 16-bit kernel via a vLLM integration. The GPTQ-quantized models are used for MARLIN and Sparse-MARLIN, while the original unquantized models are used for the baseline. We perform the benchmark using 64 input tokens per sequence in a batch and let the model generate another 64 tokens for each sequence. We intentionally make the sequence length small so that it can fit into the memories of more types of GPUs and the computation cost is not dominated by attention calculations. Figure <ref> shows the total time it takes for the Llama-2-7B model to generate new tokens on an NVIDIA A10 in the generation phase, which reflects the output token throughput. The MARLIN kernel achieves a speedup of up to approximately 3×, while Sparse-MARLIN provides an additional 1.2× end-to-end speedup on top of MARLIN. The reduction in speedup relative to our prior per-layer experiments is due to the various additional overheads of inference, outside the linear layers that we accelerate. GPU and Model Types. Next, Table <ref> shows MARLIN speedups under a variety of settings using several popular (quantized) models on different GPU types. In addition, for large models, we also examine the impact of sharding the weight matrices across multiple GPUs, supported by vLLM. We find that MARLIN improves performance in all scenarios. The largest speedups happen when inference is memory-bound (up to batchsize ≈ 16) and the GPUs are weaker or fewer in number. This finding is natural, as overheads are relatively more costly when the absolute runtime of core operations is lower. Thus, MARLIN is particularly beneficial in resource-constrained settings. Client Counts. 
Finally, we perform a serving benchmark in a simulated server-client setting and measure the standard TPOT metric (Time Per Output Token, the average latency to generate an output token for each queried sequence) under different querying intensities (queries-per-second or QPS), which influences the average batch size per inference. Figure <ref> shows the results of Llama-2-7B on an NVIDIA RTX A6000 GPU. We observe that MARLIN achieves approximately 2.8× latency reduction, while Sparse-MARLIN provides about 3.3× speedup, noting that speedups relative to FP16 are stable across QPS values. Observe that the speedup relative to the baseline increases as we increase the number of queries per second (QPS). We believe that the explanation for this phenomenon is the following: due to reduced memory movement, MARLIN allows for lower average per-query latency; in turn, this implies that the average batch size at which MARLIN executes is lower than the average batch size for FP16. This, in turn, means that the relative gains of MARLIN will increase as we scale up the number of clients. Figure <ref> shows that MARLIN can also lead to improvements when prompt processing is taken into account, as measured by time to first token (TTFT). § RELATED WORK Due to space constraints, we focus on closely related work on providing efficient support for quantized LLM inference. As noted previously, there is already significant work on accurate LLM weight quantization, with popular methods such as GPTQ <cit.> and AWQ <cit.>, as well as explorations of round-to-nearest (RTN) quantization <cit.>, which is usually less accurate. The MARLIN parallelization approach can be generalized to these quantization methods. In fact, since the original release of our kernel for the GPTQ format, a version of MARLIN supporting AWQ has been introduced independently in vLLM <cit.>. More broadly, LLM quantization methods can also consider compressing both weights and activations <cit.>, with advanced methods such as SmoothQuant <cit.> or QuaRot <cit.>. However, quantization of activations tends to be more complex, due to the emergence of large “outlier” values <cit.>. As such, those approaches tend to either target only 8-bit precision, or require more complex additional processing steps, such as via rotation matrices <cit.>. The MARLIN approach is extensible to this case: for instance, a recent independent follow-up to MARLIN extended our approach to the case where activations are quantized to 8 bits, while weights are quantized to 4 bits <cit.>. § DISCUSSION AND FUTURE WORK We have presented MARLIN, a general approach for implementing mixed-precision kernels for LLM generative inference, which achieves near-optimal efficiency by leveraging new GPU hardware instructions and parallelization techniques. Specifically, we have shown that MARLIN and its sparse counterpart reach near-optimal per-layer efficiency, and can lead to speedups of up to 3× in real-world deployment scenarios, at moderate accuracy impact. In terms of future work, a natural direction is investigating MARLIN support for the recently proposed and more complex techniques for “extreme” compression via vector quantization <cit.>, which require more complex decompression. Another direction is to investigate MARLIN support for additional forms of mixed precision, such as the ones arising from activation compression or sparsity. 
§ ACKNOWLEDGMENTS The authors would like to thank the Neural Magic team, in particular Michael Goin, Alexander Matveev, and Rob Shaw, for support during the writing of this paper, especially with the vLLM integration. This research was supported in part by generous grants from NVIDIA and Google.
http://arxiv.org/abs/2408.12411v1
20240822140317
If Mixed States Are Secretly Quickly Oscillating Pure States, Weak Measurements Can Detect It
[ "Igor Prlina" ]
quant-ph
[ "quant-ph", "hep-th" ]
prlina@ipb.ac.rs Institute of Physics, University of Belgrade, Pregrevica 118, 11080 Belgrade, Serbia § ABSTRACT The apparent nonunitary evolution in the black hole information paradox and recent work on describing wavefunction collapse via nonunitary nonlinear stochastic operators has motivated us to analyze whether mixed states can be distinguished from quickly oscillating pure states. We have demonstrated that the answer is no for all practical purposes if only strong nonpostselected measurements are performed. However, if weak measurements in postselected systems are used, mixed states and quickly oscillating states produce different results. An experimental procedure is proposed which could in principle determine the nature of mixed states stemming from blackbody radiation, decoherence, thermalization in solid-state materials, Unruh radiation and Hawking radiation, among others. The analysis in this work applies to all fast oscillations, including those at the Planck scale. As such, tabletop weak measurements can be used to probe (very specific) potential high-energy behavior, where strong nonpostselected measurements cannot be applied. This work also demonstrates that weak measurements are not equivalent to a set of strong measurements without postselection since measurements which are impossible for all practical purposes need to be excluded. If Mixed States Are Secretly Quickly Oscillating Pure States, Weak Measurements Can Detect It Igor Prlina § INTRODUCTION One of the biggest unresolved problems in quantum theory is the so-called “black hole information paradox” <cit.>. Due to Hawking radiation <cit.>, one can have a pure state corresponding to an isolated system collapsing into a black hole, which subsequently fully evaporates, ending in a mixed thermal state. Such an evolution seems to be non-unitary, which is against the postulates of quantum dynamics. However, there is another important aspect of the black hole information paradox: an isolated system in equilibrium spontaneously increases its entropy, which is forbidden by the laws of thermodynamics. Without addressing the unitarity concerns, one possible resolution to the thermodynamics issue would be if Hawking radiation were actually a pure state, which by definition has zero entropy, yet has properties which make it practically indistinguishable from a mixed state. Of course, pure states and mixed states are vastly different; how can they be practically indistinguishable? One possible answer is averaging over time. This is relevant if the pure state is a superposition of vectors whose relative phases oscillate very quickly in time. In statistical physics, there is a long-standing belief that averaging a single system over a long period of time gives the same probability distribution as averaging over a large ensemble of systems at a given moment of time. This is related to the ergodic theorem <cit.>. This equivalence is used in both statistical and quantum mechanics to pick averaging over ensembles as the preferred method. However, a consequence of the equivalence is that the two averagings must lead to the same result. Namely, if one averages a quantum system over an ensemble and obtains a quantum state, one is allowed to average it over time once more and the state must not change. The averaging over time usually occurs because the experiment lasts for some finite yet small amount of time. 
The study of continuous measurements is quite complicated <cit.>. Instead of continuous measurements, in this paper we will assume that all measurements are perfect and instantaneous. That said, the exact moment at which a measurement occurred cannot be precisely determined. Namely, any device that observes time (say, a stopwatch) has a finite resolution. This means that when we perform a set of measurements at some fixed time, in actuality we are mixing measurement results from experiments occurring in some interval Δ t around the specified moment of time. This interval depends on the resolution of the time measurement device. As such, the measured probability distributions do not correspond to one single measurement at a specified time, but to a set of measurements at different moments in time. Thus, the probabilities are effectively averaged over time. We note that the same procedure applies even in the case of continuous measurements, if these are modeled as a sequence of instantaneous measurements. The averaging is not performed over the duration of the continuous measurement, but over the time interval in which the continuous measurement can start. It has recently been suggested that the collapse of the wavefunction in the act of measurement can be described using nonunitary, nonlinear stochastic evolution <cit.>. Nonunitarity implies that a pure state can evolve into an effectively mixed state, nonlinearity implies that two identical vectors can evolve in different ways, and stochastic evolution can be modeled by quick oscillations. Let us elaborate on the claim that nonlinear evolution can lead to identical vectors evolving in distinct ways. In quantum mechanics, states differing only by a complex multiplicative factor are usually considered to be identical, |ψ⟩≡ C|ψ⟩. Since nonlinear evolution in general does not have to be homogeneous, E(t,C|ψ⟩)≠ CE(t,|ψ⟩), the equivalence no longer holds. However, if we still treat all the states that differ up to a complex factor as identical, the nonlinear evolution will behave as if one state can evolve in different ways. This analysis, as well as the previous discussion, has led us to ask a question, the answer to which is the main goal of this paper: can quickly oscillating states be distinguished from mixed states? As we will demonstrate, without postselection, the answer is no for all practical purposes. However, weak measurements in postselected systems can distinguish the two possibilities. It is very important to note that in this paper we do not claim that mixed states are fundamentally pure quickly oscillating states, nor do we suggest a mechanism which would cause such oscillations. We merely show that if such oscillations exist, they can be detected using weak measurements. As is well-known, quantum mechanics is indeterministic due to quantum measurements, which can only be described in terms of probabilities. In addition to absolute probabilities, in probability theory conditional probabilities also exist. Conditional probabilities in quantum mechanics correspond to postselected systems and are given by the Aharonov-Bergmann-Lebowitz (ABL) rule <cit.>. To measure an observable in a specified quantum state, one must first prepare, that is, preselect a system. At some initial time, a selective measurement is performed, and only the elements of the ensemble that satisfy the preselection condition contribute to the measurement result. 
Postselection is similar: after the measurement, another selective measurement is performed, and the measurement results corresponding to the elements of the ensemble which do not satisfy the postselection condition are discarded. The ABL probability does not only depend on the eigenprojector corresponding to the eigenvalue measured in the experiment. It also depends on all other eigenprojectors corresponding to eigenvalues that could have been the result of the measurement by the same apparatus, but were not. This has led to theoretical discussions on the nature of objective probabilities and quantum counterfactuals <cit.>. A variant of postselected systems has been used to describe the interaction of ions with a conducting surface <cit.>. The analysis of postselected measurements under non-unitary evolution was performed in <cit.>. When one wants to determine the expectation value of a given observable, one usually couples it strongly to the quantum system and performs a strong measurement. This measurement provides more information than the expectation value alone. The entire probability distribution for all eigenvalues of the observable is obtained, and the quantum state collapses in the act of measurement. One can calculate the expectation value from the probability distribution, and ignore the distribution itself. However, fundamentally the information was available and as such the collapse has occurred. There is a way to measure expectation values directly without collapsing the state. Namely, one can choose to weakly couple the observable to the quantum system. The measurement device is not capable of determining the individual measurement results of the given observable for each member of the ensemble; however, the mean value can be observed. This type of measurement can be called a weak non-postselected measurement. When the experimental setup used to obtain weak non-postselected measurements is used under a postselection condition, one obtains so-called weak values of (postselected) weak measurements <cit.>. Usually, using the term “weak measurement” implies that postselection has been performed. Experimentally, weak measurements have been applied to many different practical problems: to amplify the measurement signal <cit.>, to directly measure the wavefunction <cit.>, to measure “trajectories” in the double-slit experiment <cit.> and many more. As we will demonstrate in this paper, weak measurements can be used to distinguish quickly oscillating pure states from mixed states as well. This paper is organized as follows: In Section 2, we discuss averaging over time and averaging over ensemble in quantum mechanics. In Section 3, we discuss weak measurements and weak values, and how they behave under averaging over time. In Section 4, we discuss in greater detail the case of two-state quantum systems. In Section 5, we discuss infinite dimensional quantum systems, in both discrete and continuous basis. In Section 6, an experimental protocol for distinguishing quickly oscillating pure states from mixed states is described, as well as the theoretical implications of the possibility of these measurements. Finally, in Section 7, some concluding remarks about possible extensions of the presented work are given. § AVERAGING OVER ENSEMBLE AND OVER TIME Let us begin by defining what we mean by “quickly oscillating states”. 
One can choose a pure state as some sum, such that each summand has a time-dependent phase and a time-independent real factor: |ψ⟩=A_1e^iφ_1(t)|ψ_1⟩+A_2e^iφ_2(t)|ψ_2⟩+...+A_ne^iφ_n(t)|ψ_n⟩. In the form of a statistical operator, this state can be written as the projector Π̂_ψ=∑_i|A_i|^2|ψ_i⟩⟨ψ_i|+∑_i≠ j A_i^*A_je^i(φ_j(t)-φ_i(t))|ψ_j⟩⟨ψ_i|. When we say a state is quickly oscillating, we assume that the state is (at least approximately for the duration of measurement) of the form (<ref>), and that all the phase differences φ_j(t)-φ_i(t), i≠ j, quickly change in time. If this projector is averaged over a time interval Δ t corresponding to a finite resolution of the time measurement device, such that this interval is long enough that the phase differences change for a large number of multiples of π, one obtains a mixed state: ρ_ψ=∑_i|A_i|^2|ψ_i⟩⟨ψ_i|. A mixed state is obviously different than a pure state. However, states are (usually) not directly observed in quantum mechanics. Instead, they are just a step towards obtaining measurable quantities, that is, expectation values of observables, transition probabilities and higher moments of probability distribution. The expectation value of an observable Ô in state Π̂_ψ is ⟨Ô⟩=Tr[Π̂_ψÔ]=∑_i|A_i|^2⟨ψ_i|Ô|ψ_i⟩+∑_i≠ j A_i^*A_je^i(φ_j(t)-φ_i(t))⟨ψ_i|Ô|ψ_j⟩. After averaging over the time interval Δ t, one obtains the same result as would have been obtained starting from the mixed state: ⟨Ô⟩_t≡(1/Δ t)∫_0^Δ t⟨ψ(t)|Ô|ψ(t)⟩dt=⟨Ô⟩≡Tr[ρ_ψÔ]=∑_i|A_i|^2⟨ψ_i|Ô|ψ_i⟩. Similarly, if we introduce another state Π̂_a=|a⟩⟨a|, the transition probability from Π̂_ψ to Π̂_a, Tr[Π̂_ψΠ̂_a]=|⟨ψ|a⟩|^2, is equivalent to the expectation value of the projector Π̂_a in the state |ψ⟩. Since we have shown that under time averaging, the state Π̂_ψ gets replaced by the mixed state ρ_ψ in the expression for any observable, including Ô=Π̂_a, we conclude that taking the time average of a transition probability in a quickly oscillating state is equivalent to starting from the corresponding mixed state: ⟨Tr[Π̂_ψΠ̂_a]⟩_t=Tr[ρ_ψΠ̂_a]=∑_i|A_i|^2|⟨ψ_i|a⟩|^2. One can also consider time correlations: C_1=⟨ψ(t)|Ô(τ_1)...Ô(τ_N)|ψ(t)⟩=Tr[Π̂_ψÔ(τ_1)...Ô(τ_N)]. The moments in time at which the observables are taken cannot be chosen exactly due to the finite resolution of the time measurement device. As such, in addition to averaging over time t, one must also average over times τ_i. If we do time averaging over t first, due to the fact that the product Ô(τ_1)...Ô(τ_N) does not depend on time t, and due to the linearity of the trace, one obtains ⟨C_1⟩_t=Tr[ρ_ψÔ(τ_1)...Ô(τ_N)], that is, after time averaging over t, the quickly oscillating pure state and the mixed state lead to the same correlator. This result still needs to be averaged over times τ_i but since the intermediate step is the same in both cases, the final result must be identical as well. One can also evaluate the product of means at different moments of time: C_2=⟨ψ(t+τ_1)|Ô|ψ(t+τ_1)⟩...⟨ψ(t+τ_N)|Ô|ψ(t+τ_N)⟩, where we need to average over both t and τ_i as before. Each mean depends only on a single time τ_i, and when averaging over it, time t is treated as a constant phase. As such, using the same arguments as in (<ref>), ⟨⟨ψ(t+τ_i)|Ô|ψ(t+τ_i)⟩⟩_τ_i=Tr[ρ_ψÔ], which does not depend on time. Thus, the product of means after averaging is the same for a quickly oscillating pure state and for a corresponding mixed state. 
Note that this analysis also applies to the special case where all parameters τ_i are equal to zero, which corresponds to the higher moments of the probability distribution. Due to the finite resolution of the time measurement device, one cannot be certain that one is correlating simultaneous experimental results rather than results from slightly different moments of time. As such, one can substitute all projectors onto states of quickly oscillating phases with corresponding mixed states and omit averaging over time. No realistic measurement result will be modified in this way. The quickly oscillating state and the mixed state are fully equivalent for observable quantities, and as such, the question whether the state is fundamentally pure and quickly oscillating, or mixed without rapid time dependence, appears to be metaphysical for all practical purposes. In the following section we will show that this is not the case if one considers weak measurements on postselected systems. Finally, let us note that the previous analysis also applies to the case of continuous measurements as long as they are modeled as a sequence of instantaneous measurements. The first measurement in the sequence collapses the quickly oscillating state into a state that changes slowly in time. Thus, after the initial moment of measurement, quick oscillations have no effect on the experimental result. That said, the initial moment of measurement cannot be precisely controlled just like in the single instantaneous measurement case, and the experimental result needs to be averaged over the resolution of the time measurement device (not over the duration of the continuous measurement). § WEAK VALUES UNDER AVERAGING OVER TIME In this Section, we will focus on weak measurements. Unlike in the non-postselected case, weak measurements in postselected systems do not simply give an expectation value. The postselection introduces a second state into consideration, which can be evolved backwards in time towards the moment of the weak measurement. As such, the weak measurement will depend on the matrix element of the observable indexed by the two states. The weak value of an observable Ô in a quantum system described by the preselected and postselected states |Ψ_1⟩ and |Ψ_2⟩ respectively, is, as shown in [3], given by O_w=⟨Ψ_1|Ô|Ψ_2⟩/⟨Ψ_1|Ψ_2⟩. The pure states |Ψ_1⟩ and |Ψ_2⟩ correspond to statistical operators Π̂_1=|Ψ_1⟩⟨Ψ_1| and Π̂_2=|Ψ_2⟩⟨Ψ_2| respectively. We can rewrite the expression using the statistical operators O_w=Tr[Π̂_1ÔΠ̂_2]/Tr[Π̂_1Π̂_2], with a simple proof Tr[Π̂_1ÔΠ̂_2]/Tr[Π̂_1Π̂_2]=Tr[|Ψ_1⟩⟨Ψ_1|Ô|Ψ_2⟩⟨Ψ_2|]/Tr[|Ψ_1⟩⟨Ψ_1|Ψ_2⟩⟨Ψ_2|]=(⟨Ψ_1|Ô|Ψ_2⟩⟨Ψ_2|Ψ_1⟩)/(⟨Ψ_1|Ψ_2⟩⟨Ψ_2|Ψ_1⟩)=⟨Ψ_1|Ô|Ψ_2⟩/⟨Ψ_1|Ψ_2⟩. As can be seen from equation (<ref>), weak values are complex numbers. The real part of a weak value is directly observable as the position of the pointer of the measurement device, and the imaginary part corresponds to the momentum of the pointer <cit.>. Since the pointer can be a macroscopic object, the uncertainty principle can be ignored. An experimental procedure which directly observes the modulus and the phase of the weak value based on a quantum eraser has been developed as well <cit.>. If the phases of the states quickly oscillate in time, the observable weak value needs to be time-averaged. As such, the measurement result directly corresponds to ⟨O_w⟩_t=⟨Tr[Π̂_1ÔΠ̂_2]/Tr[Π̂_1Π̂_2]⟩_t where ⟨O_w⟩_t≡(1/T)∫_0^T O_w dt, and T is the period of fast oscillations. 
In the previous section we presented standard arguments that a quickly oscillating state Π̂_1 can be substituted by the corresponding mixed state ρ_1. If we do that, we obtain O_w=Tr[ρ̂_1ÔΠ̂_2]/Tr[ρ̂_1Π̂_2]=⟨Tr[Π̂_1ÔΠ̂_2]⟩_t/⟨Tr[Π̂_1Π̂_2]⟩_t. Here we make an important observation: if the state is fundamentally quickly oscillating and pure, the value of the weak measurement will differ from the case when the state is fundamentally mixed. The question regarding the nature of the state is no longer metaphysical, it becomes experimentally testable. The expression (<ref>) is not just a result of the aforementioned substitution. It applies to systems which are preselected in mixed states and can be derived from (<ref>). To do so, let us explain what it means to preselect or postselect a system into a mixed state. Mixed states are described by statistical operators: Hermitian operators with non-negative eigenvalues and unit trace. That means that the preselected and postselected states can be written as ρ_i=∑_i p_iΠ̂_i, ρ_f=∑_j q_jΠ̂_j. Preselecting into the mixed state ρ_i can be done by utilizing different preselection criteria for different members of the ensemble: p_i is the ratio of members of the ensemble that are preselected in the pure state Π̂_i. A similar interpretation applies to postselection. Thus, a weak measurement with a mixed state can be considered as a combination of weak measurements with pure states. The probability of a random member of the ensemble corresponding to the pure states Π̂_i and Π̂_j is p_iq_j. However, not all members of the ensemble satisfy the postselection criterion. Thus, the former probability needs to be multiplied by the probability that the postselection is satisfied, which is the transition probability from the initial pure state to the final pure state, Tr[Π̂_iΠ̂_j]. As such, each weak measurement corresponding to a pure state pair Π̂_i, Π̂_j is weighted by a factor of p_iq_jTr[Π̂_iΠ̂_j], normalized by the total probability of postselection occurring, ∑_k,l p_kq_lTr[Π̂_kΠ̂_l]. Consequently, the weak value is O_w=∑_i,j (p_iq_jTr[Π̂_iΠ̂_j]/∑_k,l p_kq_lTr[Π̂_kΠ̂_l])·(Tr[Π̂_iÔΠ̂_j]/Tr[Π̂_iΠ̂_j]). The trace in the numerator of the first fraction cancels with the trace in the denominator of the second fraction. Using the linearity of the trace, and equation (<ref>), we obtain: O_w=Tr[ρ̂_1Ôρ̂_2]/Tr[ρ̂_1ρ̂_2]. It remains to answer how it is possible that direct averaging over time and substitution of the pure state by the mixed state no longer give the same results. Does this imply that in the case of weak measurements, the equivalence of averaging a system over time and averaging a system over the ensemble of all possible configurations no longer holds? Luckily, the answer is no. As described, in the previous analysis weak measurements are performed in some time interval corresponding to the finite resolution of the time measurement device. Within this interval, the weak measurements are uniformly distributed. A mixed state corresponds to averaging the pure state over the ensemble of all possible configurations. Each configuration is equally present in the mixture. The different configurations in the ensemble are described by different phases, which correspond to different moments in time. However, not all members of the initial ensemble will satisfy the postselection condition at the same rate: different phases have different transition probabilities. By using the mixed state, we overcount the phases which are more likely to survive postselection. 
This would correspond to a set of weak measurements which are not uniformly distributed within the time resolution of the time measurement device. To compensate, the expression needs to be weighted by the probability of satisfying the postselection condition, leading to the correct result. The averaging needs to be performed over the postselected ensemble, not the initial one, and as such, the mixed state corresponding to the averaging over the initial ensemble cannot be directly used in the case of quickly oscillating states. This problem is not present when the state is fundamentally mixed, because the postselection probability is the same for all members of the subensembles corresponding to given pure states in the mixture. § TWO-STATE QUANTUM SYSTEMS In this Section we will focus on the case of two-state systems. We will label these two states |+⟩ and |-⟩. We introduce a quickly oscillating state |ψ_1⟩ corresponding to the preselection condition, and a state |ψ_2⟩ corresponding to postselection which changes slowly with time in the measurement interval: |ψ_1⟩=N_1(|+⟩+Ae^iφ|-⟩), |ψ_2⟩=N_2(|+⟩+Be^iχ|-⟩). The coefficients A and B are taken to be strictly positive. The negative signs can be absorbed into the phases. The coefficients N_i are normalization factors, N_1=(1+A^2)^-1/2, N_2=(1+B^2)^-1/2, but since the expression for the weak value is linear in both the preselected and the postselected state, both in the numerator and the denominator, the normalization factors will cancel out in the expression for weak values. This is relevant for the next Section. The weak value of the observable Ô for the quickly oscillating state |ψ_1⟩ becomes O_w=(⟨+|Ô|+⟩+Be^iχ⟨+|Ô|-⟩+Ae^-iφ⟨-|Ô|+⟩+ABe^i(χ-φ)⟨-|Ô|-⟩)/(1+ABe^i(χ-φ)). With a proper choice of the measured observable, the expression can simplify significantly. For example, let us pick the polarization observable: Ô=|+⟩⟨+|-|-⟩⟨-|≡Ŝ. Its weak value is given by the expression S_w=(1-ABe^i(χ-φ))/(1+ABe^i(χ-φ)). Weak values are complex numbers, with both the real and the imaginary part being directly measurable. After some manipulation, it can be shown that the real part of this weak value is Re[S_w]=((1-A^2B^2)/2AB)/((1+A^2B^2)/2AB+cos(χ-φ)). In order to explicitly evaluate the time average, we will assume that the phase difference depends linearly on time, with a very high frequency: χ-φ=ω t+ϕ, ϕ=const. Since the weak value is a periodic function, the average over a time interval much larger than the period of oscillations is equal to the average over a single period: ⟨Re[S_w]⟩_t=(1/T)∫_0^T Re[S_w]dt. Using the following known integral ∫_0^2π dx/(a+cos x)=2π/√(a^2-1), a>1, we can evaluate the average: ⟨Re[S_w]⟩_t=sgn[1-A^2B^2], that is, the observed real part of the weak value can only ever be either +1 or -1, except in the special case of AB=1, when the expression (<ref>) does not hold; however, it can be seen from (<ref>) that the real part of the weak value is exactly zero in that case. It is important to note that this result does not depend on the frequency of oscillations, as long as the oscillation period is much shorter than the duration of the measurement. The imaginary part of the weak value is given by the expression: Im[S_w]=-2ABsin(χ-φ)/(1+A^2B^2+2ABcos(χ-φ)). If we again assume linear dependence of the phase difference on time, the time averaged imaginary part of the weak value becomes zero, since we are averaging an odd function over its period: ⟨Im[S_w]⟩_t=0. 
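A short numerical sketch (with arbitrary illustrative values of A and B) confirms this time average and contrasts it with the weak value obtained from the corresponding mixed state, which is derived in the next paragraph.

```python
import numpy as np

# Time average of the two-state weak value S_w = (1 - AB e^{i(chi-phi)}) / (1 + AB e^{i(chi-phi)})
# over one period of the fast oscillation, compared with the mixed-state result.
A, B = 0.4, 0.9                                       # illustrative amplitudes
theta = np.linspace(0.0, 2.0 * np.pi, 20001)[:-1]     # chi - phi over one period

S_w = (1.0 - A * B * np.exp(1j * theta)) / (1.0 + A * B * np.exp(1j * theta))
time_avg = S_w.mean()                                 # should equal sgn(1 - A^2 B^2)

mixed = (1.0 - (A * B) ** 2) / (1.0 + (A * B) ** 2)   # weak value for the mixed state

print(f"time-averaged weak value: {time_avg.real:+.4f} {time_avg.imag:+.4f}i")
print(f"sgn(1 - A^2 B^2):         {np.sign(1.0 - (A * B) ** 2):+.0f}")
print(f"mixed-state weak value:   {mixed:+.4f}")
```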
Now we will consider the weak value of the same observable when the quickly oscillating pure state is substituted by the corresponding mixed state: ρ_1=N_1^2(|+⟩⟨+|+A^2|-⟩⟨-|), Π_2=|ψ_2⟩⟨ψ_2|=N_2^2(|+⟩⟨+|+B^2|-⟩⟨-|+Be^iχ|-⟩⟨+|+Be^-iχ|+⟩⟨-|). The weak value is now S_w=(1-A^2B^2)/(1+A^2B^2). Note that the same result holds even when we postselect the system in the corresponding mixed state ρ_2=N_2^2(|+⟩⟨+|+B^2|-⟩⟨-|). We see that this expression corresponds to averaging the numerator and the denominator of expression (<ref>) separately. However, such an averaging is not proper. Just like in the quickly oscillating pure states case, the imaginary part is zero. However, the real part is significantly different. The weak value can take on any value in the range [-1,1], depending on the choice of A and B, and is not limited to +1, -1 and 0 like in the quickly oscillating pure state case. As such, we can see that weak measurement can be used to distinguish whether states are quickly oscillating but fundamentally pure, or if the states are fundamentally mixed. § QUANTUM SYSTEMS OF ARBITRARY NUMBER OF DIMENSIONS The previous analysis can easily be repeated in the case of countably many dimensions. Now, the quickly oscillating pure state is |ψ_1⟩=∑_j A_je^iφ_j|j⟩ and the corresponding mixed state is ρ_1=∑_j|A_j|^2|j⟩⟨j|. We choose two basis vectors, |a⟩ and |b⟩, and we measure the observable Ô=|a⟩⟨a|-|b⟩⟨b|, while for the postselected state we choose |ψ_2⟩=N_2(|a⟩+Be^iχ|b⟩). We can rewrite the states as |ψ_1⟩=N_1(|a⟩+Ae^iφ|b⟩+∑_j≠ a,b A_je^iφ_j|j⟩) and ρ_1=|N_1|^2(|a⟩⟨a|+A^2|b⟩⟨b|+∑_j≠ a,b A_j^2|j⟩⟨j|). The coefficients N_1 and N_2 are normalization factors which will cancel out in the expression for the weak value. The chosen observable Ô is non-zero only in a two-dimensional subspace. Both the initial and final state have the same form as in the two-dimensional case when projected to the said subspace. As such, it is easy to see that the results from the two-dimensional case are obtained once more for countably many dimensions, both in finite and infinite cases. In the case of a continuous basis, the analysis requires more finesse. A quickly oscillating state is taken to be of the following form: |ψ_1⟩=∫_-∞^+∞ A(x)e^iφ(x,t)|x⟩dx, with the corresponding projector Π_1=|ψ_1⟩⟨ψ_1|=∫_-∞^+∞∫_-∞^+∞ A(x)A(y)e^i[φ(x,t)-φ(y,t)]|x⟩⟨y|dxdy. We will assume that A(x) and φ(x,t) are smooth functions. However, like in the previous case, we will take A(x) to be non-negative, and for originally negative values, we will absorb π into the phase. This will make φ(x,t) piecewise continuous and piecewise differentiable. In order for Π_1 to average over time into the desired effectively mixed state, the off-diagonal elements ought to average to zero. This will be the case as long as e^i[φ(x,t)-φ(y,t)] is quickly oscillating in time. However, if we take x and y to be arbitrarily close, the oscillations will stop being fast at some point. As such, time averaging of this quickly oscillating state will never give an exact mixed state. That said, just like for time, there is a finite resolution of realistic measurement. There exists a Δ x such that for all practical purposes, x and x+Δ x are experimentally indistinguishable. As such, matrix elements |x⟩⟨x| and |x⟩⟨x+Δ x| are effectively identical. Thus, the projector Π_1 averages out over time into a matrix for all practical purposes indistinguishable from the mixed state ρ_1=∫_-∞^+∞ A^2(x)|x⟩⟨x|dx, as long as e^i[φ(x,t)-φ(y,t)] is a quickly oscillating function in time for any x, and any y>x+Δ x. 
For simplicity, we can assume φ(x,t)=-ω(x)t+φ(x). For the averaging to hold, the following relationship must be true for all x: |ω'(x)|≫2π/(Δ xΔ t), where ω'(x) is the first derivative of ω(x), and Δ t and Δ x are the state-of-the-art experimental resolutions of time and the observable with a continuous spectrum chosen as the basis, respectively. Due to the finite resolution of the observable X̂, the effective mixed state might occur even at a fixed moment of time, due to averaging over x. This will happen if φ(x) is a quickly oscillating function in x. Otherwise, we will assume that φ(x) is slowly changing with x. As before, we can choose an observable and postselected state such that the quickly oscillating state and the corresponding mixed state can be distinguished via a weak measurement. For the observable, we will pick Ô=∫_a-Δ a^a|x⟩⟨x|dx-∫_a^a+Δ a|x⟩⟨x|dx, and for the postselected state we will take |ψ_2⟩=∫_a-Δ a^a B(x)e^iχ(x)|x⟩dx+∫_a^a+Δ a B(x)e^iχ(x)|x⟩dx. For the weak value we obtain O_w=(∫_a-Δ a^a A(x)B(x)e^i[χ(x)-φ(x,t)]dx-∫_a^a+Δ a A(x)B(x)e^i[χ(x)-φ(x,t)]dx)/(∫_a-Δ a^a A(x)B(x)e^i[χ(x)-φ(x,t)]dx+∫_a^a+Δ a A(x)B(x)e^i[χ(x)-φ(x,t)]dx). In order to evaluate this expression, we need to make assumptions about the phase of the quickly oscillating state, as well as choose suitable values of parameters of the postselected state. In what follows we will assume that the quickly changing phase is φ(x,t)=-Ω xt+Φ x, where Ω and Φ are constants. This choice of phase ensures that the averaging over time is valid as long as |Ω|=|ω'(x)| satisfies the expression (<ref>). If the parameter Φ is large, averaging over x is also allowed, and in the case that φ(x) is a slowly changing function, we approximate it with the linear term. For simplicity, we will choose the phase of the postselected state to always be zero. The amplitude of the quickly oscillating state, A(x), cannot be controlled, but it can be measured without postselection since A^2(x) is the probability of finding the effectively mixed state in state |x⟩. As such, we will treat A(x) as a known function, and use it in our choice of the postselected state. For the function B(x) of the postselected state, we will choose B(x)=NC_1/A(x), a-Δ a<x<a, B(x)=NC_2/A(x), a<x<a+Δ a, where N is the normalization factor, and C_1 and C_2 are positive constant parameters. Now we can evaluate the expression (<ref>): O_w=(C_1+C_2-C_1e^-i(ΩΔ a t-ΦΔ a)-C_2e^i(ΩΔ a t-ΦΔ a))/(C_1-C_2-C_1e^-i(ΩΔ a t-ΦΔ a)+C_2e^i(ΩΔ a t-ΦΔ a)), and it is simple to show that this expression is equivalent to O_w=(1-(C_2/C_1)e^i(ΩΔ a t-ΦΔ a))/(1+(C_2/C_1)e^i(ΩΔ a t-ΦΔ a)). The last expression has the same form as the expression (<ref>), and as such, gives an analogous result after time averaging: ⟨ O_w⟩_t=sgn[1-C_2^2/C_1^2]. In the case that the initial state is mixed, the weak value becomes O_w=(∫_a-Δ a^a A^2(x)B^2(x)dx-∫_a^a+Δ a A^2(x)B^2(x)dx)/(∫_a-Δ a^a A^2(x)B^2(x)dx+∫_a^a+Δ a A^2(x)B^2(x)dx) and after applying the condition (<ref>), the expression evaluates to O_w=(1-C_2^2/C_1^2)/(1+C_2^2/C_1^2). As such, weak measurements can be used to test the nature of effectively mixed states even in the continuous case. § POSSIBLE EXPERIMENTS In Section 3 we have explained how to preselect a system in a mixed quantum state: use different preselection criteria on different members of the ensemble. However, this is not the approach we suggest in potential experiments. We have demonstrated that weak measurements can be used to distinguish mixed states from quickly oscillating ones. 
As such, we should not be preselecting the states ourselves. Instead, we introduce a source of states which should be mixed according to theory. These can be experimentally feasible, like a radiating black body, a quantum state prepared a long time ago that has likely experienced decoherence, or electrons in some material. We can also consider thought experiments involving Unruh radiation <cit.> or Hawking radiation. In Figure 1, we give a sketch of such experiments. For sources of continuous states, two experiments need to be conducted: a strong measurement on a nonpostselected system, which gives the amplitude of the effective mixed state used to choose the proper postselected state, and a weak measurement on a postselected system. We are not aware of any strong arguments why the easy-to-measure states involved in thermal radiation, decoherence, or condensed matter should be quickly oscillating. However, we still suggest conducting the corresponding experiments, since they should not require significant investment for groups which are already experimentally utilizing weak measurements. This is especially true in the finite-dimensional case, given that weak measurements of polarization are relatively common. Weak measurements of Unruh and Hawking radiation are practically unfeasible, but possible in principle. These thought experiments are primarily relevant for theoretical considerations. Figure 1. An abstracted sketch of possible experiments. Column I represents possible sources of mixed states. Object II is the weak measurement device. Object III is the postselection measurement. The represented sources of mixed states are as follows: a) a black body producing thermal radiation; b) a pure state producing a mixed state via decoherence; c) a solid state material described by a mixed state; d) a source of constant acceleration leading to Unruh radiation, represented by a rocket engine accelerating the entire experimental setup; e) a black hole producing Hawking radiation. We do not claim that the quick oscillations exist, nor do we suggest that the quick oscillations occur at Planckian frequencies. However, it is possible that they are indeed present in nature and that they occur at the Planck scale. This on its own is of some importance. Namely, if mixed states are fundamentally pure states oscillating at Planckian frequencies, tabletop weak measurements would be able to observe the effect. As such, we have shown that there exist possible Planck scale phenomena which are observable by weak measurements in postselected systems, while remaining invisible under strong nonpostselected measurements. As such, we suggest there is merit in further, more rigorous study of weak measurements in the framework of quantum field theory, since other applications of weak measurements could be found. It has been argued that weak measurements are equivalent to a set of nonpostselected strong measurements <cit.>. As such, weak measurements would contain no new information relative to nonpostselected measurements. As we have shown in this work, there are experimental questions that cannot be answered by strong nonpostselected measurements, but can be investigated via weak measurements. This occurs because there are measurements which are impossible for all practical purposes. In our case, measurements did not have perfect time resolution. It is obvious that a perfect measurement occurring at a fully fixed moment of time can distinguish a quickly oscillating state from a mixed state. 
However, as explained, such a measurement cannot be performed in a realistic setting. A feasible weak measurement might be equivalent to a set of strong measurements on nonpostselected systems, but such that some of those equivalent measurements are not possible for all practical purposes. Thus, in practice, weak measurements may lead to new information. § CONCLUDING REMARKS The question of whether states are fundamentally mixed, or pure but with quickly oscillating relative phases, is not metaphysical. This question can be answered using weak measurements in postselected systems. Mixed states can be obtained in multiple ways, for example: by merging multiple ensembles together, by performing a strong nonselective quantum measurement, by thermalization, by decoherence, by accelerating the system, by observing a radiating black hole. We suggest that applying weak measurements on these states can determine their true nature. Some of these experiments should be neither difficult nor expensive to do, while some are merely thought experiments. While measuring Hawking radiation from black holes is definitely beyond current experimental reach, the presented results show that, in principle, weak measurements can be used to see if black hole radiation is pure or mixed. This might be of relevance for the so-called black hole information paradox, which questions how a pure state can evolve into a mixed state after evaporating from a black hole. This work has also demonstrated that weak measurements are not equivalent to a set of strong measurements without postselection when measurements which are impossible for all practical purposes are excluded. There are multiple possible extensions of the presented work. It is possible to use weak measurements to find the decoherence rate in different systems. It might also be possible to generalize the framework of weak measurements to quantum field theory, and try to analyze the quantum information paradox with more rigor, as well as look for other questions weak measurements can answer that nonpostselected measurements cannot. Additionally, weak measurements might be used to test different models of objective wavefunction collapse which depend on stochastic evolution, since weak measurements under stochastic evolution should behave similarly to the time averaging studied in this work. § ACKNOWLEDGMENTS I'd like to thank Marko Vojinović, Nikola Paunković, Igor Salom, Aleksandra Gočanin, Časlav Brukner and Mihailo Đorđević for useful discussions. Research supported by the Ministry of Science, Technological Development and Innovations (MNTRI) of the Republic of Serbia. InfParadox Mathur, S. (2009). What Exactly is the Information Paradox? In: Papantonopoulos, E. (eds) Physics of Black Holes. Lecture Notes in Physics, vol 769. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-88460-6_1 Hawking Hawking, S. W. (1975). Particle creation by black holes. Communications in Mathematical Physics, 43(3), 199-220. Ergodic Cornfeld, I. P., Fomin, S. V., & Sinai, Y. G. (2012). Ergodic theory (Vol. 245). Springer Science & Business Media. ContMeasure Kiilerich, A. H. (2018). Quantum Metrology with Continuous Measurements. Doctoral Thesis. ObjectiveCollapse Mertens, L., Wesseling, M., & van Wezel, J. (2024). Stochastic field dynamics in models of spontaneous unitarity violation. SciPost Physics Core, 7(1), 012. ABL Aharonov, Y., Bergmann, P. G., & Lebowitz, J. L. (1964). Time symmetry in the quantum process of measurement. Physical Review, 134(6B), B1410.
Threeboxes Vaidman, L. (1996). Weak-measurement elements of reality. Foundations of Physics, 26(7), 895-906. CounterfactualBAD Kastner, R. E. (1999). The three-box “paradox” and other reasons to reject the counterfactual usage of the ABL rule. Foundations of Physics, 29(6), 851-863. CounterfactualGOOD Mohrhoff, U. (2001). Objective probabilities, quantum counterfactuals, and the ABL rule—A response to RE Kastner. American Journal of Physics, 69(8), 864-873. Rydberg Nedeljković, N. N., Majkić, M. D., Božanić, D. K., & Dojčilović, R. J. (2016). Dynamics of the Rydberg state population of slow highly charged ions impinging a solid surface at arbitrary collision geometry. Journal of Physics B: Atomic, Molecular and Optical Physics, 49(12), 125201. Nano Nedeljković, N. N., & Majkić, M. D. (2023). Critical velocities for the nanostructure creation on a metal surface by an impact of slow highly charged Ar q+, Kr q+, and Xe q+ ions. The European Physical Journal D, 77(1), 3. NonunitaryABL Prlina, I. P., & Nedeljković, N. N. (2015). Time-symmetrized description of nonunitary time asymmetric quantum evolution. Journal of Physics A: Mathematical and Theoretical, 49(3), 035301. WeakMeasurement Aharonov, Y., & Vaidman, L. (1990). Properties of a quantum system during the time interval between two measurements. Physical Review A, 41(1), 11. WeakValueAmplify Jordan, A. N., Martinez-Rincon, J., & Howell, J. C. (2014). Technical advantages for weak-value amplification: when less is more. Physical Review X, 4(1), 011031. WavefunctionMeasure Lundeen, J. S., Sutherland, B., Patel, A., Stewart, C., & Bamber, C. (2011). Direct measurement of the quantum wavefunction. Nature, 474(7350), 188-191. WeakTrajectories Kocsis, S., Braverman, B., Ravets, S., Stevens, M. J., Mirin, R. P., Shalm, L. K., & Steinberg, A. M. (2011). Observing the average trajectories of single photons in a two-slit interferometer. Science, 332(6034), 1170-1173. Eraser Cormann, M., Remy, M., Kolaric, B., & Caudano, Y. (2016). Revealing geometric phases in modular and weak values with a quantum eraser. Physical Review A, 93(4), 042124. Unruh Unruh, W. G. (1976). Notes on black-hole evaporation. Physical Review D, 14(4), 870. UnruhReview Crispino, L. C., Higuchi, A., & Matsas, G. E. (2008). The Unruh effect and its applications. Reviews of Modern Physics, 80(3), 787-838. Weak=Strong Kastner, R. E. (2017). Demystifying weak measurements. Foundations of Physics, 47(5), 697-707.
http://arxiv.org/abs/2408.12361v1
20240822125823
Color superconductivity in the two-flavor quark-meson diquark model
[ "Jens O. Andersen", "Mathias P. Nødtvedt" ]
hep-ph
[ "hep-ph", "nucl-th" ]
arrows,shapes trees matrix,arrows positioning calc,through
http://arxiv.org/abs/2408.12555v1
20240822171237
Ideal topological flat bands in chiral symmetric moiré systems from non-holomorphic functions
[ "Siddhartha Sarkar", "Xiaohan Wan", "Yitong Zhang", "Kai Sun" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall" ]
These three authors contributed equally Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA These three authors contributed equally Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA These three authors contributed equally Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA sunkai@umich.edu Department of Physics, University of Michigan, Ann Arbor, MI 48109, USA § ABSTRACT Recent studies on topological flat bands and their fractional states have revealed increasing similarities between moiré flat bands and Landau levels (LLs). For instance, like the lowest LL, topological exact flat bands with ideal quantum geometry can be constructed using the same holomorphic function structure, ψ_𝐤 = f_𝐤-𝐤_0(z) ψ_𝐤_0, where f_𝐤(z) is a holomorphic function. This holomorphic structure has been the foundation of existing knowledge on constructing ideal topological flat bands. In this Letter, we report a new family of ideal topological flat bands where the f function does not need to be holomorphic. We provide both model examples and universal principles, as well as an analytic method to construct the wavefunctions of these flat bands, revealing their universal properties, including ideal quantum geometry and a Chern number of C = ± 2 or higher. Ideal topological flat bands in chiral symmetric moiré systems from non-holomorphic functions Kai Sun ============================================================================================= Topological flat bands have long been a central focus in modern condensed matter physics due to their unique physical properties and the novel quantum phenomena they can host, such as the fractional quantum Hall effect <cit.>. Recent research has revealed two intriguing insights about these bands: (1) Besides Landau levels (LLs), topological flat bands and the fractional states that they may host can be realized in various physical systems, such as moiré systems, even in the absence of external magnetic fields <cit.>. (2) Although these moiré topological flat bands are created in fundamentally different setups from LLs, they appear to share the same theoretical foundation. To illustrate the deep connection between moiré flat bands and LLs, we consider the lowest LL and chiral twisted bilayer graphene (TBG) as examples. Unlike the lowest LL, which exhibit perfect band flatness and ideal quantum geometry, flat bands in moiré systems are typically not ideal—they are not perfectly flat and their wavefunctions lack ideal quantum geometry. However, these systems can often be adjusted slightly to make their topological flat bands ideal. A well-known example is the chiral limit of TBG <cit.>. Similar ideal topological flat bands can also arise in a variety of systems, such as twisted bilayer checkerboard lattices <cit.>, TBG with a spatially alternating magnetic field <cit.>, single-layer systems with quadratic band crossing points under a periodic strain field <cit.>, the ideal limit of twisted bilayer Fe-based superlattices <cit.>, TBG with strong second harmonic tunneling <cit.>, and chiral twisted bilayer systems with higher-order topological nodes <cit.>. For LLs, the foundation of our understanding are based on a simple fact: due to the algebra of magnetic translations, the lowest Landau level exhibits a simple structure, ψ = f(z) exp(-zz̅/4l^2), where l is the magnetic length and z=x+i y is the complex coordinate. Up to a less important factor, the eigenstates are holomorphic functions of z. 
This holomorphic structure is crucial for understanding both the integer and fractional quantum Hall effects, enabling us to easily formulate their wavefunctions, such as the Laughlin wavefunction, and understand their physical properties. In chiral TBG, as well as other ideal flat band models mentioned above, wavefunctions of the exact flat bands exhibit a structure identical to the lowest LL ψ_𝐤(𝐫) = f_𝐤 - 𝐊(z) ψ_𝐊(𝐫) <cit.>, where f_𝐤 - 𝐊(z) is a holomorphic function and 𝐊 represents the corner of the moiré Brillouin zone. More importantly, the analogy to the lowest LL extends even into the strongly correlated regime. Similar to Landau level systems <cit.>, we can derive exact solutions for fractional states from such ideal exact flat bands <cit.>. It is worthwhile to mention that the deep connection between moiré exact flat bands and LLs extends beyond just the simplest cases. In Landau level systems, more complex exact flat bands and fractional states can be achieved by introducing additional elements to the lowest Landau level. For example, higher LLs can be obtained by applying the raising operator 2∂_z -z/2l, and quantum Hall multilayers can be created by introducing additional layers. In moiré systems, by adding more components, layers, or bands, higher vortexability <cit.> and exact flat bands with higher Chern numbers <cit.> can be achieved. Despite recent studies increasingly revealing evidence of deep and fundamental connections between moiré flat bands and LLs, this Letter explores potential differences between LLs and moiré exact flat bands. We report a new family of exactly flat topological bands that are vortexable <cit.> but cannot be directly traced back to the lowest Landau level. Specifically, we find that in these flat band systems, the Bloch wavefunction can be written as ψ_𝐤 = f_𝐤-𝐤_0(z, z̅) ψ_𝐤_0, where ψ_𝐤_0 is the wavefunction of the same flat band at an arbitrarily chosen reference momentum point 𝐤_0. In contrast to the lowest LL or chiral TBG, where f must be a holomorphic function, here this f function is not holomorphic. This distinction has two important consequences for moiré flat bands. First, because the wavefunction needs to obey the Bloch boundary conditions, if the function f is holomorphic, it must have a pole in the moiré unit cell. As a result, to support ideal flat bands, the wavefunction ψ_𝐤_0 must have at least one zero to cancel the divergence at this pole. In our system, because f is not holomorphic, it does not need to have any poles, and thus ψ_𝐤_0 does not need zeros. Secondly, for holomorphic functions of f, the holomorphic nature implies that the Chern number must be C = ± 1, unless additional degrees of freedom (e.g., using multiple layers <cit.> or models that can be re-written as two decoupled C=1 flat bands <cit.>). However, the flat bands we find here are not confined by this limit. In contrast, they generally have a Chern number of C= ± 2 or higher. Beyond specific model examples, we further demonstrate that these instances are not isolated cases but represent one of two possible pathways toward ideal moiré flat bands: besides the well-known holomorphic option, f(z), there exists a second viable pathway where f is non-holomorphic. This second option is prohibited in LLs or systems that can be mapped to Landau-level-like structures, but it is possible for general moiré systems. We further derive the partial differential equations (PDEs) that a non-holomorphic f function must satisfy. 
Interestingly, these equations can be solved analytically, with solutions expressible as integrals of theta functions. From these analytic solutions, it can be proven that these bands must have ideal quantum geometry and Chern numbers C=± 2. In the discussion section, we delve into the impact of this new knowledge, revealing that the f function solution that we identify has an interesting connection to the ratio between wavefunctions from the first and lowest LLs. We will also comment on the potential implications for fractional quantum states. A simple example– We begin our discussion by examining a simple example that illustrates the key properties of this new family of exact flat bands. For demonstration purposes, we select a model whose Hamiltonian and eigenwavefunctions take the simplest forms, rather than focusing on identifying the most feasible setup for future experimental realization. However, it is important to emphasize that although this model is intended for demonstration purposes, the physics it showcases are generic and represent one of the two possible pathways towards ideal topological flat bands, as will be shown in subsequent sections. Consider a 2× 2 Hamiltonian: ℋ_4(𝐫) = [ 0 𝒟_4^†(𝐫); 𝒟_4(𝐫) 0 ], 𝒟_4(𝐫) = (2i∂_z)^4+α A(𝐫), where ∂_z = 1/2(∂_x - i∂_y), the overbar indicates complex conjugation, A(𝐫) is a periodic moiré potential, and α is the complex amplitude of this potential. In the absence of the moiré structure (α = 0), this Hamiltonian represents a 2D system with a quartic band crossing, where the dispersion near the band crossing follows E ∝ |𝐤|^4 (Fig. <ref>(a)). The moiré potential α A(𝐫) can be induced by a periodic strain ( quadrupole) or 16-pole field, which may result from lattice mismatch with the substrate or moiré lattice reconstruction. This Hamiltonian is both chiral symmetric {ℋ_4,𝒮} = 0 with 𝒮 = σ_z and time-reversal symmetric [ℋ_4,𝒯] = 0 with 𝒯 = σ_x K and K being complex conjugation. For A(𝐫), we require it to preserve three-fold rotational symmetry 𝒞_3z. Given that (∂_z)^4 → e^2π i/3(∂_z)^4 under a three-fold rotation, the moiré potential must obey A(𝒞_3z𝐫) = e^2π i/3A(𝐫). As will be discussed below and in the Supplementary Material (SM) <cit.>, the combination of these symmetries and the fact that 𝒟_4 only has antiholomorphic derivative ensures that the quartic band crossing does not split into band crossings of lower order (e.g., Dirac or quadratic band crossings). Here we choose a simple moiré potential with only first harmonics A(𝐫) = 1/2∑_i=1^3 e^2π i (n-1)/3 e^-i𝐆_i·𝐫, where 𝐆_i represents the reciprocal lattice vector of the moiré structure 𝐆_i= 4π/√(3)a(-sin(2π/3(i-1)),cos(2π/3(i-1)) with a being the moiré period. Note that for this A(𝐫), a mirror symmetry ℳ_y, (x,y) → (x,-y), emerges when α is real. To find if this model support exact flat bands at certain “magic” values of α, we utilize a method introduced in Ref. <cit.>. Here we construct the Birman-Schwinger operator <cit.> T_4(𝐤;𝐫) = - (2i∂_z-k)^-4A(𝐫). where k=k_x+ik_y is an arbitrary wavevector, and compute the eigenvalues of this operator η_𝐤. In Fig. <ref>(b) we plot the inverse of these eigenvalues 1/η_𝐤 at a non-special 𝐤, and numerically verified that these values are independent of 𝐤. Each of these eigenvalues provides a “magic" value of α=1/η_𝐤, at which exact flat bands emerge (see SM for details). In Fig. <ref>(c), we plot the band structure of ℋ_4(𝐫) for one of these magic α values. As shown, there are two exact flat bands at zero energy. 
The wavefunctions of these two flat bands are sublattice-polarized, Ψ_𝐤,1(𝐫)={ψ_𝐤(𝐫),0} and Ψ_𝐤,2(𝐫)= 𝒯Ψ_-𝐤,1(𝐫)={0,ψ_-𝐤^*(𝐫)}, where ψ_𝐤(𝐫) is a zero mode of 𝒟_4(𝐫). It is straightforward to verify that Ψ_𝐤,1 and Ψ_𝐤,2 are related by time-reversal transformation and thus must carry opposite Chern numbers. Using the Wilson loop winding number [in the inset of Fig. <ref>(c)], we determine their Chern numbers to be ± 2. More interestingly, these two flat bands exhibit ideal quantum geometry tr(G(𝐤)) =|F_xy(𝐤)|, where G(𝐤) and F_xy(𝐤) are the Fubini-Study metric <cit.> and the Abelian Berry curvature, respectively. Fig. <ref>(e) shows the wavefunction ψ_Γ(𝐫) at the Γ point (𝐤 =0). One key feature to highlight is that ψ_Γ(𝐫) never reaches zero. This feature directly indicates that f_𝐤(z,z̅)=ψ_𝐤/ψ_Γ must not be a holomorphic function, in direct contrast to other ideal flat bands such as the lowest Landau level or chiral TBG. As will be discussed below, these two cases, f_𝐤 being holomorphic or non-holomorphic, represent the two allowed pathways toward achieving ideal flat bands in moiré systems. Construction of wavefunction.–Although f_𝐤 is not holomorphic, we can still analytically construct Bloch wavefunctions for these ideal flat bands. The clue comes from Fig. <ref>(f), where we plot the function g_𝐤(z,z̅) ≡∂_zf_𝐤(z,z̅) = ∂_z(ψ_𝐤(𝐫)/ψ_Γ(𝐫)). This function shares three key properties with the Bloch wavefunction of ideal flat bands in chiral TBG: (1) Bloch periodic, g(𝐫+𝐚)=g(𝐫) e^i 𝐤·𝐚, (2) having isolated zeros in the unit cell [Fig. <ref>(f)], and most importantly, (3) "vortexable." To prove its vortexability, we begin with the equations that ψ must satisfy, 𝒟_4ψ_𝐤(𝐫) = 𝒟_4(f_𝐤(z,z̅) ψ_Γ(𝐫)) = 0 and 𝒟_4ψ_Γ(𝐫) = 0. By subtracting these two equations, we obtain the equation for g_𝐤: D̃_4 g_𝐤≡(∑_{n=0}^{3} C(4,n) (∂_z^n ψ_Γ)∂_z^{3-n})g_𝐤 = 0, where C(n,k) = n!/(k!(n-k)!) is the binomial coefficient. Because this differential equation does not contain ∂_z, if g_Γ is a solution, then for any arbitrary holomorphic function h(z), h(z) g_Γ must also be a solution. With these three properties of g, we can construct the g function in the same manner as how wavefunctions in chiral TBG are constructed. First, by solving Eq.(<ref>) at Γ (𝐤 =0), we find a non-trivial solution for g_Γ. Unlike the lowest Landau levels or the chiral limit of TBG, where g is strictly zero, our g_Γ has only one isolated zero at the center of the unit cell [Fig. <ref>(g)]. Around this zero, symmetry requires g to obey the asymptotic form g_Γ(z,z̅) ∝ zz̅, which we have also verified numerically. This asymptotic form is crucial as it indicates that we can use this zero to cancel singularities caused by a pole. Hence, we can write down the function g_𝐤 as g_𝐤(z,z̅) = h_𝐤(z;z_0) g_Γ(z,z̅), h_𝐤(z;z_0) = e^{(i/2)(k̅z+k a̅_1 z/a_1)}ϑ((z-z_0)/a_1-k/b_2,τ)/ϑ((z-z_0)/a_1,τ), where a_1 = (𝐚_1)_x+i(𝐚_1)_y is the complexified moiré lattice vector 𝐚_1 = a(1,0), b_2 = (𝐛_2)_x+i(𝐛_2)_y is the complexified moiré reciprocal lattice vector 𝐛_2 = 4π/√(3)a(0,1), k = k_x+ik_y, τ = e^{2π i/3}, ϑ(z,τ) = -i∑_{n=-∞}^{∞}(-1)^n e^{π i τ(n+1/2)^2+π i(2n+1)z} is the Jacobi theta function <cit.>, and z_0 = 0 is the position of the zero in g_Γ(𝐫). Since h_𝐤(z;z_0) is a holomorphic function of z, g_𝐤 satisfies Eq. (<ref>). From the definition of the theta function, it can be verified that the function h_𝐤(z;z_0) is Bloch periodic.
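Since the quasi-periodicity of ϑ(z,τ) is what makes h_𝐤(z;z_0) Bloch periodic, it is straightforward to verify numerically from the series given above. The sketch below (Python; the truncation order and the test point are arbitrary choices) checks the quasi-periodicity relations ϑ(z+1,τ)=-ϑ(z,τ) and ϑ(z+τ,τ)=-e^{-iπτ-2π iz}ϑ(z,τ), which are standard properties of this theta function, together with its simple zero at z=0.

```python
import numpy as np

def theta(z, tau, nmax=60):
    """Jacobi theta function as defined in the text:
    theta(z,tau) = -i * sum_n (-1)^n exp(i*pi*tau*(n+1/2)^2 + i*pi*(2n+1)*z),
    evaluated by truncating the rapidly converging sum at |n| <= nmax."""
    n = np.arange(-nmax, nmax + 1)
    terms = (-1.0) ** n * np.exp(1j * np.pi * tau * (n + 0.5) ** 2
                                 + 1j * np.pi * (2 * n + 1) * z)
    return -1j * np.sum(terms)

tau = np.exp(2j * np.pi / 3)    # tau = e^{2 pi i / 3}, as in the text
z = 0.37 + 0.21j                # arbitrary test point

# Quasi-periodicity underlying the Bloch periodicity of h_k(z; z0):
diff1 = theta(z + 1, tau) - (-theta(z, tau))
diff2 = theta(z + tau, tau) - (-np.exp(-1j * np.pi * tau - 2j * np.pi * z) * theta(z, tau))

print(abs(diff1), abs(diff2))   # both ~ 1e-15
print(abs(theta(0.0, tau)))     # simple zero at z = 0 (and at z = m1 + m2*tau)
```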
Furthermore, since ϑ(z,τ) has simple zeros at positions z=m_1+m_2τ, m_i∈Z, h_𝐤(z;z_0) has a simple zero at z=z_0+a_1 k/b_2 and a simple pole at z = 0 in the unit cell. This and the fact that near 𝐫 = 0, g_Γ(z,z̅) ∼ zz̅ implies that g_𝐤(z,z̅) has a zero at 𝐫 = 0 (near which the function has the form g_𝐤(z,z̅)∼z̅), and another zero at 𝐫_𝐤 = -√(3)a^2/4πẑ×𝐤 (near which the function has the form g_𝐤(𝐫_𝐤+(x,y))∼ z). These two zeros can be seen in Fig. <ref>(f). Constructing the function g_𝐤(z,z̅) using Eq. (<ref>), we numerically verified that it matches with the numerically solved function g_𝐤(z,z̅) up to a constant factor. Note that the function g_𝐤(z,z̅) carries Chern number C = 1 since it has the same form as the lowest LL wavefunctions on a torus <cit.>. Also, note that the periodic part of g_𝐤(z,z̅), which is e^-i𝐤·𝐫g_𝐤(z,z̅), is a holomorphic function of k=k_x+ik_y <cit.>. To obtain f_𝐤(z,z̅), we can utlize the fact that g_𝐤 is a Bloch periodic function, and hence it can be written as a Fourier series g_𝐤(z,z̅) = ∑_𝐆 g_𝐤(𝐆) e^i(𝐆+𝐤)·𝐫, where 𝐆 = m_1𝐆_1+m_2𝐆_2, m_i∈Z, are the reciprocal lattice vectors and g_𝐤(𝐆) are the Fourier amplitudes. Because ∂_zf_𝐤 = g_𝐤, we get f_𝐤(z,z̅) = c(𝐤) ∫ dz̅g_𝐤(z,z̅) = c(𝐤) ∑_𝐆2g_𝐤(𝐆)/i(k+G) e^i(𝐆+𝐤).𝐫, where k+G = (k_x+G_x)+i(k_y+G_y), and c(𝐤) is a 𝐤 dependent constant which does not alter the form of the wavefunction at a given 𝐤. The problem with this form f_𝐤 is that the summand at 𝐆 = 0 has a 1/k type singularity near k=0 (if g_𝐤=0(𝐆=0)≠ 0, which we numerically verified). This fact and the fact that f_𝐤 = 0≡ f_Γ = 1, implies that to have a smooth gauge for f_𝐤(z,z̅) near 𝐤=0, c(𝐤) = ik/(2g_𝐤=0(𝐆=0)). Hence, the full expression for f_𝐤 is f_𝐤(z,z̅) = k/g_𝐤=0(𝐆=0)∑_𝐆g_𝐤(𝐆)/k+G e^i(𝐆+𝐤).𝐫 This extra holomorphic factor k=k_x+ik_y smooths out the gauge near k=0, but gives an extra winding to the Berry phase of f_𝐤 at the edge of the Brillouin zone. This extra winding, in addition to g_𝐤 having Chern number C = 1, implies that f_𝐤 carries Chern number C = 2; as was seen from the Wilson loop spectrum in the inset of Fig. <ref>(c). Since c(𝐤) and the periodic part of g_𝐤(z,z̅) are holomorphic in k=k_x+ik_y, the periodic part u_𝐤(𝐫) = e^-i𝐤·𝐫ψ_𝐤(𝐫) = e^-i𝐤·𝐫f_𝐤(z,z̅)ψ_Γ(𝐫) of the wavefunction ψ_𝐤(𝐫) is holomorphic in k=k_x+ik_y, which immediately implies ideal quantum geometry <cit.>. It is worth noting that there is a similarity between the wavefunction we constructed above and the wavefunction for the exact flat bands in twisted mono-bilayer graphene. We can write the coupled set of equations satisfied by f_𝐤 and g_𝐤 in the following form [ ∂_z -1; 0 D̃_4 ][ f_𝐤; g_𝐤 ] = [ 0; 0 ], where D̃ was defined in Eq. (<ref>). This equation has exactly the same form as the equation satisfied by the sublattice polarized wavefunction in twisted mono-bilayer graphene <cit.>: [ ∂_z -β; 0 D_TBG ][ ψ_𝐤,1; ψ_𝐤,TBG ] = [ 0; 0 ]. This is why our flat band has Chern number C=2, same as the twisted mono-bilayer graphene. However, a crucial difference between these two systems is that, in our case, the lowest LL-type function g_𝐤 is an auxiliary function that does not correspond to an actual physical degree of freedom. In contrast, in the twisted mono-bilayer graphene system, the lowest LL-type function ψ_𝐤, TBG represents a physical degree of freedom. Generic origin of these ideal flat bands– In this section, we consider generic situations, focusing on the fundamental origins and essential ingredients of this new family of ideal flat bands. 
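The Fourier-space formula for f_𝐤 given above is also simple to evaluate numerically once the amplitudes g_𝐤(𝐆) are available. A minimal sketch follows (Python); the reciprocal vectors follow the 𝐆_i convention quoted earlier, while the listed Fourier amplitudes are purely hypothetical placeholders, since in practice g_𝐤(𝐆) would be obtained by solving the zero-mode equation in a plane-wave basis.

```python
import numpy as np

a = 1.0                                        # moire period (illustrative units)
G1 = 4 * np.pi / (np.sqrt(3) * a) * np.array([0.0, 1.0])
G2 = 4 * np.pi / (np.sqrt(3) * a) * np.array([-np.sin(2 * np.pi / 3), np.cos(2 * np.pi / 3)])

def f_k(r, k, g_coeffs, g0_gamma):
    """f_k(z, zbar) = k / g_{k=0}(G=0) * sum_G g_k(G) / (k+G) * exp(i (G+k).r),
    with k+G the complexified momentum (k_x+G_x) + i*(k_y+G_y), as in the text."""
    kc = k[0] + 1j * k[1]
    total = 0.0 + 0.0j
    for (m1, m2), g in g_coeffs.items():
        G = m1 * G1 + m2 * G2
        Gc = G[0] + 1j * G[1]
        total += g / (kc + Gc) * np.exp(1j * np.dot(G + k, r))
    return kc / g0_gamma * total

# Hypothetical Fourier amplitudes g_k(G), indexed by (m1, m2) with G = m1*G1 + m2*G2.
g_coeffs = {(0, 0): 1.0, (1, 0): 0.3 - 0.1j, (0, 1): 0.2j, (-1, -1): 0.05}
print(f_k(r=np.array([0.10, 0.25]), k=np.array([0.40, -0.30]),
          g_coeffs=g_coeffs, g0_gamma=1.0))
```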
For the lowest LL and ideal topological flat bands in moiré systems, a common ingredient is that these flat bands can all be reduced to the problem of finding the null space of a non-Hermitian operator 𝒟(∂_z, z ,z), with 𝒟ψ = 0, where 𝒟 does not contain any ∂_z. For LLs, this operator is simply 2i∂_z - A, where A is the gauge field, while for chiral TBG, 𝒟 is a 2 × 2 matrix. If the Bloch wavefunctions of the ideal flat band ψ_𝐤 can be factorized into the form ψ_𝐤 = f_𝐤-𝐤_0(z,z) ψ_𝐤_0, where ψ_𝐤_0 is the Bloch wavefunction at an arbitrary reference momentum point 𝐤_0, the null-space condition implies that 𝒟ψ_𝐤=𝒟ψ_𝐤_0=0. Consequently, the function f_𝐤-𝐤_0 must satisfy the equation 𝒟(f_𝐤-𝐤_0ψ_𝐤_0) - f_𝐤-𝐤_0𝒟ψ_𝐤_0 = 0 It is easy to verify that this equation only involves derivatives of f, as terms proportional to f (without derivatives) cancels out. Hence, this equation is effectively an equation for g_𝐤 =∂_zf_𝐤. More precisely, if 𝒟 is a n× n matrix, Eq. (<ref>) represents a set of n homogeneous partial different equations of g 𝒟̃ g_𝐤= 0, where the operator 𝒟̃(∂_z, z ,z) does not contain any holomorphic derivative ∂_z. For homogeneous partial differential equations, a trivial solution always exists: g=0. Since g is the anti-holomorphic derivative of f, this trivial solution means that f is holomorphic. Most of the well-known models of ideal flat bands, such as the lowest LL and the chiral limit of TBG, belong to this category. However, beyond this trivial solution, homogeneous PDEs may also support nontrivial solutions. For the lowest LL or TBG, it can be proved that nontrivial solutions are not allowed. However, for generic moiré systems, such solutions are possible, thus providing an alternative pathway towards ideal flat bands. The model discussed above falls into this category. It is important to note that the PDE for g [Eq. (<ref>)] has the same structure as the PDE that a chiral flat band must satisfy. Therefore, we can define another moiré system with an effective Hamiltonian ℋ_eff= [ 0 𝒟̃^†; 𝒟̃ 0 ] If ℋ_eff has chiral flat bands, their Bloch wavefunction provides a nontrivial solution for Eq. (<ref>). Should the flat band of ℋ_eff resemble the lowest Landau level with a Chern number |C|=1, this solution for g leads to an ideal flat band with |C|=2 when we revert to the original model, as exemplified above. If the flat bands of ℋ_eff have a higher Chern number |C|=n, the original model then contains flat bands with |C|=n+1. Band crossings of other orders, splitting of band crossings.–Existing studies have found C=1 holomorphic ideal flat bands can arise from Dirac and quadratic band touchings, i.e., band touchings with dispersion E ∝ |𝐤|^n for n=1 or 2. In this work, we demonstrate that n=4 band touchings produce a different type of ideal flat band with C=2. This naturally raises the question: can other types of ideal flat bands be obtained from band crossings with n=3 or n>4? At least for single-layer systems, we find that the answer is negative. What is unique about n=1, 2 and 4 lies in the fact that these band touchings are protected by symmetries (three-fold rotation C_3z, time-reversal, and chiral). In contrast, for n=3 or n>4, the band crossing generally splits into multiple lower-order band crossings with n=1 or 2, when a moiré potential is introduced (see SM <cit.>). 
If the degeneracy point splits, the eigenvalues of the Birman-Schwinger operator T_n(𝐤; 𝐫) generally do not remain 𝐤-independent [The eigenvalues of T_n(𝐤; 𝐫) indicate the values of α at which there is a zero mode at 𝐤. If the degeneracy at 𝐤 = 0 splits into Dirac crossings, the eigenvalues of T_n(𝐤; 𝐫) indicate at which value of α the Dirac crossing reaches wave vector 𝐤. Since the Dirac crossing reaches different 𝐤 at different values of α, the eigenvalues of T_n(𝐤; 𝐫) become 𝐤-dependent. See also Supplemental Material <cit.>.]. Hence, exact flat bands are generally not possible for any n ∉{2,4} [In the original paper <cit.> using the Birman-Schwinger operator to find “magic” angles, zero modes appearing only at 𝐤=0 for generic angles was one of the requirements for exact flat bands to appear at some “magic” angle. See also SM <cit.>.]. Multiply degenerate exact flat bands.–Before concluding our discussion, we present another model to demonstrate the diversity and rich physics achievable from this new family of exact flat bands. As shown in the appendix, by simply choosing a slightly different moiré potential, our model can give rise to four degenerate exact flat bands. Two of these bands have a total Chern number of C=2, while the other two have C=-2. The wavefunctions of these flat bands can also be analytically constructed, with detailed explanations provided in the appendix and SM <cit.>. Discussion.–The existing examples of exact flat bands with ideal quantum geometry in moiré systems found in the literature rely on the holomorphic structure ψ_𝐤 = f_𝐤 - 𝐤_0(z) ψ_𝐤_0, where f_𝐤(z) is a holomorphic function in z (up to a modification by adding multiple layers). This structure allows these wavefunctions to be “adiabatically" connected to those of the lowest LL, interpreting these wavefunctions as describing electrons in a spatially varying magnetic field <cit.>. The family of single-component wavefunctions described in this article cannot be written in this form, indicating that they are distinct from the lowest LL family, and thus do not correspond to wavefunctions induced by magnetic fields, whether homogeneous or spatially varying. However, these wavefunctions do satisfy ideal quantum geometry, unlike higher LLs <cit.> or “higher vortexable" states <cit.>. It will be interesting to study possible fractional states in this family of flat bands. Since the Bloch wavefunctions here are not holomorphic in z, the many-body wavefunctions of fractional states must also contain non-holomorphic structures, extending beyond the well-understood holomorphic-function-based many-body wavefunctions, such as Laughlin wavefunctions. Additionally, because many fundamental concepts of fractional states are built upon the structure of holomorphic functions and polynomial functions of z, these insights need to be revisited in these new flat band systems <cit.>. Finally, we mention an interesting coincidence. If we define the ratio between the first and lowest Landau levels as f̃ = ψ_1 / ψ_0, it is easy to verify that this f̃ function exhibits the same properties as the non-holomorphic f function we find: although f̃ is non-holomorphic, its anti-holomorphic first-order derivative g̃ = ∂_zf̃ obeys the same condition with our g function: multiplying g̃ by an arbitrary holomorphic function yields another valid g̃. 
Acknowledgements.–This work was supported in part by Air Force Office of Scientific Research MURI FA9550-23-1-0334 and the Office of Naval Research MURI N00014-20-1-2479 and Award N00014-21-1-2770, and by the Gordon and Betty Moore Foundation Award N031710 (KS). apsrev4-1 § MULTIPLY DEGENERATE EXACT FLAT BANDS. Here we choose a different moiré potential for the Hamiltonian defined in Eq. (<ref>) A(𝐫) = 1/2∑_i=1^3 e^2π i (n-1)/3cos(𝐆_i·𝐫). The form of this potential means that Hamiltonian now has 𝒞_6z symmetry. Again, by evaluating the inverse of the eigenvalues of the Birman-Schwinger operator T_4(𝐤;𝐫) at generic value of 𝐤 (see Fig. <ref>(a)), we find the “magic” α values for this potential. Interestingly, some of the eigenvalues of T_4(𝐤;𝐫) are doubly degenerate now. This implies that there are two independent eigenmodes of T_4(𝐤;𝐫), hence two independent zero modes of 𝒟_4(𝐤;𝐫). Therefore, there should be 4 exact flat bands in the spectrum of ℋ_4(𝐫), which we verified in Fig. <ref>(b). But, how do we construct the wavefunctions at generic 𝐤 in this case? To do so, we first evaluate ψ_Γ(𝐫). Note that now there two are exact flat band wavefunctions at Γ on the same sublattice. One of them satisfies ψ_Γ,1(𝒞_3z𝐫) =ψ_Γ,1(𝐫), the other one satisfies ψ_Γ,2(𝒞_3z𝐫) = e^2π i/3ψ_Γ,2(𝐫). We choose the first one because it is always a zero mode even away from magic α, and at magic α, it acquires some special properties that allows for the construction of exact flat band wavefunctions at generic 𝐤. Also, we show in the SM <cit.> that starting from ψ_Γ,1(𝐫) constructing the of wavefunctions at generic 𝐤, we can analytically continue them to 𝐤 = 0 to obtain ψ_Γ,1(𝐫). We plot ψ_Γ,1(𝐫) at the magic α value in Fig. <ref>(d). Although ψ_Γ,1(𝐫) has a ring of zeros around the center of the unit cell, the function ψ_Γ,1(𝐫) in the neighborhood of these zeros is not of type F(z,z̅)z (where F(z,z̅) is non-singular function), which we numerically checked. Hence, again, any lowest LL type exact flat band wavefunction construction of f_𝐤(z)ψ_Γ,1(𝐫), where f_𝐤(z) is a holomorphic function of z, is not possible since such a f_𝐤(z) necessarily has a pole of type 1/z, which is not canceled in f_𝐤(z)ψ_Γ,1(𝐫) unless ψ_Γ,1 in the vicinity of its zeros is of type F(z,z̅)z. However, when we numerically evaluate g_Γ^(1)(z,z̅) by solving Eq. (<ref>), we find that g_Γ^(1)(z,z̅) has two zeros z_0^(1) and z_0^(2) at the two corners of the unit cell (shown in Fig. <ref>(e)). Then, two independent functions g_𝐤^(1)(z,z̅) = h_𝐤(z;z_0^(1))g_Γ^(1)(z,z̅) and g_𝐤^(2)(z,z̅) = h_𝐤(z;z_0^(2))g_Γ^(1)(z,z̅) can be created, both of which satisfy Eq. (<ref>). From there, following the steps outlined in the previous example one can find the wavefunctions ψ_𝐤^(1)(𝐫) and ψ_𝐤^(2)(𝐫). Na'́ively, one may think that, each of these wavefunctions carries C=2, and in total the two wavefunctions carry Chern number C = 4. However, as was shown in <cit.>, even though g_𝐤^(1)(z,z̅) and g_𝐤^(2)(z,z̅) are independent, they are not orthogonal. Upon orthogonalization, the two new functions can be taken as g̃_𝐤^(1)(z,z̅) = g_𝐤^(1)(z,z̅) and g̃_𝐤^(2)(z,z̅) = g_𝐤^(2)(z,z̅) - ⟨ g_𝐤^(1)|g_𝐤^(2)⟩/⟨ g_𝐤^(1)|g_𝐤^(1)⟩g_𝐤^(1)(z,z̅). As was shown in <cit.>, g̃_𝐤^(2)(z,z̅) is topologically trivial. Now, when we evaluate f_𝐤^(i) by integrating g̃_𝐤^(i), we find a singularity of 1/k type for f_𝐤^(1) because g̃_𝐤=0^(1)(𝐆=0)≠ 0, which makes the Chern number of f_𝐤^(1) to be C = 2 (same as what was discussed between Eqs. (<ref>) and (<ref>)). 
However, g̃_𝐤=0^(2)(𝐆=0) = 0 as we show in SM <cit.>, and hence, there is no singularity in f_𝐤^(2). So, its Chern number is C=0 (since g̃_𝐤^(2) has C=0). Therefore, the total Chern number of the two bands together is C=2. This is verified in total winding of 2 of the Wilson loop spectrum in the inset of Fig. <ref>(b). Notice that this total Chern number intuitively matches with our expectation from the winding of the band crossing that we started with. We know that a winding of n around a band crossing gives Berry phase of nπ around it, hence for quartic band crossing gives Berry phase of 4π, which corresponds to Chern number C=2, and this argument is independent of the degeneracy of the flat bands. Furthermore, since 𝒟_4(𝐤;𝐫) is holomorphic in k=k_x+ik_y, and zero modes of 𝒟_4(𝐤;𝐫) are isolated from other eigenmodes of it (otherwise there would be other bands crossing energy E = 0), the zero modes of 𝒟_4(𝐤;𝐫) depend holomorphically on k=k_x+ik_y (this is due to <cit.> Chap. VII, Theorem 1.7). Hence, zero modes of 𝒟_4(𝐤;𝐫) (which are exact flat band wavefunctions) have ideal quantum geometry <cit.>, as is verified numerically in Fig. <ref>(c). figuresection Supplemental Material Ideal topological flat bands in chiral symmetric moiré systems from non-holomorphic functions Kai Sun ============================================================================================= § WAVE FUNCTIONS AND TOPOLOGY OF MULTIPLE FLAT BANDS ON EACH SUBLATTICE In this section, we construct the wave functions for the case considered in Fig. 2 of the main text, and show that the total Chern number of the exact flat band wavefunctions polarized on the same sublattice is C=2. Note that now there are two exact flat band wavefunctions at Γ on the same sublattice. One of them satisfies ψ_Γ,1(𝒞_3z𝐫) =ψ_Γ,1(𝐫), the other one satisfies ψ_Γ,2(𝒞_3z𝐫) = e^2π i/3ψ_Γ,2(𝐫). We choose the first one because it is always a zero mode even away from magic α, and at magic α, it acquires some special properties that allows for the construction of exact flat band wavefunctions at generic 𝐤. Also, we show below that starting from ψ_Γ,1(𝐫) constructing the of wavefunctions at genertic 𝐤, we can analytically continue them to 𝐤 = 0 to obtain ψ_Γ,1(𝐫). Starting from ψ_Γ,1(𝐫) in Fig. 2(d), when we numerically evaluate g_Γ(z,z̅) by solving Eq. (4), we find that g_Γ^(1)(z,z̅) has two zeros z_0^(1) and z_0^(2) at the two corners of the unit cell (shown in Fig. 2(e) in the main text). Then, two independent functions g_𝐤^(1)(z,z̅) = h_𝐤(z;z_0^(1))g_Γ^(1)(z,z̅), g_𝐤^(2)(z,z̅) = h_𝐤(z;z_0^(2))g_Γ^(1)(z,z̅) can be created, where h_𝐤(z;z_0^(i)) = e^i/2 (k̅z+ka̅_1 z/a_1)ϑ(z-z_0^(i)/a_1-k/b_2,τ)/ϑ(z-z_0^(i)/a_1,τ), where a_1 = (𝐚_1)_x+i(𝐚_1)_y is the complexified moiré lattice vector 𝐚_1 = a(1,0), b_2 = (𝐛_2)_x+i(𝐛_2)_y is the complexified moiré reciprocal lattice vector 𝐛_2 = 4π/√(3)a(0,1), k = k_x+ik_y, τ = e^2π i/3, ϑ(z,τ) = -i∑_n=-∞^∞(-1)^n e^π i τ(n+1/2)^2+π i(2n+1)z is the Jacobi theta function <cit.>. Both of the functions in Eq. (<ref>) satisfy Eq. (4) of the main text. To verify that this construction is indeed correct, we show the numerically obtained g_K^(i)(z,z̅) at the corner of the Brillouin zone (K point) in Fig. <ref>. Indeed Eq. <ref> can predict the positions of the zeros in these two functions correctly, as discussed in the caption of Fig. <ref>. From g_𝐤^(i)(z,z̅), following the steps outlined in the previous example one can find the wavefunctions ψ_𝐤^(1)(𝐫) and ψ_𝐤^(2)(𝐫). 
However, there are some subtleties here which dictate the topology of the wavefunctions. Na'́ively, one may think that, each of these wavefunctions carries C=2, and in total the two wavefunctions carry Chern number C = 4. However, as was shown in <cit.>, even though g_𝐤^(1)(z,z̅) and g_𝐤^(2)(z,z̅) are independent, they are not orthogonal. Upon orthogonation, the two new function can be taken as g̃_𝐤^(1)(z,z̅) = g_𝐤^(1)(z,z̅), g̃_𝐤^(2)(z,z̅) = g_𝐤^(2)(z,z̅) - ⟨ g_𝐤^(1)|g_𝐤^(2)⟩/⟨ g_𝐤^(1)|g_𝐤^(1)⟩g_𝐤^(1)(z,z̅). As was shown in <cit.>, g̃_𝐤^(2)(z,z̅) is topologically trivial (we discuss why that is the case below). Now, when we evaluate f_𝐤^(i) by integrating g̃_𝐤^(i), we find a singularity of 1/k type for f_𝐤^(1) because g̃_𝐤=0^(1)(𝐆=0)≠ 0, which makes the Chern number of f_𝐤^(1) to be C = 2 (same as what was discussed between Eqs. (6) and (7) of main text). However, g̃_𝐤=0^(2)(𝐆=0) = 0 as we show below, and hence, there is no singularity in f_𝐤^(2). So, its Chern number is C=0 (since g̃_𝐤^(2) has C=0). First notice that the expression of g̃_𝐤^(2)(z,z̅) in Eq. (<ref>) implies that g̃_𝐤=0^(2)(z,z̅) = 0 since h_𝐤=0(z;z_0^(1)) = h_𝐤=0(z;z_0^(2)) = 1. Therefore, we have to find g̃_𝐤=0^(2)(z,z̅) by doing an analytic continuation from nonzero 𝐤. To this end, let us first define the unit cell periodic part of h_𝐤(z;z_0^(i)) as h̃_𝐤(z;z_0^(i)) = e^-i𝐤·𝐫h_𝐤(z;z_0^(i)) = e^i/2 k(a̅_1 z/a_1-z̅)ϑ(z-z_0^(i)/a_1-k/b_2,τ)/ϑ(z-z_0^(i)/a_1,τ) = e^-i k(𝐛_2·𝐫)/b_2ϑ(z-z_0^(i)/a_1-k/b_2,τ)/ϑ(z-z_0^(i)/a_1,τ), which is a holomorphic function of k. Around 𝐤 =0, h_𝐤(z;z_0^(i)) can be expanded as h_𝐤(z;z_0^(i)) = 1 + h̃_0'(z;z_0) k +i(k z̅+k̅ z)/2 +𝒪(k^2), where h̃_0'(z;z_0) ≡ [∂_k h̃_0(z;z_0)]|_𝐤 = 0=1/2[(∂_k_x-i∂_k_y)h̃_𝐤(z;z_0)]|_𝐤 = 0 and we used h_𝐤=0(z;z_0^(i)) = h̃_𝐤=0(z;z_0^(i)) = 1. Plugging this into the expression of g̃_𝐤^(2)(z,z̅), we obtain g̃_𝐤^(2)(z,z̅) =k(h̃_0'(z;z_0^(2))-h̃_0'(z;z_0^(1))-⟨ g_Γ^(1)|(h̃_0'(z;z_0^(2))-h̃_0'(z;z_0^(1)))g_Γ^(1)⟩)g_Γ^(1)(z,z̅)+𝒪(k^2). Continuing this function to 𝐤=0, we get g̃_𝐤=0^(2)(z,z̅) =(h̃_0'(z;z_0^(2))-h̃_0'(z;z_0^(1))-⟨ g_Γ^(1)|(h̃_0'(z;z_0^(2))-h̃_0'(z;z_0^(1))|g_Γ^(1)⟩)g_Γ^(1)(z,z̅). Note that this procedure was used in <cit.> successfully to obtain wavefunctions for multiply degenerate flat bands. We plot g̃_𝐤=0^(2)(z,z̅) ≡g̃_Γ^(2)(z,z̅) in Fig. <ref>(b). Several comments are in order. * Note that g̃_Γ^(2)(z,z̅) obtained this way should be exactly the same as what we would get if we evaluate ∂̅_̅z̅(ψ_Γ,2(𝐫)/ψ_Γ,1(𝐫)). We plot the numerically obtained ∂̅_̅z̅(ψ_Γ,2(𝐫)/ψ_Γ,1(𝐫)) in Fig. <ref>(a). The fact that the two plots in Fig. <ref> match well proves that the construction in Eq. (<ref>) is correct. * Notice that the singularity g̃_𝐤^(2)∝ k in Eq. (<ref>) has opposite winding than the singularity in f_𝐤(z) ∝ 1/k shown earlier in the main text between Eqs. (6) and (7). Hence, unlike the singularity f_𝐤(z) which increases the Chern number by 1, the singularity in g̃_𝐤^(2) decreases the Chern number by 1, this is why the Chern number carried by g̃_𝐤^(2) is zero. * We numerically find that the function g̃_𝐤=0^(2)(z,z̅) under 𝒞_3z transforms as g̃_𝐤=0^(2)(𝒞_3zz,𝒞_3zz) = e^-2π i/3g̃_𝐤=0^(2)(z,z̅). This transformation property is also intuitively clear since we know that g_Γ^(1) transforms as a scalar under 𝒞_3z (g̃_Γ^(1)(𝒞_3zz,𝒞_3zz) = g̃_Γ^(1)(z,z̅)), h_𝐤=0(z;z_0^(i))=1, and under 𝒞_3z the partial derivative with respect to complex k transforms to ∂_k →e^-2π i/3∂_k. 
Since g̃_𝐤=0^(2)(𝒞_3zz,𝒞_3zz) = e^-2π i/3g̃_𝐤=0^(2)(z,z̅), it necessarily has zero average value in the unit cell, or in other words g̃_𝐤=0^(2)(𝐆=0) = 0. * Since g̃_𝐤=0^(2)(𝐆=0) = 0, we can directly use Eq. (6) to obtain f_𝐤=0^(2). Multiplying f_𝐤=0^(2) by ψ_Γ,1(𝐫), we can obtain ψ_Γ,2(𝐫) as promised. § WHY DOES THE QUARTIC BAND CROSSING NOT SPLIT UNDER THE ADDITION OF MOIRÉ POTENTIAL? We know from representation theory that in 2D spinless systems only linear (at a 𝒞_2z𝒯 or ℐ𝒯 (ℐ is inversion) symmetric 𝐤 point) and quadratic (at a time reversal invariant 𝐤 point which is also 𝒞_n'z (n'≥3) invariant) band crossings are stable. Then one may wonder why the quartic band crossing in our system does not split into Dirac crossings under the application of small moiré potential. The reason behind this comes from two things: (i) the constraints from chiral symmetry 𝒮, time reversal symmetry 𝒯 and three fold rotation symmetry 𝒞_3z on the system, (ii) the holomorphic dependence of 𝒟_4(𝐤;𝐫) on the parameter α. We discuss this below. Even though we want to show that the band crossing at energy E=0 at 𝐤 = 0 does not split for quartic band crossing with the addition of the moiré potential, our procedure will also show why the band crossing of order n=3 and n≥ 5 splits into crossings of lower orders. Hence, we keep the discussion general for the first few steps. We know that 𝒟_n(𝒞_3z𝐤;𝒞_3z𝐫) = e^2π i n/3𝒟(𝐤;𝐫). When α = 0, 𝒟_n(𝐤;𝐫) = (k_x+ik_y)^n. At α = 0, if we insist on moiré periodicity, there are degeneracies at the edge of the Brillouin zone at finite nonzero energies due to zone folding. For perturbatively small α's, band gaps will open at finite nonzero energy at the edge of the moiré Brillouin zone. Since we are interested in the splitting (or lack thereof) at zero energy at the zone center, we can project the Hamiltonian to the two bands closest E = 0. Since the moiré potential does not break 𝒞_3z or 𝒮, the 𝐤·𝐩 Hamiltonian ℋ_eff near 𝐤 = 0 should have the form ℋ_eff = [ 0 𝒟_eff,n^†(𝐤); 𝒟_eff,n(𝐤) 0 ], where 𝒟_eff(𝐤) = (k_x+i k_y)^n + f(k_x+i k_y), where f is some holomorphic function of k_x+ik_y. The function f needs to be holomorphic in k_x+ik_y because the original 𝒟(𝐤;𝐫) is holomorphic; hence a faithful 𝒟_eff,n should also be holomorphic. Now, since 𝒟_eff,n has to also satisfy 𝒟_eff,n(𝒞_3z𝐤) = e^2π i n/3𝒟_eff,n(𝐤), f must satisfy f(e^2π i/3(k_x+i k_y)) = e^2π i n/3f(k_x+i k_y). For different values of n, f(k_x+i k_y) to the lowest order in k are f(k_x+i k_y) = c+𝒪((k_x+i k_y)^3), n = 3, c(k_x+ik_y) + 𝒪((k_x+i k_y)^4), n = 4, c(k_x+ik_y)^2 + 𝒪((k_x+i k_y)^5), n = 5, c(k_x+ik_y)^n mod 3 + 𝒪((k_x+i k_y)^3+(n mod 3)), for any n, where c is some complex number that is a function of α such that c(α=0) = 0. Note that c is a real number if the system has mirror symmmetry. For order n band crossing to split, 𝒟_eff,n must be zero at some nonzero 𝐤. For n=3 or n=5, setting 𝒟_eff(𝐤)=0, we find that there are three 𝐤≠ 0 solutions: k_x+i k_y = (-c)^1/3,e^2π i/3(-c)^1/3,e^4π i/3(-c)^1/3. This is exactly what we see in Figs. <ref>(a,b). However, for n=4, there is one more symmetry: time reversal. Time reversal symmetry prohibits the term c(k_x+ik_y) in 𝒟_eff,4. Hence, quartic band crossing cannot split. § SPLITTING OF BAND CROSSINGS OF DIFFERENT ORDERS § MORE ON THE BIRMAN-SCHWINGER OPERATOR T_N(𝐤;𝐫) To find if there is a “magic” value of α at which exact flat bands appear in this model, we utilize a method introduced in <cit.>. 
We look at the structure of the Bloch Hamiltonian ℋ_n(𝐤;𝐫) = e^{-i𝐤·𝐫}ℋ_n(𝐫)e^{i𝐤·𝐫}, whose eigenfunctions are periodic, where ℋ_n(𝐫) is a 2× 2 moiré Hamiltonian with an n-th order band crossing. The Hamiltonian ℋ_n(𝐤;𝐫) has the off-diagonal term 𝒟_n(𝐤;𝐫) = (2i∂_z-k)^n+α A(𝐫), where k = k_x+ik_y is the complexified wave-vector. If there is an exact flat band, 𝒟_n(𝐤;𝐫) has zero modes for all 𝐤. Writing 𝒟_n(𝐤;𝐫) = (2i∂_z-k)^n+α A(𝐫) = (2i∂_z-k)^n(1- α T_n(𝐤;𝐫)), we define T_n(𝐤;𝐫) = -(2i∂_z-k)^{-n}A(𝐫), which is the Birman-Schwinger operator <cit.>. Since (2i∂_z-k)^n is nonsingular when 𝐤 is not a reciprocal lattice vector (𝐤≠ m_1𝐆_1+m_2𝐆_2, m_i∈Z) for periodic functions, any zero mode of 𝒟_n(𝐤;𝐫) at these non-special 𝐤 values has to be a zero mode of (1- α T_n(𝐤;𝐫)), and hence an eigen-mode of T_n(𝐤;𝐫) with eigenvalue 1/α. Therefore, if the eigenvalues η_𝐤 of T_n(𝐤;𝐫) are independent of 𝐤, then the "magic" α's at which the exact flat bands appear are α = 1/η_𝐤. §.§ When are T_n(𝐤;𝐫) eigenvalues independent of 𝐤? The eigenvalues of T_n(𝐤;𝐫) just indicate the value of α at which there is a zero mode at 𝐤. There are two cases that we need to consider here: * if the band crossing at 𝐤 = 0 does not split under the addition of the moiré potential: This is the case for n ∈{2,4}. In this case, we prove that the eigenvalues of T_n(𝐤;𝐫) are independent of 𝐤 by contradiction (the mathematically rigorous proof can be found in <cit.>; here we give a physical argument). If the T_n(𝐤;𝐫) eigenvalues (η_𝐤) depended on 𝐤, we could set α=η_𝐤, and then there would be an isolated zero mode at 𝐤 for this value of α. Around this zero mode, there would be a nonzero winding of the Berry phase. That would imply that the total winding of the system around the Brillouin zone has changed from n (the value we started with), which is impossible. This implies that the T_n(𝐤;𝐫) eigenvalues are independent of 𝐤. * if the band crossing at 𝐤 = 0 splits under the addition of the moiré potential: This is the case for n= 3 or n≥ 5 (as exemplified in Fig. <ref>). In this case, the eigenvalues of T_n(𝐤;𝐫) indicate at what value of α the Dirac crossing reaches wave vector 𝐤. Since the Dirac crossing reaches different 𝐤 at different values of α, the eigenvalues of T_n(𝐤;𝐫) are now 𝐤 dependent.
http://arxiv.org/abs/2408.11125v1
20240820182534
Towards the Unmanned Aerial Vehicle Traffic Management Systems (UTMs): Security Risks and Challenges
[ "Konstantinos Spalas" ]
cs.CR
[ "cs.CR", "cs.DC" ]
§ ABSTRACT Every aspect of our life depends on the ability to communicate effectively. Organizations that manage to establish communication routines, protocols and means thrive. An Aerial Traffic Management System operates similarly to an organization, but certainly in a stricter manner. Third-party agencies ensure several aspects of its functionality, the foremost of which is considered to be safety. Many people take safety for granted, but it is a difficult part of our daily functions. Thus, apart from digesting the new things and habits of the new era, we simultaneously have to ensure safety in every part of it. It is true that the more data we produce, the more information we create and the more specialization we must introduce in order to be effective on a reasonable time basis. An Unmanned Aircraft System Traffic Management (UTM) system consists of miscellaneous modules, each of which needs its own consideration regarding safety. In other words, a UTM is a state-of-the-art system that demands a high quality of services and specialization if we are to consider it reliable. § INTRO §.§ History of Aviation During the early stages of aviation, the communications used were visual signals, using pieces of colored string or airplane maneuvers that were related to certain preset words. There was the assumption that the possibility of more than one airplane flying in the same terminal area was very low. Thus, it was assumed to be impossible that two planes would collide. Nevertheless, in 1956 two planes crashed over the Grand Canyon. This incident ignited several actions in order to keep flights safe, regarding collaboration and the introduction of one central authority that manages the airspace. Aviation has, in general, passed through several stages with respect to its technological status. At the beginning of the twentieth century, the Wright brothers managed to create some gliders, simultaneously evolving the area of aviation by studying aerodynamic science. It is widely known that aviation and aerospace use state-of-the-art technology. Apart from the structure, the aerodynamics and the size, great evolution has been taking place in the communication and data exchange sector. In the early years, pilots making test or normal flights, in order to declare emergencies or communicate with the base, had to transmit critical performance data. As the field of aviation attracted a lot of notice, and given its capability of monitoring ground fields without getting noticed, part of aviation became militarized. In WWI, military air forces played a critical role in the war's outcome because the countermeasures against that kind of attack were at a very early stage. §.§ Communications and Navigation Guidance in Classical Aviation Radio communication was first used in aircraft just prior to World War I (WWI) <cit.>. The first airborne radios were in zeppelins. Nevertheless, military needs triggered the development of light radio sets that could be mounted on aircraft so that crews could report their observations immediately. The first experimental radio transmission from an airplane was conducted by the U.S. Navy in August 1910. The first aircraft radios transmitted using radiotelegraphy, sending Morse code. This configuration required two-seat aircraft, with the backseat pilot using the telegraph to transmit messages.
During WWI, Amplitude Modulation (AM), which is a way to transmit data in an analog manner, made it possible for one pilot to transmit messages while operating the airplane, eliminating the need for a second crew member having this duty. That led to a lighter configuration, leaving space for machine guns or third-party equipment. Radios are based on radio waves, which are electromagnetic waves and part of the electromagnetic spectrum. The atmosphere is filled with such waves. Each wave occurs at a specific frequency f and has a corresponding period T (and wavelength). Frequency and period are inversely proportional: a high-frequency wave has a short wavelength and a low-frequency wave has a long period, f=1/T, T=1/f. In order to perform communication between two stations, some hardware must be used to manipulate the multi-band microwaves that carry the transmitted information. Such devices are the transmitter, the antennas and the receiver. A transmitter consists of a precise oscillating circuit, or oscillator, that creates an Alternating Current (AC) carrier wave frequency. This is combined with amplification circuits, or amplifiers. The distance a carrier wave travels is directly related to the amplification of the signal sent to the antenna. The antennas are simply conductors of lengths proportional to the wavelength of the oscillated frequency put out by the transmitter. An antenna captures the desired carrier wave as well as many other radio waves that are present in the atmosphere. Finally, a receiver isolates the desired carrier wave with its information. Currently, pilots and the ground control tower mainly communicate via UHF (Ultra High Frequency, around 1 GHz) or VHF (Very High Frequency, around 100-300 MHz) analog voice radios <cit.>. When an analog voice radio communication technology is used, all pilots in the same sector must be tuned to the same frequency in order to communicate with an air traffic controller. This can be challenging considering the expected air traffic growth. Statistical data on air traffic reveals an increasing trend in air transportation. Long-term forecast studies provided by Boeing predict a 5% growth rate of the world air traffic load between 2011 and 2030. This growth is due to many factors such as more competitive low-cost airlines, increased passenger demand and the greater need for companies to provide a better service to their customers. Nowadays, the air traffic load is still increasing, leading to congestion of the worldwide analog voice frequencies allocated to civil aviation. The term “data link” is commonly used among the civil and military aviation community for the digital communications between an aircraft and a ground station (A2G) or between aircraft (A2A). There are several reasons why digital data exchange is preferred. For instance, using a digital way to transmit signals we are able to: * Confirm the ground instructions aircraft receive via special on-board devices. * Correct errors during transmission; thus, no data loss due to signal jamming. Another paradigm of digital communication is the utilization of satellites. Aircraft communicate with satellites for both operational and non-operational services <cit.>. The major role is for safety reasons, using L-band SATCOM services.
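As a small numerical illustration of the frequency, period and wavelength relations quoted earlier in this section (the frequencies are the VHF/UHF values mentioned above; the quarter-wave antenna length is a common rule of thumb, used here only as an assumption for illustration):

```python
# Illustrative check of f = 1/T and wavelength = c/f for the bands quoted above.
C = 299_792_458.0          # speed of light in m/s

for name, f_hz in [("VHF voice", 120e6), ("UHF voice", 1e9)]:
    period = 1.0 / f_hz                 # T = 1/f
    wavelength = C / f_hz               # lambda = c/f
    quarter_wave = wavelength / 4.0     # typical monopole antenna length (rule of thumb)
    print(f"{name}: T = {period:.2e} s, lambda = {wavelength:.2f} m, "
          f"quarter-wave antenna ~ {quarter_wave:.2f} m")
```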
L-band Digital Aeronautical Communications System (LDACS) <cit.> is one of the radio access technologies envisioned for the future aeronautical communication infrastructures that will allow aircraft to be digitally connected to the Aeronautical Telecommunication Networks (ATN) during all phases of flight. Specifically, LDACS shall connect an aircraft operating in the airspace by deploying a network of ground stations, each one of them covering a part of the airspace. An aircraft carrying an LDACS radio will then be able to connect to the Airspace Traffic Management System by communicating with the LDACS Ground Station (GS) covering its current location. This kind of deployment is similar to cellular mobile communication networks, which operate in areas called cells. A Ground Station operating in LDACS will be capable of serving up to 512 aircraft <cit.>, considering the data exchange in general (guidance and communications). §.§ Internet of Things (IoT) The Internet of Things (IoT) describes devices with sensors, processing ability, software and other technologies that connect and exchange data with other devices and systems over the Internet or other communications networks. Many IoT devices are embedded with technology such as sensors and software and can include mechanical and digital machines and consumer objects. Increasingly, organizations in a variety of industries are using IoT to operate more efficiently, deliver enhanced customer service, improve decision-making and increase the value of the business. With IoT, data is transferable over a network without requiring human-to-human or human-to-computer interactions. A thing in the Internet of Things can be a person with a heart monitor implant, a farm animal with a biochip transponder, an automobile that has built-in sensors to alert the driver when tire pressure is low, or any other natural or man-made object that can be assigned an Internet Protocol address and is able to transfer data over a network. Several fields, such as embedded systems, wireless sensor networks, control systems and automation (including home and building automation), independently and collectively enable the Internet of Things. In the consumer market, IoT technology is most synonymous with "smart home" products, including devices and appliances (lighting fixtures, thermostats, home security systems, cameras, and other home appliances) that support one or more common ecosystems and can be controlled via devices associated with that ecosystem, such as smartphones and smart speakers. IoT is also used in healthcare systems. The Internet of Things is infiltrating many businesses. It provides simple means to collect and analyze technical system data to identify and optimize the performance of many things in our private and work lives. This technical revolution is also revealing new challenges and issues with our current IoT technologies. New solutions like Artificial Intelligence, Blockchain or 5G promise to overcome these challenges <cit.>. The enterprise IoT market grew 22.4% to $157.9 billion in 2021, according to the March 2022 update of the IoT Analytics Global IoT Enterprise Spending Dashboard. The market grew slightly slower than the 24% that was predicted last year, due to several factors, including a slower-than-anticipated overall economic recovery, a lack of chipsets and disrupted supply chains. North America was the fastest growing region in 2021 (+24.1%), and process manufacturing was the fastest-growing segment (+25%).
At this point, IoT Analytics forecasts the IoT market size to grow at a CAGR of 22.0% to $525 billion from 2022 until 2027 (fig.<ref>). The five-year forecast has been lowered from the previous year. A number of growth headwinds have had a much more profound impact than previously anticipated, namely supply shortages and disruptions (most notably chip shortages, which are now expected to extend well into 2024 and possibly beyond) and labor shortages, especially for sought-after software jobs. Despite the lowered growth projections, IoT remains a very hot technology topic, with many projects focusing on enhancing people's lives.
§.§ Unmanned Aerial Vehicles (UAVs)
An unmanned aerial vehicle (UAV), commonly known as a drone, is an aircraft without any human pilot, crew or passengers on board. UAVs were originally developed through the twentieth century for military missions. As control technologies improved and costs fell, their use expanded to many non-military applications. These include aerial photography, precision agriculture, forest catastrophe monitoring, environmental monitoring, policing and surveillance, infrastructure inspections, smuggling, product deliveries, entertainment and many others. Various terminologies are employed to refer to unmanned aircraft. Some of them include: UAV (Unmanned Aerial Vehicle), UAS (Unmanned Aerial System), drone, RPA (Remotely Piloted Aircraft), RPV (Remotely Piloted Vehicle) and RPAS (Remotely Piloted Air System). There are slight variations in the terminologies used by different countries and institutions when referring to unmanned aircraft <cit.>. The unmanned aerial vehicle market size is estimated at USD 17.31 billion in 2024 and is expected to reach USD 32.95 billion by 2029, growing at a CAGR of 13.74% during the forecast period (2024-2029) <cit.>. As UAV technologies have matured, they have allowed manufacturers to produce a wide range of models in different sizes, weights and shapes that can carry different sensor payloads, making them favorable across a broad application base. However, the lack of regulations and restrictions on flying UAVs beyond the visual line of sight (BVLOS) in several countries across the world has restrained the market's growth to its full potential. Other factors like security and safety concerns and the availability of trained pilots are also anticipated to challenge the growth of the UAV market to a certain extent. (Source: https://www.mordorintelligence.com/industry-reports/uav-market) UAVs are generally classified by their flying principle, that is, by their aerodynamic structure. Those with heavier mass depend on propulsive thrust to fly and are categorized into two types, rotor type and wing type. Rotor UAVs depend on multiple rotors and the propellers attached to them to generate the amount of thrust required to lift upwards. Differential thrust makes them capable of turns and slips and, in general, of managing their orientation. Similarly, wing-type UAVs depend on their wings to produce the aerodynamic effect that lifts them into the air. This type is further classified into three sub-categories: flapping-wing, fixed-wing and flying-wing. Lightweight UAVs such as parachutes, balloons and blimps rely on aerostatic forces to fly. Floreano et al. <cit.> discussed the different categories of UAVs/drones. Drones are categorized based on their mass and flight time. UAVs with heavier mass have the capability to carry heavy payloads and can perform autonomous and multiple tasks.
Fixed-wing and rotor-type UAVs are heavier in mass and relatively big in structure. Owing to their aerodynamic efficiency, fixed-wing UAVs can achieve longer flight times than rotor types. The hardware in fig.<ref> shows the onboard components used for different applications such as path planning, collision avoidance and inspection during UAV flight. Light detection and ranging (LIDAR) and infrared devices are mainly used for collision avoidance and mapping, whereas the camera and GPS are used for surveillance of a particular area or of the UAV's path in the front and rear directions <cit.>. Fig.<ref> shows a block diagram that distinguishes each function of a UAV and their correlations.
§.§ Authorities Managing Aerial Vehicle Traffic
The Federal Aviation Administration (FAA), which is responsible for the regulation and oversight of civil aviation within the United States, forecasts that the recreational small drone market will saturate at around 1.81 million units by the year 2026 and that the commercial drone fleet will likely be at around 858,000 in the US. In a similar context, the number of consumer leisure drones in Europe is expected to be about 7 million in 2050, with 400,000 drones for governmental and commercial purposes <cit.>.
§.§.§ FAA and UAVs
The Federal Aviation Administration (FAA) is a governing body under the United States Department of Transportation that is responsible for a wide range of regulatory activities related to the United States airspace. In a recently published final rule, the FAA addresses several concerns, such as the need for a system to identify all aircraft flying in the national airspace, as well as the implementation of a system separate from the prevalent Automatic Dependent Surveillance–Broadcast system to prevent interference with manned aircraft. Indicatively, the FAA is responsible for:
* Directing air traffic in controlled airspace
* Protecting people and property during space launches
* Airport safety and inspection
* Standardising airport design and construction
* Regulating flights, inspection standards, and many others,
while remaining alert to new challenges and developments due to technological improvements, implementations, etc. Thus, the FAA enforces additional policies and certifications that allow commercial and recreational flights in US airspace. These are described in detail in Part 107 published by the FAA, which addresses small UAS (sUAS) that weigh less than 55 lbs (24.9 kg). All unmanned aircraft identified by the same document by class and capacity should have a remote identification (RID). RID broadcasts vital operational information including, but not limited to, the drone's ID, its latitude and longitude, current altitude, velocity, the ground control station, and the overall status of the UAS, along with a timestamp (a schematic sketch of such a message is given below). This information must be broadcast in ways that current wireless systems can recognize, record, and process. While US governing agencies retain the use of the term UAS for now, the International Civil Aviation Organization (ICAO) terminology is remotely piloted aircraft systems. The FAA describes the RID implementation as a digital license plate for all UAS flying in United States airspace. It outlines additional policies, including several options for compliance, operating rules, and design and production guidelines for manufacturers. With the September 2023 deadline for compliance drawing near, <cit.> highlights possible deployment applications and challenges.
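The sketch below illustrates the kind of payload an RID broadcast carries, based on the fields listed above. The field names, types and JSON serialization are illustrative assumptions made for this document, not the FAA or ASTM message format.

```python
# Hypothetical sketch of a Remote ID (RID) broadcast payload. Field names
# and types are illustrative assumptions, not the official message format.

from dataclasses import dataclass, asdict
import json, time

@dataclass
class RIDMessage:
    drone_id: str          # unique identifier of the UAS
    latitude: float        # degrees
    longitude: float       # degrees
    altitude_m: float      # current altitude, metres
    velocity_mps: float    # ground speed, metres per second
    gcs_latitude: float    # ground control station position
    gcs_longitude: float
    status: str            # overall status of the UAS, e.g. "airborne"
    timestamp: float       # Unix time of the report

    def to_json(self) -> str:
        """Serialize the report for broadcast over a generic wireless link."""
        return json.dumps(asdict(self))

msg = RIDMessage("UAS-0001", 37.98, 23.72, 120.0, 14.5, 37.97, 23.71,
                 "airborne", time.time())
print(msg.to_json())
```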
Hence, all pilots will be required either to have RID systems that comply with the described specifications or to fly exclusively in designated areas known as FAA-recognized identification areas (FRIAs).
§.§.§ EASA and UAVs
The European Union Aviation Safety Agency (EASA) is the authority that secures the safety of aviation and frames the protection framework in Europe. As an independent and neutral entity, EASA ensures confidence in safe air operations in Europe and worldwide by proposing and formulating rules, standards and guidance; by certifying aircraft, parts and equipment; and by approving and overseeing organisations in all aviation domains. Thus, EASA aims to:
* Ensure the highest common level of safety protection for EU citizens
* Ensure the highest common level of environmental protection
* Provide a single regulatory and certification process among Member States
* Facilitate the internal aviation single market and create a level playing field
* Work with other international aviation organisations and regulators
In order to achieve its mission, EASA breaks down this ultimate purpose into several tasks:
* Draft implementing rules in all fields pertinent to the EASA mission
* Certify and approve products and organisations in fields where EASA has exclusive competence (e.g. airworthiness)
* Provide oversight and support to Member States in fields where EASA has shared competence (e.g. air operations, air traffic management)
* Promote the use of European and worldwide standards
* Cooperate with international actors in order to achieve the highest safety level for EU citizens globally (e.g. EU safety list, Third Country Operators authorisations)
European Union Regulations 2019/947 and 2019/945 set a framework for drones to fly safely in European airspace. Regulation 2019/947 has applied since 2020 to all European members (including Norway and Liechtenstein) and is expected to extend to Switzerland and Iceland. It defines three categories of civil drone operations: open, specific and certified. The open category allows people to operate drones in private areas in a way that does not interfere with regular flights. This category addresses the lowest-risk civil drone operations, and no special authorization is mandatory before flight. It is divided into three subcategories (A1, A2, A3). The specific category covers operations with more risk, where the drone operator receives an authorization before the mission. The certified category refers to flights with high safety risk, and thus certification of both the drone and its operator is mandatory.
§.§.§ International Civil Aviation Organization (ICAO)
The first ICAO exploratory meeting on UAVs was held in Montreal on 23 and 24 May 2006. Its objective was to determine the potential role of ICAO in UAV regulatory development work. The meeting agreed that although there would eventually be a wide range of technical and performance specifications and standards, only a portion of those would need to become ICAO Standards and Recommended Practices (SARPs). It was also determined that ICAO was not the most suitable body to lead the effort to develop such specifications. However, it was agreed that there was a need for harmonization of terms, strategies and principles with respect to the regulatory framework.
§.§ Cryptography
The goods of confidentiality, integrity and data availability are fundamental rights for every person. Securing these fundamentals is a major concern.
As we live in a modern, digital, high-quality and state-of-the-art environment, information flows in various directions via various paths when we use digital devices and communication software. Thus, we rely for the security of our data on the cryptographic systems embedded in such software or hardware. Cryptography, as a method, has its roots in ancient times, when sensitive information had to be hidden during its transmission. With such practices, people managed to preserve the aforementioned fundamentals. The first attempt at a device that enciphers a message dates back to ancient Sparta; that device was a Spartan baton known as the scytale. Later, Caesar came up with a mathematical method that relied on letter shifting. He was able to send messages to his generals without these being broken. As every action creates a reaction, cryptography has its own rival, cryptanalysis, and vice versa. As the latter becomes more efficient and sophisticated, more effective methods of cryptography become a must. As humanity constantly evolves the field of cryptology, which comprises both cryptography and cryptanalysis, it has turned into a remarkable science. Very advanced mathematical techniques are the basis for both legs of this science. Apart from the power of mathematics, computer hardware plays a key role in modern cryptology. A state-of-the-art example is quantum computers, which make the future of cryptography uncertain due to their high computational power: efforts to break ciphers will no longer be so inefficient with respect to time and space.
§.§.§ Symmetric Cryptography
Symmetric key cryptography refers to encryption methods in which both the sender and the receiver share the same key (fig.<ref>). This was the only kind of encryption publicly known until June 1976. Symmetric key ciphers are implemented as either block ciphers or stream ciphers. A block cipher enciphers input in blocks of plaintext, as opposed to individual characters, the input form used by a stream cipher. In one example of a one-time-pad scheme, Alice and Bob both agree on a single, random n-bit binary vector p (known as the pad). In this case, p is the private key shared by Bob and Alice. When Alice would like to transmit a message to Bob, she performs a modulo-2 addition between p and her message m and transmits the result r to Bob: r = m ⊕ p. Note that binary modulo-2 addition is the same as XOR (exclusive OR). This addition constitutes the entirety of the encryption process. When Bob receives Alice's encrypted message, he uses the same pad p and performs the same addition of p to the received message. The result is r ⊕ p = (m ⊕ p) ⊕ p = m ⊕ (p ⊕ p) = m ⊕ 0 = m, which is the original message (a minimal code sketch of this scheme is given below). Since only Alice and Bob know the secret pad, any third party that intercepts the encrypted message will have a difficult time deducing the original message. One of the primary disadvantages of private key cryptography relates to the difficulty of keeping the private key secret, or of using more than one key and keeping them synchronized. In order to protect the cryptosystem from attacks, the private key is frequently changed, and the process of agreeing on a private key may need to take place in person. Furthermore, increasing the number of users in this cryptosystem also increases the chances that the system will be broken. Thus, private key cryptosystems do not scale well. These difficulties are not present in public key cryptography.
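The following is a minimal sketch of the one-time-pad scheme described above: the pad p is a random vector as long as the message, encryption is r = m ⊕ p, and decryption applies the same XOR again. The message and pad length are illustrative choices.

```python
# Minimal one-time-pad sketch: encryption is r = m XOR p, decryption r XOR p = m.

import secrets

def make_pad(n_bytes: int) -> bytes:
    """Random pad, as long as the message (the shared secret key)."""
    return secrets.token_bytes(n_bytes)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR (modulo-2 addition) of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

message = b"meet at the hangar"
pad = make_pad(len(message))

ciphertext = xor_bytes(message, pad)      # r = m XOR p  (Alice)
recovered = xor_bytes(ciphertext, pad)    # r XOR p = m  (Bob)

assert recovered == message
print(ciphertext.hex())
print(recovered.decode())
```

The sketch also makes the scaling problem visible: the pad must be as long as the message, shared in advance, and never reused, which is exactly the key-management difficulty discussed above.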
§.§.§ Asymmetric Cryptography
Public key cryptography, also known as asymmetric key cryptography, takes another approach to the process of encrypting and decrypting. In public key cryptography, Alice and Bob each maintain their own distinct private key and also a distinct public key. A public key is a piece of information that is published for all parties to see. Thus, Bob and Alice each publish a public key, but they also each keep a single piece of information secret: only Alice knows her private key and only Bob knows his private key. Note that an individual's public and private keys are typically related in some way that facilitates and enables the encryption and decryption process. Suppose Alice wishes to send Bob a message (fig.<ref>). Alice begins by looking up Bob's public key. Alice then uses this public key to encrypt her message and transmits the result to Bob. Bob receives Alice's transmitted message and uses his secret private key to decrypt the encrypted message and recover Alice's original message. Notice that the information available to an attacker has increased substantially: the attacker now has knowledge of a public key (which is related to the private key used for decryption), in addition to the ciphertexts. In order to ensure security, it must therefore be difficult to derive the private key from the public key. A well-known implementation of public key cryptography is RSA.
§.§.§ Hash Functions
This type of primitive does not make use of keys. It uses a cipher to generate a hash value of a fixed length from the plaintext. It is nearly impossible for the contents of the plaintext to be recovered from the hash value. Therefore, the hash value is a unique identifier for any given piece of content. In this process, plaintext data of any size is converted into a unique digest of a specific length (fig. <ref>). From its definition, a hash function may appear very similar to encryption, yet hashing and encryption are not the same. The very basic difference between the two is that, unlike encryption, hashing is not meant to be reversed: there is no operation analogous to decrypting the hash value. It basically works by taking plaintext data as input and, using a mathematical algorithm, generating an unreadable output. The output is called a hash digest, hash value or hash code, and it is the unique identifier. Properties of a strong hash algorithm include determinism, pre-image resistance, collision resistance, good speed and the avalanche (snowball) effect. Hash functions are the key element of a wide and well-known technology, the blockchain.
§.§ The Blockchain
A blockchain is a chain of blocks where each block contains a set of transactions that are digitally signed by its "verifier" and stored across the distributed network so that all the legitimate stakeholders can access and verify them <cit.>. Due to the attributes of blockchain such as decentralization, immutability, auditability, transparency, and cryptographic security, it offers various benefits to different domains such as cryptocurrency, financial sectors, private/public segments, insurance, healthcare, supply chain management, the Internet of Things, etc. Blockchains are typically managed by a peer-to-peer (P2P) computer network for use as a public distributed ledger, where nodes collectively adhere to a consensus protocol to add and validate new transaction blocks (a minimal sketch of hash-chained blocks is given below).
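The sketch below ties the two ideas together: a SHA-256 digest as a fixed-length, deterministic fingerprint of arbitrary data, and blocks that are "chained" by embedding the previous block's hash in the next one. The block fields and the example transaction are illustrative assumptions, not a real blockchain format.

```python
# Toy hash-chained blocks: each block stores the hash of its predecessor,
# so tampering with an earlier block breaks the link stored in later ones.

import hashlib, json, time

def sha256_hex(data: bytes) -> str:
    """Fixed-length digest; a one-bit change in the input changes it completely."""
    return hashlib.sha256(data).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """A toy block: a payload plus the hash of the previous block."""
    block = {"timestamp": time.time(),
             "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = sha256_hex(json.dumps(block, sort_keys=True).encode())
    return block

genesis = make_block(["genesis"], prev_hash="0" * 64)
block_1 = make_block(["UAS-0001 submits flight plan FP-17"], genesis["hash"])

# The chain is intact as long as block_1's stored link matches the
# recomputed hash of the genesis block.
print(block_1["prev_hash"] == genesis["hash"])  # True while nothing is tampered with
```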
Although blockchain records are not strictly unalterable, blockchains may be considered secure by design and exemplify a distributed system with high fault tolerance. Blockchain technology provides various benefits, as follows:
* Transparency: Transactions stored on the blockchain are transparent to all participating users. Blockchain uses a distributed ledger (a shared copy of the record) kept by the individual parties, which can only be updated through the consensus mechanism; this means that the record can only be updated if all the legitimate parties agree to do so.
* Security: There are many ways in which blockchain is more secure than other record management systems. Transactions are added after consensus by all permitted parties. Once everyone agrees upon a transaction, it is encrypted and securely linked with the previous block. Secure hashing mechanisms attached to each block are used to protect the blocks that hold the transactions. Hence, it is practically infeasible to tamper with a block, as doing so requires modifications to the other blocks in the chain too.
* Traceability: Tracking of data and processes is easy with blockchain. Transactions are visible to all parties, which provides traceability for any operation. If an enterprise deals with a supply chain, tracking a product is easy with this technology.
* Fast and Efficient: In a traditional system, paperwork is time-consuming, tedious, and prone to human errors. By automating it with blockchain, the process becomes faster and more efficient and operates without any third-party intervention.
* Cost-effective: For any business, profitability and cost-effectiveness are important. With this technology, no intermediary or third party is needed; hence, it is cost-effective.
Blockchain technology became popular and widely known after it was introduced and used in cryptocurrencies, such as Bitcoin, introduced by Nakamoto in 2008. Bitcoin was the first electronic payment system without third-party intervention, using decentralized and distributed peer-to-peer networks. The terms "block" and "chain" were used separately by Satoshi Nakamoto. Blockchain, in simple words, is a technology that provides accessible and verifiable data control over a distributed (decentralized) environment to every participating node in a fast and convenient way. There is no single or centralized authority to validate or verify the nodes. In order to participate in the network, a node has to validate itself by solving a mathematical puzzle called a proof of work. A node that succeeds in a proof of work can introduce a block. We will see the block and its content in detail in the following discussion of the architecture. The action by which new data to be validated and stored on the blockchain is introduced by a node is called a transaction. The blockchain architecture can be seen in fig.<ref>. A blockchain's constituents and subparts are:
* block: a chunk of data (a few transactions) grouped together to form a block in the blockchain
* transaction: a chunk of data to be stored on the blockchain
* transaction verification sequence: a user generates a transaction and sends it to all the nodes on the network; the nodes verify this transaction and add it to their candidate blocks; the nodes then broadcast their candidate blocks to the entire network, and so on.
Blockchains are separated into two types. Public blockchains are less safe, due to the lack of access restrictions, and at the same time slower, because they are open to anyone who wants to become a node.
On the other hand, private blockchains are fast and are supposed to be safe. Safety, though, is a matter of the assumptions we make, because when a provider is private there are few ways to investigate its mechanisms.
§ UNMANNED AIRCRAFT SYSTEM TRAFFIC MANAGEMENT (UTM)
§.§ General
The aforementioned chapters <ref>, <ref> generate the strong belief that in the near future our skies will be occupied by several kinds of aerial vehicles. This prediction makes it mandatory to reconfigure the control of the airspace. Until now, ground control systems have encapsulated all modules and functions under rules related to classic aviation, that is, to manned vehicles. Moreover, the manned vehicles themselves include equipment capable of exchanging data in specific formats with a specific variety of other vehicles. For example, manned aerial vehicles utilize radio frequencies in order to communicate or navigate towards their destination. Moreover, due to the future congestion of the airspace, the communication and guidance infrastructures must manage a diverse mix of aerial vehicles. Thus, there is growing research towards Unmanned Aircraft System Traffic Management (UTM). A UTM is designed to facilitate the integration of UAVs into the National Airspace System (NAS), particularly in the Very Low Level (VLL) airspace, with the utmost emphasis on safety and security. The VLL airspace is set below 400 feet Above Ground Level (AGL). To achieve this objective, the UTM provides distinct services to the stakeholders that comply with the respective country's regulations and are separate from those offered by ATM. The physical integration of UAVs into the NAS entails collaborative efforts between the ATM and UTM, encompassing both regulatory and technological endeavors. Some of the fields on which a UTM focuses are the following:
* Airspace design and operation
* Physical and communication infrastructures
* Technical and communication standards and protocols
* Regulations
A UTM may be constructed by several agents. For instance, if there is one supplier that provides UTM services, the UTM is classified as monolithic. On the other hand, if there is more than one, it is called federated.
§.§ UTM Architectures
In order to use the airspace with both manned and unmanned vehicles, we have to introduce an organization that serves the stakeholders who want to utilize the airspace. These organizations are called UTMs, and they are divided into two main categories based on their architecture: centralized and decentralized UTMs. Both of them, though, share the same principles, must offer the same functions and include almost the same entities. Some of these entities are:
* The UAV operator
* The UAV itself
* A UAS Service that allows operators to use the UTM
* Supplemental Data Service Providers (SDSPs) that provide additional information and data, like weather conditions or terrain morphology
* A regulator that is responsible for authorizing all the parties and making them capable of being part of this ecosystem; the major aim of the regulator is safety.
The key element that distinguishes the two architectures is the way data is exchanged between the stakeholders. If there is one entity that controls the data exchange, then the architecture is considered centralized. On the other hand, if the stakeholders that utilize the UTM are free to exchange data in either an open or a closed manner, without a central entity that controls the data flow, then the architecture is considered decentralized.
§.§ Centralized Architecture
A centralized UTM is based on a central entity having the ability to support relationships and information flow between the stakeholders. The foundation of a centralized UTM architecture is argued for on safety grounds: a single trustworthy source of critical information is usually considered more accurate and safer. While a centralized system is considered more affordable, it has a significant drawback related to the single point of failure <cit.>. There are a variety of factors that may trigger a single-point-failure incident, jeopardising the safety of the centralized UTM. Some of these factors may be:
* Pilot errors
* UAV technical malfunctions
* Bad weather conditions
* Unexpected events, like a loss of power supply
As the number of UAVs increases, the airspace becomes more and more dense. This is analogous to command and management principles, where the more people must be managed, the more levels of management must be introduced. Thus, when many aerial vehicles fly in the sky, the probability of an incident caused by the aforementioned factors is higher. For instance, if two or more UAVs declare an emergency simultaneously, the centralized entity must manage this unwanted situation. The dilemma that arises then is where to allocate most of the resources. If the UTM spends the bandwidth and the computational capacity on handling the emergency, then the other vehicles that are still in the sky may encounter high latency; hence, the possibility of a sequence of further accidents increases. In other words, the aforementioned latency, combined with sudden changes in velocity or trajectory, can negatively impact the efficiency of the decision-making process. Therefore, the combination of the dynamic behavior of UAVs, the increasing complexity of the airspace with a growing number of UAVs, and the latency introduced by centralized communication poses significant challenges to the effectiveness and efficiency of a centralized UTM system. Considering the criteria that establish an architecture as centralized, we may distinguish which entity is in charge of managing the operation of the UTM. Thus, we classify centralized architectures as follows:
* Based on Regulator Rules. A centralized architecture based on regulator rules relies on a central entity like the FAA (see ch. <ref>). This kind of centralized architecture is significantly strict. The rules that operators must follow in order to utilize the UTM are correlated with the capabilities and technical characteristics of the unmanned vehicles. Based on the literature, there are two such architectures, one from the USA (US UTM) and the other from India. The former was developed through the collaboration between the FAA and the National Aeronautics and Space Administration (NASA). This kind of architecture faces issues of latency and scalability, and thus the decision-making procedures in case of emergency increase the overall latency, provoking additional risks. The US UTM backbone is based on constant communication between the regulator, the FAA (see the discussion of the regulator in ch. <ref>), and the US UTM, as shown in fig. <ref>.
* Hierarchical Centralized Architecture. The rationale behind the Hierarchical Centralized Architecture is that aerial vehicles are separated according to altitude.
Thus, the limit is set at 400 meters, which recalls the VLL airspace that contains the dense and crowded traffic of many vehicles. Taiwan is the country that implements this kind of architecture. In this case, the centralized entity that communicates with the stakeholders still exists, see fig. <ref>. The centralized architecture includes a UTM cloud and a principal UTM server. The role of the UTM cloud is to receive UAV surveillance data, whereas the principal UTM server receives and processes data from the cloud. This approach focuses primarily on the surveillance function of the UTM.
* Centralized Service-oriented Architecture. In this mode of operation, the UTM reorganises the software services and infrastructure so that they interact with one another. The architecture is simple and is based on the integration, sharing and reuse of services from several providers. Thus, the services must use common interface standards and protocols so that they can be rapidly incorporated into new applications.
* Centralized Architectures based on a Specific Cellular Network. While cellular technologies are widely used in UTMs, some centralized architectures base their operation on the scalability of the cellular concept. Cellular networks are based on high-frequency radio waves. The technology introduced so far is 4G, and the corresponding frequencies are varied, including the 600 MHz, 700 MHz, 1700/2100 MHz, 2300 MHz, and 2500 MHz bands. The lower frequencies made it possible for carriers to transmit 4G/LTE signals in remote areas. High-frequency signals need a means of relaying (e.g., satellites) to reach receivers that are Beyond Visual Line Of Sight (BVLOS), because such signals tend to be absorbed by mountains, buildings, etc. We are indeed facing an evolution in telecommunications. The next era of such technologies is 6G <cit.>, fig. <ref>. These networks will be significantly faster than previous generations and are also expected to be more diverse. This generation will support applications beyond current mobile use scenarios. With its very low latency, 6G will be much more reliable than the upcoming 5G. This implies that communication systems utilizing 6G may form the basis of the next generation of UTMs.
§.§ Decentralized Architecture
In the previous section it was mentioned that the main drawback of centralized systems is the single point of failure. This flaw can be eliminated by adopting the blockchain concept as the backbone of a UTM. This framework is named the Decentralized (or Distributed) Architecture. Some of the big companies in the aviation industry have funded research on the development of UTMs based entirely on a decentralized architecture. Ch. <ref> analyzed the fundamentals of a blockchain's architecture, where security is one of the key elements, protecting UTMs from attacks like signal spoofing, man-in-the-middle, etc. In order to understand the decentralized concept, there are several entities that participate in this ecosystem, which are:
* UAVs, as autonomous or remotely piloted vehicles
* Ground Control Stations (GCS), which guide the drones, control the airspace and ensure collision avoidance. The GCS receive data from the drones and send them Command and Control (C2) signals
* The blockchain network, which serves as a distributed database for sharing recorded transactions among network nodes
* A cloud server to assist the drones' computations
* Users that use data from the UAVs and GCS for several reasons.
All the aforementioned entities solve the problem of safe and uninterrupted data handling, but not that of data storage. Hence, another database called OrbitDB (see fig. <ref>) is introduced to ensure that all data remain available. The data stored concern the drone, its operator and its mission. In order to accomplish a mission, each drone and user must have a unique Remote ID, or RID (see ch. <ref>). First, the operator must register the UAV to be assigned an ID, which shall be broadcast during the flight, and be added to the authority database. Then, the operator must submit the flight plan in order to request the services of the UTM. The objective of conflict management is to avoid physical collisions between the UAVs and static or mobile obstacles, namely buildings and structures, people, animals, manned aircraft and other UAVs. Studies present conflict management algorithms as processes named strategic deconfliction and tactical deconfliction. Strategic deconfliction (mission scheduling) is done before the UAV flight. To ensure collision avoidance, operators submit their flight intent based on precise information about weather, terrain, and airspace constraints. Despite the submission of a flight plan, the risk of collision is still high. Thus, decision-making is the concept that provides the solution to be adopted and is attributed to the emergency management procedures. When multiple UAV flight plans conflict due to a loss of separation, the solution is to modify the flight plans using a geometric approach. As the principle of the blockchain is that the data are part of all nodes, the newly generated flight plan is instantly submitted as a transaction.
§ COMMUNICATIONS IN A UTM
As discussed in the introduction, pilots and ground control towers currently communicate mainly via UHF or VHF analog voice radios <cit.>, a scheme in which all pilots in a sector must share the same frequency and which does not scale with the forecast growth in air traffic; digital data links between aircraft and ground stations (A2G) or between aircraft (A2A) are therefore preferred, since they allow ground instructions to be acknowledged on board and transmission errors to be corrected. The UTM is based on information exchange between all participating entities, making the communication links crucial for conducting command and collision avoidance functions.
The communication link requirements differ according to the level of autonomy and the distance between the operator and the UAV, for both Visual Line Of Sight (VLOS) and Beyond Visual Line Of Sight (BVLOS) cases. The communication technologies used for UTMs must guarantee real-time communication, a wide coverage area, secure networking and very low latency. The evolution of cellular communications <cit.> may contribute to lowering the latency and bringing UTM communication closer to real time.
§.§ Command and Control (C2) Communication
Command and Control (C2) communication ensures that, after being registered, a UAV is constantly under control even if it has to modify its mission or its trajectory. The controller may be a GCS or a dedicated controller. Hence, the C2 link must run either between the GCS and the UAV or between the controller, the UAV and the ground radio station, and it must ensure that these data are transmitted without any errors. For UAVs there are three flight modes: lateral flight, vertical flight and hovering, with the possibility of changing mode. The information transmitted in a control message differs according to the mode applied. Specific values of key performance indicators (KPIs) are associated with each C2 mode. Several technologies are used to ensure C2 communication. They can vary from terrestrial cellular networks to satellite connectivity. In fact, if the operation is VLOS, the communication is done through a direct radio link. BVLOS communication relies on the use of a satellite connection <cit.> or on more than one connection with redundant C2 links. For decentralized blockchain UTM systems, the GCS receives data from the UAV and then returns the control commands via the cloud, through the Internet. While the UAVs are connected to the Internet, data security is at great risk because it is exposed to malicious attacks. Threats can vary, and some of them are discussed in ch. <ref>, <ref>. Their main purpose is to alter the initial data stream, reducing the stability and security of the UTM.
§.§ Air-to-Air (A2A) Communication
In ch. <ref> it was noted that air traffic is increasing steadily, so that in the next decades air traffic management will enter a totally new era. Specifically, new air traffic services and operational concepts have been defined and shall be supported during all phases of flight by a set of modern digital data links integrated into a single communications network. LDACS is a cell-based aeronautical communications system operating in the frequency band around 1000 MHz and supporting data and voice communications between ground stations and airborne stations <cit.>. LDACS is a wideband terrestrial system with VLOS coverage intended to work alongside VHF Data Link 2 (VDL2) for new and more demanding services. It operates in the L-band (around 1 GHz), which has excellent propagation characteristics. The operational compatibility (spectrum interference) with existing L-band systems, such as the Distance Measurement Equipment (DME) navigation aid, the Global Navigation Satellite System (GNSS), the military LINK16 and mobile telephony, remains an important subject, and desirable technology features were identified that could make LDACS spectrally compatible. An LDACS ground station is located in the center of each cell and communicates with the LDACS airborne stations located in the aircraft flying within its cell.
Using a frequency-division duplex scheme, the LDACS ground station transmits in the forward link (FL) of the cell at the same time as the airborne stations transmit in its reverse link (RL). In 2019, the first experimental use of LDACS took place <cit.>. In this experiment, four ground stations (GS) were set up and an aircraft carrying an airborne station was used for the test flights. During the LDACS experiment, six flights took place and produced numerous data, addressing the communication, navigation and surveillance capabilities of LDACS. Furthermore, the implementation includes a Ground Control Station (GCS). The air-to-air (A2A) communication within the L-band Digital Aeronautical Communications System (LDACS) is currently in the initial stages of its development <cit.>. Given that the LDACS A2A mode must be able to operate without any ground or satellite support, the data link must provide means for the aircraft to establish and organize an independent ad-hoc communications network, which imposes a great challenge on the design of the data link and especially on its medium-access control. The Air-to-Air (A2A) links concern both UAV-to-manned-aircraft and UAV-to-UAV (U2U) communications. The UAV and the manned aircraft must exchange positional information to avoid collisions, especially when the UAV is inside controlled airspace or around airports. Radio technology is used to allow two-way radio communication with Air Traffic Control (ATC). U2U communication is especially used in the decentralized UTM to exchange the essential data necessary for local decision-making in collision avoidance. Besides cellular communication, which is proposed for use in the decentralized architectures <cit.>, Wi-Fi and Bluetooth protocols may be used for conflict management at short range.
§ DECISION MAKING IN UTM
While executing its flight plan, a UAV may be exposed to external hazards or face an internal malfunction. The level of risk associated with the UAV can vary, resulting in varying degrees of impact on its immediate environment. In some cases, this could lead to incidents or accidents and, depending on the distance from an airport, an accident may be potentially deadly. Therefore, effective decision-making within UTM systems is crucial, especially in emergency situations. In general, an emergency situation may occur for several reasons, such as:
* Technical failures, such as the loss of A2G communications, Global Positioning System (GPS) malfunctions, or power, camera or engine failures
* Human errors, involving control or wrong decision-making
* Corrupted data due to cyber attacks
* Infrastructure problems, such as radio control failures
* Sudden weather condition changes
If one of the above events occurs, the UAV has the option either to halt the mission or to alter its flight path. Onboard decision-making procedures are included to minimize air-to-ground dependencies. A Finite State Machine (FSM) is implemented in order to improve the decision-making response time (a minimal sketch of such an FSM is given below). On the other hand, there is another rationale that predicts risk factors and bases the decision-making on these calculations. Such factors may be derived from the reliability of the UAV's performance. The decision is then triggered by a combination of the risk-factor prediction, the decision generator and the trajectory generator.
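The sketch below illustrates what an on-board FSM for emergency handling could look like, in the spirit described above. The states, events and transition table are illustrative assumptions for this document, not a specification taken from the cited works.

```python
# Hypothetical on-board emergency-handling FSM: a transition table maps
# (current state, event) pairs to the next state; unknown events are ignored.

TRANSITIONS = {
    ("NOMINAL", "c2_link_lost"):        "RETURN_TO_HOME",
    ("NOMINAL", "gps_degraded"):        "HOLD_POSITION",
    ("NOMINAL", "low_battery"):         "LAND_IMMEDIATELY",
    ("HOLD_POSITION", "gps_recovered"): "NOMINAL",
    ("HOLD_POSITION", "low_battery"):   "LAND_IMMEDIATELY",
    ("RETURN_TO_HOME", "c2_link_recovered"): "NOMINAL",
}

def next_state(state: str, event: str) -> str:
    """Look up the transition; events with no entry leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "NOMINAL"
for event in ["gps_degraded", "low_battery"]:
    state = next_state(state, event)
    print(f"event={event!r} -> state={state}")
# event='gps_degraded' -> state=HOLD_POSITION
# event='low_battery' -> state=LAND_IMMEDIATELY
```

Because the table lookup is constant-time and requires no link to the ground, such a structure keeps the reaction time short even when A2G communication is disrupted, which is the motivation given above for on-board decision-making.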
These three functions can be implemented entirely on board the vehicle, allowing the UAV to be independent with respect to communications. In fact, the onboard decision-making process effectively reduces the need for ongoing data transmission between the UTM and the UAVs, a factor of considerable significance, particularly in scenarios involving communication disruptions. However, it requires onboard computational capacity. An alternative architecture involves conducting the risk prediction remotely while maintaining the other two functions on board. It allows resource sharing and a reduced size and weight on board, but the decision-making depends on a communication link that must be robust and uninterrupted. A further variant is based on remote risk prediction and path generation with onboard decision-making. In the final architecture, all the functions are hosted remotely; here, the decision-making is entirely dependent on the network connection.
§.§ Centralized vs Decentralized UTM Decision-Making
According to <cit.>, the messages that a UAV exchanges are categorised as:
* Control messages: sent from the operators to the UAV.
* Telemetry messages: include the Global Navigation Satellite System (GNSS) position, status, and altitude information.
* Awareness messages: contain information about the current position of the UAV and its future trajectory.
A UAV is considered autonomous when it is able to accomplish its mission without any input from the operator. Thus, the more autonomous a vehicle is, the fewer telemetry and awareness signals it needs to transmit. A high level of autonomy means a capacity for self-decision-making and the possibility of communication between UAVs in order to avoid conflicts. For military UAVs, the automation levels must be linked to detection and tracking capabilities and a self tactical deconfliction aptitude. As mentioned in the previous paragraph, this affects the vehicle's mass. In the centralized architecture, if a conflict is detected, the information flow from the UTM to the operator contains the updated trajectories, and the operator in turn transmits the new commands to the UAV. All these data are transmitted through the same channel shared with the other aerial vehicles. The latter use this channel to execute their flight plans, and consequently this increases the latency. Moreover, apart from receiving all these data, the centralized entity has to process and send updated data not only to the vehicle in an emergency but also to the other vehicles, which leads to higher demands on quality of service, resources, etc. In this mode of operation, the decisions made are not representative of the real situation of the system: due to the latency and the processing time, they refer to data received some milliseconds earlier and do not represent the current status. The case is different in a distributed decision-making (decentralized) system. The awareness data is no longer transmitted to the UTM but is exchanged between the UAVs. If a conflict is detected, the concerned UAV makes a decision on its own or reacts by relying on a robust and agreed mechanism. Conflict resolution is a process based on the communication between the UAVs more than on the A2G communication. The direct communication between the UAVs and the local decision-making reduce the decision latency. In this decentralized architecture, the degree of collaboration between UAVs implies the existence of three categories:
* Decentralized with uniform rules. Each UAV resolves conflicts by executing a set of predefined rules and protocols.
The communication between the UAVs is only used to share the necessary data.
* Decentralized with coordination. The UAVs coordinate with one another in a preset manner.
* Decentralized with mixed rules. Depending on the area in which they fly, UAVs follow a certain set of rules.
§ PERFORMANCE EVALUATION OF A UTM
A UTM is a complex ecosystem consisting of subsystems, where the total performance is the sum of the performance of each part. The communication-navigation and decision-making policies are major subsystems that can be evaluated separately. NASA has proposed qualitative and quantitative metrics that can evaluate this performance. These metrics are values that assess the UTM in emergency cases, conflicts, and communication loss, in order to study the system's reaction in such situations. The performance is measured based on resource usage (memory usage, CPU consumption) and network latency, which is the total time taken for the execution of a transaction in the blockchain network. In order to apply all these metrics, we must build such a system and, furthermore, compare the results with other UTMs. Such a setup is difficult to monitor. Hence, it is essential to apply simulations to a UTM, where the input may be critical situations such as those mentioned in ch. <ref>. In <cit.>, a simulation framework based on the Robot Operating System (ROS) and the Gazebo simulator is presented. This simulator uses the architecture of U-space. U-space is a set of specific services and procedures designed to ensure safe and efficient access to airspace for a large number of drones, based on high levels of digitalisation and automation. Using ROS, the UTM is modeled as a set of independent nodes. The UTM manager node reflects the overall view of the system, and the DB manager is the node linked to all databases and responsible for the basic write and read operations. The main pre-flight and in-flight services of U-space, namely flight plan management, tracking, monitoring, emergency management, and the conflict solver, are modeled as nodes too. UTSim <cit.> is another simulator able to evaluate the performance of a UTM. Its primary advantage is that it can use 3D models and represent the nodes of the UTM in a richer way. Another important aspect is scalability and the collision scenario, in which the number of UAVs allowed to fly simultaneously was varied from 20 to 1500. First, a system was simulated in which the drones fly without a deconfliction algorithm; then, by inserting several deconfliction algorithms into the system, the authors concluded that the number of collisions is linearly proportional to the number of UAVs flying. The deconfliction was tested with Barfield's algorithm <cit.>, an existing collision avoidance algorithm based on a geometric method. Additionally, it is crucial to test the simulator with more deconfliction algorithms and to introduce deconfliction into the integrated airspace shared between manned and unmanned aircraft.
§ UTM OPEN ISSUES
Nowadays, UTMs are under heavy development, and a lot of work must be done before they are ready to manage the airspace. We are living in an evolutionary era in which the importance of the science of communication is beyond doubt. Techniques that used to be fundamental in communications are about to be considered obsolete. On the other hand, new technologies are about to remove the human factor, which some people consider controversial.
§.§ Interoperability
Interoperability is an internal property whereby all the different parts of the UTM ecosystem must integrate with each other in harmony. If an entity does not set up a communication channel with another one, then there is no smooth interoperability. In the centralized architecture, interoperability is essentially given, due to the reduced number of parts that must exchange data. In distributed systems, on the contrary, the entities differ. Hence, compatible protocols, data formats and standards must be introduced for flawless communication. In other words, standardisation must be introduced not only in every aspect of a UTM but also among all UTMs. Fortunately, there are several agencies and authorities that manage standards (see ch. <ref>) and that are highly aware of the next era in aviation and airspace management.
§.§ AI Issues
The FAA and EASA have initiated discussions around Artificial Intelligence (AI). Generally speaking, everyone has a different definition of what AI is. AI allows machines to learn from experience and adjust the way they respond based on the new data they collect. In other words, AI goes through a learning procedure in order to be able to act based on its "thinking". Traditional aviation software is certified to be deterministic via guidelines such as DO-178C (avionics software) <cit.> and DO-254 (avionics hardware) <cit.>. AI essentially allows the same software inputs to yield a different outcome as the software "learns" over time. The main concern around implementing AI in transportation services is safety. Many entities, including the FAA and the Department of Defense, look at AI through a "guilty until proven innocent" lens. One fundamental aspect of safety-critical systems is determinism, which almost opposes AI: the same inputs must provide the same outputs, every time. This is where DO-178C comes into play. DO-178C is a set of guidelines covering 71 objectives to ensure that software will perform safely in an airborne environment. The guidelines categorize software on five levels of reliability, ranging from "No Safety Effect" to "Catastrophic". Artificial intelligence will necessitate high levels of automation and act as an enabler with respect to the integration of unmanned and manned aviation; it will ultimately enable safe operations with high numbers of drones utilising the same airspace, and more specifically with respect to detect-and-avoid capability. AI is going to be heavily developed and utilised by organisations that are certified as U-space service providers (USSPs) when providing services to Unmanned Aerial System (UAS) operators. The equipment utilised by UAS operators will to some extent already benefit from AI, but the level of automation is currently constrained by regulation. A legal framework must exist, as AI will not only have a significant impact on existing laws but will also require a framework that safeguards safety and the fundamental rights of citizens and businesses with respect to AI. The EU has published a proposed law, namely the Artificial Intelligence Act, as permitted under Article 114 of the Treaty on the Functioning of the European Union (TFEU) <cit.>. In a UTM, AI focuses on collision avoidance: detecting and preventing potential conflicts between UAVs and other objects such as buildings, aircraft, or other UAVs. An issue to consider is how much time AI algorithms need to evaluate and learn from new data.
As a matter of fact, artificial intelligence systems make decisions based on historical data. Thus, AI must initially learn in a simulation mode. Since the concept of simulating a UTM is itself at an early stage, implementing AI at this point in time affects the evolution of the concept itself. Furthermore, research has shown that artificial intelligence (AI) systems often include bias against minority subgroups <cit.>. How might this drawback affect decision-making in a UTM ecosystem? Given this concern, regulators must be aware of these algorithms and, more precisely, must set the roadmap. Indeed, civil and military drones may share the same skies, but, considering their missions, military drones seem to be more rock solid. This appears to be good but also seems to establish military drones as the ultimate vehicles flying in the skies. In fact, most AI-based systems are perceived as a black box that allows powerful predictions but cannot be directly interpreted, due to the difficulties in determining how and why it makes certain decisions. Actually, the lack of transparency and trust in modern AI systems poses important ethical issues, as highlighted in the "ethical guidelines" <cit.>. In <cit.>, a way to mitigate such issues by implementing ethical rules is proposed. This approach, based on quantitative ethics, determines which action maximizes benefit and minimizes harm. Its objective is to make it possible for an AI algorithm to take the right decisions, particularly when it encounters an ethical dilemma. In addition to the aforementioned internal flaws, from a more technical point of view, machine learning and AI may suffer from several kinds of external factors, like attacks on data. <cit.> mentions that blockchain and AI (two of the fundamental parts of the decentralized UTM architecture) have recently been found vulnerable to several cyberattacks, and a number of security issues have arisen, especially when it comes to processing sensitive data. AI systems are also vulnerable to adversarial attacks, which have become an inherent weakness of Machine Learning and Deep Learning models.
§.§ Data Security Against Classical Computers
The information a UAV transmits includes remote control commands, telemetry information, and mission sensor information. A remote control command is sent from the GCS to the targeted UAV. Its main function is to control the UAV flight attitude, guide it to the designated position and control the operation of the mission equipment. Telemetry information includes aircraft attitude, flight parameters, equipment status and other related information that the UAV sends to the GCS. The remote control and telemetry data sizes are very small and the transmission rate is not high, but they require real-time, reliable and secure transmission <cit.>. Mission sensor information refers to the information obtained by the UAV mission equipment, such as cameras, infrared scanners, multi-spectral sensors, synthetic aperture radar, etc. The data volume of each mission sensor node is related to factors such as sensor type, image format size, resolution, and data compression technique.
§.§.§ Miscellaneous Attacks
Communication security is crucial for the success of Unmanned Aerial Vehicles (UAVs). With the increasing use of UAVs in military and civilian applications, they often carry sensitive information that eavesdroppers would like to retrieve for various reasons.
While UAVs consist of various hardware and software modules, potential security vulnerabilities may also exist in those modules. For example, by launching a GPS spoofing attack or a WiFi attack, eavesdroppers can capture the targeted UAV and access the sought-after information. Regardless of the architecture on which a UTM has been built, the technologies used for communication are fixed, and each of them has its vulnerabilities against a number of attacks. In <cit.> there is a reference to such attacks against unmanned vehicles in general. Thus, whether a UTM is based on a centralized or a decentralized architecture, an eavesdropper has plenty of tools at hand to attack a single UAV or its communication channel, with the ability to affect further UTM nodes. More specifically, in the case of a decentralized system, the complexity of the blockchain technology implementation allows a great number of attacks to be applied <cit.>. Indicatively, some of them are:
* Liveness Attack: This attack aims to delay the transaction confirmation time. The attacker tries to gain a potential advantage over honest players in building their private chain. Next is the transaction denial phase, in which the attacker attempts to delay the genuine block that contains the transaction; when the attacker decides the delay is no longer convincing, they attempt to decrease the rate at which the chain of transactions grows.
* Double Spending Attacks: This kind of harm arises when one successful transaction is duplicated with the same funds. It represents a potential flaw in digital cash, as the same digital token can be spent twice when such an attack occurs. The conditions allow modified blocks to enter the blockchain; if this happens, the person that initiated the alteration can reclaim the spent funds.
* 51% Vulnerability Attack: This is an attack on a cryptocurrency blockchain by a group of miners who control more than 50% of the network's mining hash rate. Controlling the majority of the network's mining power theoretically gives the controlling parties the power to alter the blockchain.
* Sybil Attack: An entity on a peer-to-peer network is a piece of software that has access to local resources. An entity advertises itself on the peer-to-peer network by presenting an identity. More than one identity can correspond to a single entity; in other words, the mapping of identities to entities is many-to-one. Entities in peer-to-peer networks use multiple identities for purposes of redundancy, resource sharing, reliability and integrity. In peer-to-peer networks, the identity is used as an abstraction so that a remote entity can be aware of identities without necessarily knowing the correspondence of identities to local entities. The Sybil attack in computer security is an attack wherein a reputation system is subverted by creating multiple identities. A reputation system's vulnerability to a Sybil attack depends on how cheaply identities can be generated, the degree to which the reputation system accepts inputs from entities that do not have a chain of trust linking them to a trusted entity, and whether the reputation system treats all entities identically. In 2018, a successful Sybil attack on Google's autonomous car led the car to show an incorrect GPS location and caused the vehicle to stop in the middle of the road <cit.>.
In this scenario of attack, several fake nodes were successfully added to the network and sent misleading location and traffic-condition information to the Google car by exploiting the flaws in the routing table and the non-encrypted messages of Google cars. For the above indicative attacks against a blockchain, a healthcare scheme that aims to resist each of them is proposed in <cit.>. §.§.§ Side Channel Attacks As mentioned before, besides software attacks, a UAV or even the whole UTM may suffer from hardware attacks, such as side-channel attacks. In general, this kind of attack <cit.> belongs to a class of physical attacks in which an eavesdropper tries to exploit physical information leakages such as timing information, power consumption, or electromagnetic radiation. Since they are passive and can generally be performed using relatively cheap equipment, they are a significant threat to the security of most cryptographic hardware devices. Such devices may be personal computers, small embedded devices, smart cards, or Radio Frequency Identification Devices (RFIDs). Their introduction in a continuously growing spectrum of applications has turned physical security and the side-channel issue into a real concern. Side-channel attacks are closely related to the existence of physically observable phenomena caused by the execution of computing tasks in present-day microelectronic devices. For example, microprocessors consume time and power to perform their assigned tasks. They also radiate an electromagnetic field, dissipate heat, and even make some electromagnetic noise. A significant part of digital circuits is based on complementary metal-oxide semiconductors (CMOS). These components are used for analog circuits such as image sensors (CMOS sensors), data converters, RF circuits (RF CMOS), and highly integrated transceivers for many types of communication. Important characteristics of CMOS devices are high noise immunity and low static power consumption. Since one transistor of the metal–oxide–semiconductor field-effect transistor (MOSFET) pair is always off, the series combination draws significant power only momentarily during switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, like NMOS logic or transistor–transistor logic (TTL), which normally have some standing current even when not changing state. These characteristics allow CMOS to integrate a high density of logic functions on a chip. It was primarily for this reason that CMOS became the most widely used technology. In a side-channel attack, though, the eavesdropper is able to monitor the power consumed during decryption and signature generation. The eavesdropper can also measure the time taken by a cryptographic operation and analyze how a cryptographic device behaves when certain errors are encountered (fig. <ref>). In the survey <cit.>, an effort has been made to classify IoT security attacks. One class is the side-channel attack, which is sub-classified into several types: * Simple Attacks: The attacker can directly guess the secret key using side-channel information. A simple analysis can help the attacker exploit the relationship between the executed operations and the side-channel information. * Differential Attacks: This attack exploits the relationship between side-channel information and the processed data. 
* Power Analysis Attacks: * Simple Power Analysis (SPA): SPA is a method that directly interprets the power consumption collected during encryption and decryption operations. It is based on looking at the visual representation of the power trace. SPA is even able to reveal information about the key used for encryption. Furthermore, SPA can be used to retrieve information about the cryptographic implementation, i.e. how many rounds are used during encryption/decryption. It is the simplest form of power analysis. * Differential Power Analysis (DPA): This is an attack method which is much more powerful than SPA. In addition to the large-scale power variations found with SPA, DPA searches for correlations between different traces. There are several different variants of DPA. * Correlation Power Analysis (CPA): CPA is a form of DPA. It differs slightly from the difference-of-means attack in the way it searches for correlations. CPA uses a power model, which is used to predict the power consumption given a specific plaintext and key combination. CPA attacks have many models for expressing this; the two most common power models are the Hamming-weight and the Hamming-distance models. §.§ Data Security Against Quantum Computers As mentioned in <ref>, the blockchain architecture bases its function on strong cryptographic schemes when hashing new data. Considering the new era of quantum computers, the flawless operation of such a system is jeopardised. Thus, it is highly recommended that decentralized UTM architectures encapsulate quantum-resistant algorithms in order to secure not only their data but also human lives. In the popular RSA system, the public key is the product n=pq of two secret prime numbers p and q. The security of RSA relies critically on the difficulty of finding the factors p, q of n. However, in 1994, Shor introduced a fast quantum algorithm <cit.> to find the prime factorization of any positive integer n. In general, consider the order r of some integer x with x<n, i.e. the smallest positive integer r such that x^r ≡ 1 (mod n). Suppose further that r is even; this is necessary in order that x^r/2 be an integer power of x. Shor's algorithm, which is designed to take advantage of the inherent potential of quantum computers, in contrast to classical ones, exploits a factorization method that differs from the trivial one for a large key modulus n=pq. Given r, one computes the greatest common divisor of x^r/2-1 and n, denoted gcd(x^r/2-1, n). The Euclidean algorithm takes polynomial time to compute the gcd of two numbers. Since (x^r/2-1)(x^r/2+1)=x^r-1 ≡ 0 (mod n), the quantities gcd(x^r/2-1, n) and gcd(x^r/2+1, n) will yield two factors of n. This procedure fails only if r is odd, in which case r/2 is not an integer, or if x^r/2≡ -1 (mod n). The probability that a randomly selected x < n=pq and coprime to n has an even order r satisfying the aforementioned conditions is at least 1-1/2^(k-1), where k is the number of distinct odd prime factors of n; the probability of failure is thus at most 1/2. Calculating the gcd of a pair of large numbers on classical computers is a straightforward procedure requiring negligible computing time. Therefore, the feasibility of factoring a large n=pq via the procedure described depends primarily on the feasibility of determining the order r of x (mod n) for an arbitrarily selected x. 
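To make the classical post-processing step just described concrete, here is a minimal sketch in Python (our own illustration; the helper names and the toy values n=15, x=7 are ours and far too small to be cryptographically meaningful). It applies the gcd reduction once the order r is known, with the order found by brute force in place of the quantum subroutine.

    import math

    def order(x: int, n: int) -> int:
        # Brute-force order finding; this is the step a quantum computer would replace.
        r, y = 1, x % n
        while y != 1:
            y = (y * x) % n
            r += 1
        return r

    def factors_from_order(n: int, x: int, r: int):
        # Given the order r of x modulo n, try to recover nontrivial factors of n.
        if r % 2 != 0:
            return None              # r odd: the reduction does not apply
        y = pow(x, r // 2, n)
        if y == n - 1:
            return None              # x^(r/2) = -1 (mod n): the other failure case
        return math.gcd(y - 1, n), math.gcd(y + 1, n)

    n, x = 15, 7                     # toy example: n = 3 * 5, x coprime to n
    r = order(x, n)                  # r = 4 here
    print(factors_from_order(n, x, r))   # -> (3, 5)
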
With classical computers, this determination requires solving the discrete log problem. Assume that large quantum computers are built and that they function as smoothly as one could possibly hope. Shor's algorithm and its generalizations will then completely break asymmetric-key algorithms like RSA, DSA, ECDSA and many other popular cryptographic systems. For example, a quantum computer will find an RSA user's secret key at essentially the same speed at which the user can apply the key. That algorithm is not the only application of quantum computers. A quantum searching algorithm, introduced by Grover in <cit.>, finds a 256-bit AES key in only about 2^128 quantum operations given a few known plaintexts encrypted under that key; the search cost drops from N operations to √(N). Users who want to push the attacker's cost significantly higher than 2^128 (the original motivation for 256-bit AES) will need a cipher with significantly more than a 256-bit key. §.§.§ Quantum Resistance Algorithms While the RSA cryptographic system is considered reliable and robust, the upcoming era of quantum computers provokes skepticism and anxiety concerning the security of data, including digital signatures. Hence, the National Institute of Standards and Technology (NIST) has initiated a process to develop and standardize one or more additional public-key cryptographic algorithms, initiating the era of post-quantum cryptography (PQC). As a first step in this process, NIST requested public comment on draft minimum acceptability requirements, submission requirements, and evaluation criteria for candidate algorithms. Comments received were posted on its website, along with a summary of the changes made as a result of these comments. NIST then announced that nominations for post-quantum candidate algorithms could be submitted up until the final deadline of November 30, 2017. The purpose of the submissions was for NIST to choose algorithms for standardization in both signatures and public-key cryptography. During the third round, the finalists were the first seven candidates, and the other eight algorithms were named "alternates". The finalists continued to be reviewed for consideration as a standard at the conclusion of the third round. Several of the alternate candidates had worse performance than the finalists but might be selected for standardization based on high confidence in their security. Other candidates had acceptable performance but required additional analysis or other work to inspire sufficient confidence in their security or security rationale. In addition, some alternates were selected based on NIST's desire for a broader range of hardness assumptions in future post-quantum security standards, their suitability for targeted use cases, or their potential for further improvement. NIST has completed the third round of the Post-Quantum Cryptography (PQC) standardization process, which selects public-key cryptographic algorithms to protect information through the advent of quantum computers. A total of four candidate algorithms have been selected for standardization, and four additional algorithms will continue into the fourth round. NIST recommends two primary algorithms to be implemented for most use cases: CRYSTALS-KYBER for key establishment and CRYSTALS-Dilithium for digital signatures <cit.>. In addition, the signature schemes FALCON and SPHINCS+ will also be standardized. 
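As an illustration of how such a standardized key-establishment primitive is used, the sketch below performs one encapsulation/decapsulation round trip. It assumes the open-source liboqs-python bindings (package oqs) and the mechanism name "Kyber512"; both the exact API and the mechanism identifier vary between library versions (newer releases expose the scheme as ML-KEM), so this is a hedged, non-authoritative example rather than a prescribed implementation.

    import oqs

    kem_alg = "Kyber512"   # assumed identifier; may be "ML-KEM-512" in newer liboqs releases

    # The receiver (e.g., a UTM ground node) creates a key pair.
    with oqs.KeyEncapsulation(kem_alg) as receiver:
        public_key = receiver.generate_keypair()

        # The sender (e.g., a UAV) encapsulates a fresh shared secret under that public key.
        with oqs.KeyEncapsulation(kem_alg) as sender:
            ciphertext, secret_at_sender = sender.encap_secret(public_key)

        # The receiver recovers the same shared secret from the ciphertext.
        secret_at_receiver = receiver.decap_secret(ciphertext)

    assert secret_at_sender == secret_at_receiver
    # The agreed secret can then key a symmetric cipher for telemetry and command links.
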
In a decentralized scheme, the computations for decision making are accomplished in a remote node, which is a cloud server. Thus, the UAVs have the potential to reduce their weight. This makes it possible to consider implementing the cryptographic system as embedded hardware. Such an implementation may increase the weight of a UAV, but it guarantees that the corresponding module will always function without flaws, latency, or the need for updates. The McEliece cryptosystem is an asymmetric encryption algorithm developed in 1978 by Robert McEliece. It was the first such scheme to use randomization in the encryption process. The algorithm is based on the hardness of decoding a general linear code (which is known to be NP-hard). For the private key, an error-correcting code is selected for which an efficient decoding algorithm is known and which is able to correct t errors. McEliece with Goppa codes has resisted cryptanalysis so far. The most effective attacks known use information-set decoding algorithms <cit.>. The McEliece cryptosystem <cit.> is one of the finalists in NIST's contest to standardise cryptosystems for the post-quantum era. There have been several efforts to implement this system in hardware <cit.> in order to make it faster and more reliable. Concerning on-board functionalities, the more autonomous an aerial vehicle is, the heavier it is. In a centralized architecture, the main entity carries out most of the management of the UTM. This implies that the UAVs have fewer on-board procedures to execute. Thus, more mass is available to allow miscellaneous functions to be added. A hardware version of the McEliece cryptosystem therefore seems feasible to install on board. This enhances the capability of the UAV to defend against cyber attacks. Furthermore, while the distributed concept highlights the capability to keep the safety of the data at a high level, the centralized architectures, which suffer in this respect, can utilize hardware cryptosystems. § CONCLUSION Telecommunication science has evolved in a remarkable way. Although analog communication performed well for many years, we now witness the trend toward digital communication. Key factors such as confidentiality and integrity, among many others, are part of the reliability function. Another factor that stands out in our modern, fast-paced era is speed; safety is a matter of speed. The more time you have to decide your move, the more robust that move will be; the more data you have to process, the more robust that move will be. Consequently, decision-making relying on AI needs data in a short time in order to extend its learning capability. Artificial intelligence is another entity that must be given considerable attention. Several ethical issues arise. These issues have to do with proprietary editions of AI algorithms: if these are not open source, or if they have been trained on biased data, then their use is considered ineffective. As we tend to use unmanned vehicles more and more, we create a congested airspace. Simultaneously, the aviation industry has grown, and the agencies must focus on merging the utilization of the airspace. Manned and unmanned vehicles must authenticate themselves when they approach airports, regardless of their architecture. When a UTM has been built on a decentralized architecture, different kinds of communication are applied. Even if such architectures are assumed to be more reliable, because they eliminate the single point of failure, they are exposed to different kinds of attacks. 
The next era of computational capacity may force entities to reconsider what constitutes safe communication. While quantum computers are still at an initial stage, NIST has managed to standardise algorithms that ensure reliable communication and authentication. 
http://arxiv.org/abs/2408.11789v1
20240821172217
Fractional Quantum Hall phases of graphene beyond ultra-short range intervalley-anisotropic interaction
[ "Oleg Grigorev", "Ankur Das" ]
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.str-el" ]
Department of Molecular Chemistry and Material Science, Weizmann Institute of Science, Rehovot 7610001, Israel Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel ankur@labs.iisertirupati.ac.in Department of Physics, Indian Institute of Science Education and Research (IISER) Tirupati, Tirupati 517619, India Department of Condensed Matter Physics, Weizmann Institute of Science, Rehovot 7610001, Israel § ABSTRACT Recent experimental and theoretical development in the Quantum Hall effect in monolayer graphene showed that the previous model of the valley-anisotropy interaction is incomplete, as it was assumed to be ultra-short range (USR). In this work, we use exact diagonalization to go beyond the ultra-short range to find the different phases for ν=2/3. We model the interaction as Yukawa so that we can control the range as a proof of concept. Even in this simple setting, we discovered how dropping the USR condition shifts the transition borders in favour of certain phases, leads to a new bond-ordered phase appearing, and breaks the ferromagnetic phase in two competing states as a result of lifting the USR-driven degeneracy. Fractional Quantum Hall phases of graphene beyond ultra-short range intervalley-anisotropic interaction Ankur Das August 26, 2024 ======================================================================================================= § INTRODUCTION Quantum Hall effect is among the most cherished topics in condensed matter physics and has been thoroughly researched since its inception more than 40 years ago <cit.>. The reason for this is that it provides a viable platform for the study of strongly correlated electron systems, with Landau levels being naturally flat. For each Landau level, there is more than one flavour, e.g. spin, and only a few of them can be filled. Then, the ground state will be chosen by breaking these flavour symmetry due to interactions <cit.>. It results in the phenomenon of quantum Hall ferromagnetism <cit.>. However, this is not restricted to the integer quantum Hall but is also applicable to the Fractional quantum Hall states. Even in fractional quantum Hall states, some of these internal symmetries will be broken <cit.>. Observed as early as twenty years ago in multilayer heterostructures <cit.>, these effects gained prominence with the rise of graphene and other sister materials, arguably because of sample preparation in graphene being relatively simpler and more independent experimental groups being now able to contribute to the expanding body of research. Modelled as Dirac electrons coming in two flavours (valley) with two spin polarizations, low-energy electrons in graphene provide a plethora of ground states at different fillings, even in monolayers. A simple theoretical model of ultra short-range valley symmetry breaking electron-electron interactions was developed in Refs. AliceFisher,Kharitonov. Building on it, a number of emerging phases were theoretically predicted at charge neutrality in MLG <cit.>. There have been a number of experiments that point towards these phases <cit.> mainly targetted at the charge neutrality i.e. ν=0. The canted antiferromagnet phase was observed in tilted field experiments <cit.> showing the transition to Ferromagnet. The transition to the ferromagnetic phase was reached in magnon transport experiments <cit.>. Bond-ordered Kékulé distortion state was observed by STS methods <cit.>, and further investigation led to discovering charge density wave state <cit.>. 
However, more puzzling results were uncovered in STS as well, with an apparent CDW-KD coexistent phase not predicted in previous numerical studies <cit.>. Therefore, assumptions leading to the initial model were revisited, with claims being made that the ultra-short range (USR) approximation is too stringent. Some initial progress indicating the existence of new states in MLG at both charge neutrality and numerous fractional fillings was made in <cit.> using Hartree-Fock mean-field theory. These do not, therefore, make full use of the concrete form of non-USR interactions. One reason for that is that the exact expressions for this function remain an open question, though there were some recent developments in determining it in bilayer graphene <cit.>. We argue, however, that with the use of a model interaction, satisfying the requirements drawn of physical intuition, combined with exact diagonalization in a finite system, a number of conclusions can be drawn that will provide us with a number of qualitatively correct results which remained undiscovered. We chose Yukawa-like potential, naturally providing us with a range parameter and reducing to a USR interaction in a limit for such a model interaction. The structure is as follows. In <ref>, we provide the necessary background on the model Hamiltonians for interacting electrons in graphene. In <ref>, we provide the results of exact diagonalization. We show that a new bond-ordered phase occurs in the case of both charge neutrality and fractional filling between FM and CDW states as the USR condition is dropped; in the case of fractional filling, we also show how the finite range lifts the added degeneracy in FM sector, leading to two competing ferromagnetic states, and shifts the transition of AF to KD state in favour of the former. In <ref>, we conclude by summing up the results and open questions. In appendices, we provide a concise calculation of all the elements needed for our exact diagonalization scheme, as well as some additional computational results. § GRAPHENE WITH NONZERO RANGE VALLEY SYMMETRY BREAKING TERMS To compute the energy levels of electrons in monolayer graphene in a strong perpendicular magnetic field, we use the continuous model (Ref <cit.>); we will discuss the adjustments we make to this model to take lattice-scale physics into account. The most simplistic picture is that of all the electron-electron interactions being ignored, as well as Zeeman and substrate-induced potentials (valley Zeeman) effects. Under such a condition, the emerging band structure can be approximated by Landau levels for Dirac electrons (E_n=sgn(n) √(2 |n|)ħ v_F/l_B∼sgn(n) √(|n| B)) <cit.>. Each level has an additional fourfold degeneracy due to electrons having different spin polarizations and valley-spin “flavours"; in other words, all the states belonging to one LL lie in the same invariant subspace of SU(4) spin-valley rotations. If we do take electron-electron interactions into account, a natural division into magnetic length scale SU(4) symmetric part (“long-range" Coulomb) and lattice-scale valley symmetry breaking terms naturally emerges <cit.>, as in a realistic magnetic field we have l_B/a∼10-100 <cit.>. What one would also expect is the cyclotron energy exceeding Coulomb energy: e^2/ϵ l_B≪ħ v_F/l_B. However, the “graphene fine structure" constant E_C/E_m=e^2/ϵħ v_F was shown to be in the range of 0.5-2.2, depending on ϵ in a substrate of choice <cit.>. 
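A quick numerical check of this ratio (a back-of-the-envelope sketch; the Fermi velocity v_F ≈ 10^6 m/s and the two dielectric constants below are our illustrative assumptions, not values quoted from the references) shows why the interaction cannot be treated as a small correction.

    from scipy.constants import e, hbar, epsilon_0, pi

    v_F = 1.0e6                                           # graphene Fermi velocity in m/s (assumed)
    alpha_g = e**2 / (4 * pi * epsilon_0 * hbar * v_F)    # E_C / E_m for epsilon = 1

    for eps in (1.0, 4.5):                                # suspended sample vs. a typical substrate (assumed)
        print(f"epsilon = {eps:.1f}: E_C / E_m = {alpha_g / eps:.2f}")
    # prints roughly 2.2 and 0.5: the Coulomb scale is comparable to the cyclotron scale
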
Therefore, instead of being simplified via restriction to the 0LL, the Hamiltonian requires a tedious task of accounting for LL mixing by renormalizing electron interactions in the 0LL. The first scheme was suggested by Kharitonov in Ref <cit.>. Building on the previous works (Refs <cit.>) and introducing several assumptions, namely, treating higher LLs as a continuum and explicitly demanding all the symmetry-breaking terms to vanish at the magnetic scale, he arrived at the Hamiltonian H = H_C+H_v+H_Z, H_C = ∑_i<je^2/ϵ |r_ij|, H_v = ∑_i<j(g_z τ^i_z τ^j_z + g_⊥(τ^i_xτ^j_x + τ^i_y τ^j_y )) δ^(2)(r_ij), H_Z = - E_Z∑_iσ^i_z Here the τ^a and σ^a stand for, respectively, spin and valley Pauli matrices. The degeneracy of the SU(4) multiplet of pure Coulomb ground states is weakly lifted by valley SU(2) symmetry-breaking terms. Therefore, despite symmetry breaking couplings g_⊥/2π l^2_B,g_z/2π l^2_B are far smaller than the Coulomb energy e^2/ϵ l_B, they choose the ground state of Kharitonov's Hamiltonian. The phase diagrams of MLG with this model Hamiltonian were thus obtained in <cit.> for charge neutrality case as well as for a number of fractional filling factors. However, it was argued that the picture of ultra-short range intervalley interactions may be too simplistic <cit.> to explain the recent experimental findings <cit.>; another renormalization group calculation performed in Ref <cit.> showed that the symmetry-breaking terms remain nonzero at magnetic-length scale. Said terms can now be expressed in the following form H_v=∑_i<j(g_z(r_ij) τ^i_z τ^j_z + g_⊥(r_ij)(τ^i_xτ^j_x + τ^i_y τ^j_y )) Symmetry-breaking terms now depend on an unknown function – pair potential. It proved difficult to compute using RG techniques <cit.>, and without an exact expression for it we seem left with an infinite number of parameters governing the behaviour of MLG Hamiltonian (some progress has been made in bilayer graphene with experimental support <cit.>). There are several ways to reduce this uncertainty and take only essential parts of potentials into consideration. §.§ Hartree-Fock approximation Within variational approach, the ground state is a single Slater determinant (SSD), and only depends on a few numbers characterizing the interaction. No further specifics about the interaction are needed to assess the band structure of a strongly correlated system. This was done for half-filled zeroth Landau level in Refs <cit.>. In this case, we can attribute two out of four occupied sublevels in 0LL to each candidate state and thus associate it with a projector Δ onto the occupied subspace. Energy of a state corresponding to Δ is then <cit.> ℰ_HF/N_ϕ = 1/2∑_a=x,y,z g_a,H(Tr (τ_a Δ))^2 - g_a,FTr((τ_a Δ)^2) - h Tr ( σ_z Δ) where, if Ṽ(q) stands for a Fourier transform of pair potential, g_a,H=1/2π l^2_bṼ_̃ã(0) and g_a,F=1/(2π)^2∫_ℝ^2Ṽ_̃ã(q)e^-q^2l^2_B/2d^2 q are Hartree (direct) and Fock (exchange) couplings. This expands on the expression used in Ref. Kharitonov for pointlike potentials, for which V_H=V_F. For an expression where this identity no longer holds, it proved that states delivering extrema do not have to be either spin or valley ordered. Elaborated further, this approach led to finding evidence of new phases as well as their coexistence with those already discovered in the charge neutrality case. There were recent reports of using this method for certain fractional fillings as well <cit.>. 
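To make the use of this functional concrete, the following minimal sketch (ours; the basis ordering, the candidate projector and the coupling values are placeholder assumptions, not parameters taken from the cited works) evaluates ℰ_HF/N_ϕ for a simple single-Slater-determinant candidate.

    import numpy as np

    # Pauli matrices; tau_a act on the valley index, sigma_z on spin, in a spin (x) valley ordering (assumed).
    s0 = np.eye(2, dtype=complex)
    sx = np.array([[0, 1], [1, 0]], dtype=complex)
    sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
    sz = np.array([[1, 0], [0, -1]], dtype=complex)
    tau = [np.kron(s0, p) for p in (sx, sy, sz)]
    sigma_z = np.kron(sz, s0)

    def hf_energy(delta, g_H, g_F, h=0.0):
        # E_HF / N_phi for a rank-2 projector delta on the four-dimensional spin-valley space.
        energy = 0.0
        for a in range(3):
            t = tau[a] @ delta
            energy += 0.5 * (g_H[a] * np.trace(t).real ** 2 - g_F[a] * np.trace(t @ t).real)
        return energy - h * np.trace(sigma_z @ delta).real

    # Candidate ferromagnet: both valleys of the spin-up sublevel occupied.
    delta_fm = np.diag([1, 1, 0, 0]).astype(complex)
    g_H = g_F = [0.3, 0.3, -0.5]          # placeholder couplings (g_x = g_y = g_perp, g_z)
    print(hf_energy(delta_fm, g_H, g_F, h=0.1))

Comparing such energies across a set of candidate projectors is the comparison that the variational studies cited above automate.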
However, the state remaining a pure SSD is still limiting and prevents finding regions of parameters where a true ground state is more complicated. Besides, accounting for candidate states given by polarizations of their flavour components is rather involved even for the case of charge neutrality and becomes even more tedious for fractional fillings. In this paper, we will follow a more impartial method of exact diagonalization, which will also allow us to assess the low-lying excitations. However, other simplifications are due to make the problem tractable. §.§ Model pair potential and pseudopotentials The influence of the potential on the band structure of the system under consideration still boils down to its dependence on a set of numbers; they are known as Haldane pseudopotentials (Ref. Haldane89,Haldane83). We can, therefore, encode all the physically relevant features in a certain parametric family of model potentials. Pseudopotential V_n=V(nħ) can be thought of as the interaction energy of two electrons with relative angular momentum nħ. For USR interaction, all the V_>0=0, so V_0 is the only factor at play. One can consider changes in the features the system exhibits with the interaction obtaining range as the perturbation caused by subleading pseudopotentials moving away from zero. The natural choice of the parameter we add to tweak the interaction is its range; we want the model pair potential g_0 u_α(r_12) to reduce to USR limit with α (inverse range) tending to infinity. The range gives a scale at which the interaction is still non-negligible but decreases outside of it. The simplest case is when the electrons with close (but not equal) angular momenta become coupled, with coupling weakening with larger relative angular momentum. Lead by this logic, we picked “Yukawa-like" pair potential V(r)=g_0 α/2πe^-α r/r for a model interaction. Yukawa pseudopotentials V_n decrease monotonously with n. Pushing α→∞ gives the USR interaction V(r)=g_0 δ^(2)(r). Curiously, you can think of this interaction modelling the lattice-scale part of Coulomb as a “screened Coulomb" with a “Debye radius" depending on direction. It is also noteworthy that one can deduce the Hartree and Fock couplings from V_n (the details are given in Appendix <ref>) and, therefore, collate the numerical results obtained in this work to those done by variational methods. § EXACT DIAGONALIZATION SCHEME AND RESULTS It is convenient to rewrite the symmetry-broken Hamiltonian (<ref>) in terms of fermion operators (we will ignore Zeeman and valley-Zeeman terms unless stated otherwise in the rest of the paper to make the way pair potential influences the energy levels clear) H=∑_I_1,I_2,I'_1,I'_2(V^C_I_1,I_2;I'_1,I'_2+. . g· V^SR_I_1,I_2;I'_1,I'_2(θ, α_⊥, α_z)) c^†_I_1c^†_I_2c_I'_1c_I'_2 where I_j runs over all combinations of available one-particle angular momenta, valley, and spin quantum numbers. The four-fermion coefficients are sparse and possess numerous symmetries; for the exact expression as well as the details of the calculation, see <ref>. Here, we point out their most important qualitative features. The coefficients are sums of Coulomb term and valley Yukawa; the latter depends on the ranges and projectively depends on the strengths of interaction. That is, we can express it as a function of θ=arctang_⊥/g_z, with dependence on the radial term g=√(g^2_z+g^2_⊥) brought down to a mere prefactor. In the absence of symmetry-breaking terms, the Hamiltonian has full SU(4) symmetry. 
The ground eigenspace, therefore, is an irreducible representation of SU(4) or a multiplet. To each of those, one may assign the “flavour filling factor" in the following way. Suppose we fix a direction of polarization z and define the generators of SU(4) w.r.t. this choice. Then, each multiplet, similarly to the case of SU(2), has a number of highest-weight vectors. For each of those, the flavour (↑ K, ↑ K', ↓ K, ↓ K') population are good quantum numbers; the ordered quadruple of them fully defines the multiplet. (see Appendix <ref> for more details as well as generators of SU(4) given in fermionic terms) As the energy scale of symmetry-breaking terms is nonzero yet small compared to Coulomb, the band structure of our system is a series of levels perturbatively fanning out of consecutive SU(4) multiples, with leftover SU(2)_s× U(1)_v × (ℤ_2)_v symmetry (s subscript stands for spin and v for valley). As the Coulomb term contribution to each state descending from a multiplet is roughly the same, we can take all parameter dependencies of V^SR into account by taking fixed ranges α_z,⊥ and plotting the spectrum depending on θ. The Hilbert space we use is that of N_e electrons staying in the 0LL with arbitrary spin and valley polarization and the angular momentum not larger than N_ϕħ – therefore, occupying 4N_ϕ available sites. In other words, we perform exact diagonalization in disk geometry. The Hamiltonian (<ref>) commutes with the net spin S, net valley polarization T_z and net angular momentum operators; therefore, Hilbert space is divided into sectors where we can perform calculations independently, and we can assign relevant quantum numbers to candidate states. Disk geometry is usually not the first choice for quantum Hall systems. Although it was introduced in the pioneering paper on the topic <cit.>, it demonstrates poor performance in systems with long-range interactions <cit.>. However, the long-range part – Coulomb coupling – gives us the base level, and correction to it is what we are most curious about. The interaction causing these corrections, on the other hand, decreases exponentially outside the region limited by magnetic length or less; short-range interactions like that match the technique much better <cit.>. On the other hand, two alternative geometries, on a sphere <cit.> and on a torus <cit.>, have their own shortcomings. In spherical geometry for a multicomponent system, the Wen-Zee shift <cit.> (filling factor differing from N_e/N_phi) leads to a following discrepancy. If we have two systems with the same combined number of electrons but different flavour contents, we end up with different filling factors; this adds an artificial complication to comparing the band structures of those systems. Torus geometry adds a topological degeneracy to the energy levels for certain filling factors. It obscures the symmetries of the ground states, complicating the job of matching the ED results with the corresponding variational theory predictions. However, perhaps more importantly, it bloats the Hilbert space, requiring more computational resources to process. Taking these considerations into account, we choose disk geometry as both more natural and less computationally demanding and robust enough for the sort of interaction we are studying. §.§ Charge neutrality We start by presenting and analyzing the results of our calculations for the case of charge neutrality. We took N_e=12, N_ϕ=6 (thus two out of four spin-valley sublevels of 0LL filled). 
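To give a sense of the numerical scale behind these parameters (a small illustrative computation of ours, not a figure quoted in the text), the full Fock-space dimension and the effect of restricting to a symmetry sector can be estimated as follows.

    from math import comb
    from itertools import product

    N_phi, N_e = 6, 12
    print(comb(4 * N_phi, N_e))           # 2704156 many-body basis states in the full 0LL Fock space

    # S_z, T_z (and total angular momentum) commute with H, so the matrix is block diagonal.
    # Count the states with S_z = T_z = 0 by distributing electrons over the four flavours:
    sector = sum(
        comb(N_phi, a) * comb(N_phi, b) * comb(N_phi, c) * comb(N_phi, d)
        for a, b, c, d in product(range(N_phi + 1), repeat=4)   # (up K, up K', down K, down K')
        if a + b + c + d == N_e and a + b == c + d and a + c == b + d
    )
    print(sector)                          # already roughly an order of magnitude smaller
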
We take the square mean Yukawa coupling g/2π l^2_B=0.01 e^2/ϵ l_B. <ref> shows the (6,6,0) multiplet for cases of almost ultrashort and short-range interactions. §.§.§ USR α_⊥=α_z=(10^-4 l_B)^-1 θ=0 to 3π/4: the ground state belongs to the T_z=0, S=6 sector. Having the highest possible spin, this state can be identified as ferromagnetic. Due to unbroken SU(2)_s symmetry, it is natural that the ground state has 2S+1=13-fold degeneracy (therefore, the eigenspace comprises the whole T_z=0, S=6 sector, and the base states are pure Slaters). These results are consistent with both the mean-field approach in ref. Kharitonov and ED results in toric geometry ref. Wu. θ=3 π/4 to 5 π/4, the state comes from S=0, T_z=± 6 sector (double degenerate with (Z_2)_v acting on the eigenspace, base states of which are again pure Slaters). This we identify as a valley ferromagnet (CDW state), also consistent with <cit.>. The transition it undergoes from FM to CDW is first-order, as indicated by a true level crossing. θ=5 π/4 to 2π, the ground state is from T_z=S=0 sector with no degeneracy. The quantum numbers are consistent with both KD and AF states predicted by mean-field techniques; however, there are some key differences. First of all, the eigenstates predicted by our ED analysis are mixed. Second, degeneracy predicted by HF studies is no longer present in a finite system; e.g., AF state, having the largest Néel vector, cannot be a singlet. With this in mind, it is not surprising that ground state level crossing is absent at 7π/4, where a BO-AF transition was predicted in <cit.>. A level caving in towards the crossing of the first excited levels, however, hints at a possible tower of state collapse <cit.> as the finite system degeneracy lifting begins to wane as the system size grows. As this particular calculation was more meant to align our results with ones already established, we refrained from further analysis. It should be pointed out, though, that in ref. Wu, a similar feature was observed in toric geometry. The results for USR regime largely agree with preexisting findings. With this additional validation, we apply ED with disk geometry to the more intriguing case of short-range Yukawa interactions. §.§.§ SR α_⊥=α_z = (1/3 l_B)^-1 The ferromagnetic sector (T_z=0, S=6) occupies a smaller sector, from θ=0 to 3π/4-δ, δ∼π/50. On the border of the ferromagnetic sector and CDW, a new phase appears, with S=0, T_z=0. For the parameters we chose, it spans roughly the slice θ=3π/4;5π/6. We point out that it shares the spin-valley subspace with BO states, which is in line with lattice calculations in Ref <cit.>, showing that with symmetry-breaking interaction gaining range, a number of BO regions emerge. As the range becomes wider, the new phase domain increases, though its dependence on the interplay of different ranges is awaiting further research. The CDW area (S=0, |T_z|=6) is smaller, occupying the sector θ=5π/6;5π/4, as is the energy gap, while the rightmost transition point to the valley unpolarized state remains unaffected. As before, in the region θ=5 π/4;2π there is no level crossing. The ground state has no degeneracy and has T_z=S=0. It is tempting to treat the first excitation crossing as another indication of BO-AF transition happening at the thermodynamic limit; however, this question deserves a more careful treatment, and we will address it elsewhere. 
A new phase appearing at the border of FM and CDW alone is unambiguous evidence that the interaction gaining nonzero range entails novel features in the phase diagram for charge neutrality. We mention in passing that the gaps between the ground state and low excited ones look largely unaffected in “spin-ordered" zones and markedly different in “valley-ordered" parts. This is of interest for further inspection of the excitation structure. §.§ Filling 2/3 We studied the way energy levels are affected by long-range interaction for a certain filling factor 2/3. We chose it as it demonstrates additional degeneracy, which, as we show below, is lifted by relaxing the USR assumption. The case of fractional 0LL fillings was extensively treated in Ref <cit.> using an extension of HF method. In Ref <cit.>, ED calculations were performed for a number of fractions in the USR limit, validating some mean-field results and contradicting others. One of the key takeaways of that research was that the picture of phases for ν fillings is interlinked with the one emerging for charge neutrality. The nomenclature of these fractional phases is, therefore, defined by the state of underlying filled shells. §.§.§ USR θ=0;3π/4 – the states with valley polarization T_z=[-2;2] and spin S=4. The spin polarization is the highest possible for a state in the chosen multiplet, so we identify those states as ferromagnetic. Despite the Hamiltonian lacking SU(2)_v symmetry, both SU(2)_v and SU(2)_s act nontrivially on this eigenspace, with the presence of valley degeneracy due to the interactions being pointlike. This phenomenon has already been addressed in Ref. LeThierr for another filling factor; we take a somewhat different angle, which will be further unravelled in the next subsection. Atop the ferromagnetic shell, we have four more electrons, all occupying the same valley flavour; due to the Pauli principle, they all have different angular momenta. As the only pseudopotential that is nonzero is V_0, it has no effect on the energy of the states. This, however, ceases to be true for the short-range V we clarify below. θ=3π/4 to 5π/4 – states with valley polarization at its highest and S=2. Built atop the completely valley-polarized shell, those states represent the CDW phase. θ=5π/4 to 7π/4 – here the ground level has quantum numbers T_z=0,S=2 (a spin multiplet). The region and quantum numbers are in accord with <cit.>, so we identify this phase as Kékulé distortion. θ=5π/4 to 2π – the state with quantum numbers T_z=2,S=2, which we identify as antiferromagnetic. Unlike charge neutrality case, the bond ordered and AF states now belong to different spin-valley sectors, therefore transition between them now should include a level crossing; hereinafter we can safely make a conclusion that a first-order transition happens. As was already mentioned, the fractional phases are “built atop" the charge neutrality counterparts, as illustrated by the phase diagram, summed up in Fig. <ref>. There, the partition into phases (and, to an extent, the nature of these phases) is the same as in the charge neutrality case. However, as we introduce corrections to the USR Hamiltonian, the phase diagrams diverge. In <cit.>, the corrections caused by Zeeman splitting were researched; we show how introducing range affects the picture. §.§.§ SR (α_⊥ z^-1=0.2 l_B) The V_>0 pseudopotentials no longer vanish; the degeneracy we mentioned in the previous section is now lifted. 
We have “Ising-like" ferromagnet from 0 to π/4, with S=4, |T_z|=2 and XY-like ferromagnet with S=4, T_z=0 in the rest of the ferromagnetic sector. The transition is first-order. As in the half-filled system, a new state arises at the border of the ferromagnetic and CDW sectors. The quantum numbers for it are S=2, T_z=0-3 For the parameters we chose (V_1 ∼ 10^-2 V_0) it occupies a sliver of space, reducing FM by approximately 0.04 π and CDW by 0.1 π. Similarly to the USR interaction, the level crossing happens at 5π/4, indicating a first-order CDW-BO transition. The KD sector (S=2,T_z=0) loses ground to the AF sector (S=2, T_z=2). The transition happens in the sector (76 π/48,77 π/48). As we can see, with our model intervalley interactions gaining range, several novel features appear in the phase diagram. First, two competing ferromagnetic states that were indistinguishable at the USR limit emerge; the point where the spin-flop transition between them happens depends on the interplay of ranges. We elaborate on it further in Appendix <ref>. We also provide the band structures for larger ranges Second, similarly to the charge neutrality case, there are some drastic changes along the g_z+g_⊥=0 line, where in the USR limit, the system demonstrates a high-level SO(5) symmetry. We can state for certain that a new phase appears between the FM and CDW; numerical experiments suggest that it becomes more prominent as the ranges become longer. The border between AF and KD demonstrates a very curious behaviour as well, shifting in favour of the AF region and leaving the nature of AF-KD transition an open question; interestingly, there were suggestions of exotic transition between these phases in bilayer graphene <cit.>. These features deserve a separate detailed assessment, which is beyond the scope of this work. § DISCUSSION AND SUMMARY In this work, we show a proof of concept in understanding the effect of anisotropy interactions beyond the USR assumptions. We considered the case of neutrality and filling factor ν=2/3. For both of these cases, the phase diagram in the USR limit and without Zeeman splitting includes four phases, those being antiferromagnetic, Kékulé bond ordered, ferromagnetic and charge density wave. These results are in agreement with the previous studies. Presenting hardcore interaction as a marginal case of a one-parametric family of interactions modelled by Yukawa potential, we moved away from it by changing ranges. For the system thus obtained, we computed the energy levels using exact diagonalization in disk geometry. The structure of the latter showed how the simple four-phase diagram changes with the appearance of the range. At charge neutrality, we saw curious dynamics along g_z+g_⊥=0 high symmetry line. Along the border of F and CDW phases, a new bond ordered phase occurs. The transition between AF and KD phases shifts in favour of the former. The borders of AF-F and CDW-KD remain unchanged, with level crossings indicating first-order transitions. For the case of 2/3, the picture is expected to repeat some dynamics of the underlying half-filled system; indeed, we saw a shift in AF-KD transition and a new bond-ordered phase between F and CDW. The quantum numbers inherent to AF and KD phases in the system are different, unlike the neutrality case, and thus, we were able to witness a level crossing, making more convincing evidence of a phase transition; the nature of this transition, though, remains an open question. 
Another finding that was special for this particular filling factor was the observation of two different ferromagnetic phases emerging, with our model interaction lifting a degeneracy specific to the pointlike interaction in this fraction. This may be a promising way to test the validity of our model in experiments. Unlike the new bond-ordered phase we mentioned, those ones are expected to present at a wide range of parameters, even with interaction slightly deviating from the USR one. The accurate model for intervalley interaction, to the best of our knowledge, is still not available. Instead, we chose the simplest ansatz demonstrating physically justified properties to make initial predictions of emerging phases that can be proved or disproved experimentally in the future. Curiously, some of our predictions align with results obtained by completely different methods <cit.>. We intend to conduct a more detailed study of the AF-KD transition and investigate the connection between breaking SO(5) symmetry and non-USR potentials. We hope this work serves as proof of concept for researching more involved systems. This can be extended to other interesting fractions, such as double-layer graphene, bilayer and trilayer graphene, and non-abelian phases. We would also like to do some future studies where we would like to change externally controlled parameters like spin and valley Zeeman. We thank Yuval Gefen and Ganpathy Murthy for useful discussions. O.G. gratefully acknowledges the support and hospitality of Weizmann Institute of Science, emergency program coordinator Joel Sussman and his host during his stay at the Condensed Matter Department, Yuval Gefen, among others. A.D. was supported by DFG MI 658/10-2 and DFG RO 2247/11-1. A.D. also thanks the Israel Planning and Budgeting Committee (PBC) and the Weizmann Institute of Science, the Dean of Faculty fellowship, and the Koshland Foundation for financial support. A.D. thanks IISER Tirupati start-up grant for support. Our implementation of the exact diagonalization method was based on the open-source DiagHam package. § HAMILTONIAN FOR DISK GEOMETRY Making the representation of the interaction parts of Hamiltonian H_C+H_v (<ref>),(<ref>) more appropriate to use in ED, we rewrite it in second-quantized language H_SR=1/2∫ d^2 r_1 d^2 r_2 V_0(r_12) N(r_1) N(r_2)_H^0_SR+ 1/2∫ d^2 r_1 d^2 r_2 (V_⊥(r_12) [T_x(r_1)· T_x(r_2)+T_y(r_1)· T_y(r_2)]+ V_z(r_12)T_z(r_1)· T_z(r_2))_H^⊥_SR+H^z_SR, where V_0(r)=e^2/ϵ r is Coulomb and V_z,⊥=v_0α_z,⊥e^-α_z,⊥ r/r are Yukawa-like potentials, N(r) is the occupation number operator, and T_a(r) are second-quantized valley operators, which are easily expressed through field operators as N(r) = ∑_t=K,K'∑_s=↑,↓ψ̂^†_t,sψ̂_t,s T_a(r) = ∑_t_1,t_2∑_sψ̂^†_s t_1(r)(τ_a)_t_1 t_2ψ̂_s t_2(r) with field operators being expressed via 0LL wavefunctions ψ̂_s,t (r) = ∑_j=0^N_ϕ-1<r| . 0, j >_s,tâ_j,s,t Plugging in all of the above into (<ref>), and with some simple Pauli matrix algebra, we obtain (j_k run over all possible angular momenta, s_l run over all possible spin polarizations) H^0_SR = ∑_s_k,t_k∑_j_1,j_2,j_3,j_4 V^0_j_1j_2j_3j_4 a^†_j_1,s_1,t_1a^†_j_2,s_2,t_2a_j_3,s_2,t_2a_j_4,s_1,t_1; H^⊥_SR = ∑_s_1,s_2∑_j_1,j_2,j_3,j_4 2 V^⊥_j_1j_2j_3j_4(a^†_j_1,s_1,Ka^†_j_2,s_2,K'a_j_3,s_2,Ka_j_4,s_1,K' + a^†_j_1,s_1,K'a^†_j_2,s_2,Ka_j_3,s_2,K'a_j_4,s_1,K); H^z_S-R = ∑_s_1,s_2∑_j_1,j_2,j_3,j_4 V^z_j_1j_2j_3j_4(a^†_j_1,s_1,Ka^†_j_2,s_2,Ka_j_3,s_2,Ka_j_4,s_1,K + a^†_j_1,s_1,K'a^†_j_2,s_2,K'a_j_3,a_2,K'a_j_4,s_1,K'). 
with the four index coefficients being expressed through the coordinate part of 0LL states V^·_j_1j_2j_3j_4= < 0, j_1; 0, j_2| V^·| 0, j_3; 0, j_4> § MATRIX ELEMENTS In this work, we study how the energy levels of a system depend on a certain set of parameters (in our case, ranges of intervalley interactions) by tweaking which we change the pair potential. To make the calculations of matrix elements more effective, it is useful to single out those parts that essentially depend on these parameters and those that are purely geometric. The former is limited to a series of numbers called Haldane pseudopotentials <cit.>, already mentioned in the main text. We deal with a system of charged particles subject to a magnetic field, interacting via rotationally invariant pair potential V_α(r_i,r_j)=V_α(|r_i-r_j|). Limiting ourselves to a projection of V to 0LL, and, if M stands for the center of mass and m for relative angular momentum: V̂=∑_M,m<M,m|V|M,m>|M,m><M,m|=∑_m V_m P̂_m Haldane's key observation was that V matrix elements V_m=<M,m|V|M,m> only depend on m, justifying the last equation. V_m, called Haldane pseudopotentials encode the essential part of V(r). If the Hilbert space of our system is that of N_e particles bound to 0LL with angular momentum number not exceeding N_ϕ (disc geometry), to project our Hamiltonian means to compute the following matrix elements: <0, n_1; 0, n_2 |V̂|0,n_3; 0,n_4>=∑_m V_m ∑_M <0, n_1; 0, n_2 . |M,m><M,m|.0,n_3; 0,n_4> Then, using explicit expressions for the wavefunctions in 0LL, |M,m> = 1/2π l_B^21/(2l_B)^M+m√(m! M!) (z_1+z_2)^M (z_1-z_2)^m exp(-z_1 z_1 + z_2 z_2/4 l^2) |0,n_1;0,n_2>= 1/2π l_B^21/√(n_1! n_2! (2l_B^2)^n_1+n_2) z_1^n_1 z_2^n_2exp(-z_1 z_1 + z_2 z_2/4 l_B^2) we compute the scalar products separately <0, n_1; 0, n_2 . |M,m> = 1/√(2^n_1+n_2)√(C_n_1+n_2^m/C_n_1+n_2^n_1)∑_k_1+k_2 = n_1 (-1)^k_2 C_n_1+n_2-m^k_1 C_m^k_2δ_n_1+n_2,M+m and pseudopotentials V_m = 1/2^2m+1m!∫_0^∞dϱϱ^2m+1V(l_B ϱ)e^-ϱ^2/4 Though we use exact diagonalization, it is useful to deduce Hartree (direct) and Fock (exchange) terms, arising in the Hartree-Fock approximation of pair potential via Haldane pseudopotentials. These are usually given by an expression through Fourier transform of interaction <cit.> g_H=ṽ(q=0)/2π l^2_B, g_F=∫_ℝ^2d^2q ṽ(q)e^-q^2 l^2/2/(2π)^2. Using an alternative expression for Haldane pseudopotentials <cit.> V_m=∫d^2 q/(2π)^2ṽ(q) e^-q^2 l_B^2 L_m(q^2) and well-known decompositions of exponent and Dirac delta in terms of Laguerre polynomials δ(x)=e^-x/2∑_k≥ 0 L_k(x), e^x/2=2∑_k≥ 0 (-1)^k L_k(x) we have. g_H=2 l^2_B ∑_0^∞ V_m, g_F = 2 l^2_B ∑_0^∞ (-1)^m V_m This connection provides us with some information on the nature of interaction by looking at the associated pseudopotentials. For example, if all are nonnegative, then the Hartree term is always greater than the Fock term; moreover, if the pseudopotentials are monotonous, the Fock term will be positive in itself. These statements will be useful for assessing the concrete pair potential we use. § HALDANE PSEUDOPOTENTIALS FOR YUKAWA-LIKE INTERACTION Apart from physical motivation given in the main text, Yukawa-like interactions V_α(r)= v_0 α e^-α r/r come with a purely computational advantage, as it is easy to compute respective Haldane pseudopotentials and observe the relevant qualitative features. From (<ref>) we deduce, substituting ϱ=2α l_B (√(t+1)-1) V_m=v_0 α l_B ∫_0^∞ dϱϱ^2m e^-α l_B ϱe^-ϱ^2/4/2^2m+1m! 
= v_0α^2 l_B^2/2^2m+1m!∫_0^∞(2α l_B)^2m e^-α^2 l_B^2 t f(t) dt, where f(t) denotes the last factor of the integrand, f(t)=(√(t+1)-1)^2m/√(t+1). For this function, the first m derivatives vanish at zero, f^(k)(0)=0, and f^(m)(t)=Γ(m+1/2)/Γ(1/2)t^m/(t+1)^m+1/2. Integrating by parts, we arrive at V_m=Γ(m+1/2)/2Γ(1/2)v_0α^2 l_B^2/m!∫_0^∞ e^-α^2 l_B^2 t t^m (t+1)^-m-1/2 dt=v_0α^2 l_B^2/2Γ(1/2)Γ(m+1/2)U(m+1;3/2;α^2 l_B^2) where U(k,l,a) is the confluent hypergeometric (Tricomi) function. As the range of the interaction tends to zero, α→∞, and the asymptotics of the Tricomi function give U(m+1;3/2;α^2)∼1/α^(2(m+1)) <cit.>. Therefore, the ultra-short range limit indeed gives 2V_0=v_0, V_>0=0. One important thing to note is that these pseudopotentials are nonnegative and monotonic. As we already said, this means that regardless of the values of the parameters, the Hartree term will exceed the Fock term in our model. This may seem rather limiting, as new features occur mostly outside this regime <cit.>; however, we argue that the very simplicity of the Yukawa model allows for a clearer picture; we may miss some phenomena, but those that do occur already in this model are made more striking and easier to trace. Besides, spectra of systems with arbitrary pseudopotentials can be computed just as easily (see <ref>), so this question is to be addressed in future works. To illustrate the properties of the pseudopotentials for the parameters chosen for the systems we studied, we refer to <ref>. There are several key points we want to emphasize. For our `effectively USR' interaction, even the first pseudopotentials are indeed at least 9 orders of magnitude weaker than V_0. With the range increasing, V_0 starts to dwindle and V_>0 increases; as the range approaches the magnetic length, the change in the pseudopotentials accelerates. Finally, one can see the asymptotics that all the V_n share regardless of the range. We chose to plot a double logarithm of V_n; due to the variation in scale, it was the optimal way to showcase all those features in one chart, although not without drawbacks. It should be pointed out that Haldane pseudopotentials for Yukawa interactions were also treated for a different problem in <cit.>; the resulting expressions agree with our conclusions. § SU(4) MULTIPLETS OF STATES To fully describe a state of our system, we use the Fock space of N_e fermions with spin and valley spin 1/2, each of them having an angular momentum n∈ [0, N_ϕ]. This space delivers a C_4 N_ϕ^N_e dimensional representation of SU(4), which breaks into a direct sum of irreducible representations, or multiplets. Below, we sum up those statements regarding SU(4) representation theory that are most relevant to the studies of multielectron states in graphene; we also choose the language so that their connection to physics is as transparent as possible. Firstly, we adopt, following ref. FischerDzRom, the nuclear physics notation for SU(4) spin configurations, denoting ↓ K',↓ K, ↑ K', ↑ K as down, up, strange, and charmed (d,u,s,c), respectively. This way, we can use it both for the ℂ^4 space of one-electron SU(4) polarizations, where it acts tautologically, and for the many-electron spaces of states. Like su(2), the multiplets of su(4) coincide with multiplets of a corresponding special linear algebra; i.e. we may study the generators of sl(4,ℂ) instead. Among those, there are 3 traceless diagonal matrices and 12 upper/lower-diagonal ones. 
We choose the basis in such a way that the 3 diagonal generators correspond to spin, valley spin and Néel vector polarizations (2S_z=C_dd+C_uu-C_ss-C_cc, 2T_z=C_dd-C_uu+C_ss-C_cc, 2N_z=C_dd-C_uu-C_ss+C_cc), and the rest are "matrix unities" C_km. The commutation relations are deduced from those of the matrix unities, [C_ij,C_kl]=δ_ilC_jk-δ_jkC_il. Now, for a multiplet, a basis can be chosen so that it is an eigenbasis for each of the (commuting) polarization operators, so S_z, T_z, N_z are good quantum numbers. Out of the remaining 12 operators, for each polarization, four will commute with its operator, leaving the quantum number unchanged; four will increase it by 1 and four will decrease it. If we represent each vector by the triplet of its quantum numbers, it will span a convex symmetric polyhedron, with its vertices standing for "highest weight" vectors. Choosing the vector such that S_z≥ T_z≥ N_z, we can define a Young diagram with three rows consisting of L_1=S_z+T_z, L_2=S_z-N_z, L_3=T_z-N_z cells; such a diagram defines an irrep uniquely. If we take p_1=L_1-L_2=T_z+N_z, p_2=L_2-L_3=S_z-T_z, p_3=L_3=T_z-N_z, the dimension of a representation is given by the formula <cit.> d(p_1,p_2,p_3)=1/2!3!(p_1+1)(p_2+1)(p_3+1)(p_1+p_2+2)(p_2+p_3+2)(p_1+p_2+p_3+3) In the case of SU(2), it is easy to deduce to which multiplet an S_z eigenvector belongs – we should either compute the action of the (quadratic Casimir) operator S^2 on it, or act with the step operators S_± until we reach the highest weight vector. Similarly to these strategies, it is possible to classify the vectors by the action of three Casimir operators <cit.>, or to act until we reach the "corner" of our irrep – the highest weight vector. In practice, however, we judge by the action of the unperturbed Hamiltonian (which can in principle be expressed in terms of Casimirs), and then check the dimension of our eigenspace against (<ref>) to make sure that there is no further degeneracy of several multiplets. In terms of fermionic Fock operators in the central Landau level, the set of aforementioned matrix unities looks like <cit.> C_i,j=∑_m=0^N_ϕ-1ĉ^†_m,iĉ_m,j where i,j are any of the 16 combinations of d,u,s,c. Notice that these operators commute with total angular momentum. This means that to study a certain su(4) multiplet, you may restrict yourself to a certain angular momentum sector, thus greatly reducing the number of calculations needed. To make all of the above more tangible, we can consider a concrete case of charge neutrality with N_ϕ=2, N_e=4. This means the relevant Young diagram is the 2,2 diagram, i.e. two rows of two cells each. Placing the vectors of such a multiplet in a three-dimensional space spanned by the quantum numbers S_z, T_z, N_z, all the highest weight vectors will form the set of vertices of an octahedron, their descendants under the action of "ladder operators" will occupy the middle points of the octahedral edges, and finally, their second-generation descendants will sit at the origin. The states are shown by their flavour content. A part of these states are shown in <ref>; one can imagine algebra generators acting along certain edges of the octahedron. § SPIN FLOP TRANSITION Ferromagnetic phase splitting into two competing versions in fractional filling 2/3 with interactions gaining range is potentially a useful feature that can be tested in experimental settings. The presence of a phase transition between two distinct ferromagnets may prove that effective interactions have range, and its details may provide a glimpse into the specifics of these interactions. 
One of the simplest properties that can be additionally studied is how the predominance of each of these phases changes when the ranges are not isotropic. In the main text, we chose an isotropic system because, not contradictory to our expectations, the phase transition occurred at line v_z=v_⊥. As a level crossing occurs in one of the test points, the transition is more evident for this system. But it is interesting to study the dynamics of this transition with changing ratio of ranges. For this study, we chose a slightly larger range than the one considered in the main text. This way, although it makes the whole phase diagram harder to map, the transition we are focusing on became clearer (cf left panel of <ref>, for α_z l_B=1, α_⊥ l_B=1.3, with degeneracy lifting clearly visible on a chart). We fix α_z l_B = 2 and give different values to α_⊥ l_B, going through the powers of 2. This way, we can access the asymptotic behaviour as α_⊥l_B goes to the USR range. To put a picture in a broader frame, we also included systems with ⊥ range larger than z range; these systems being less suitable for disk geometries, we considered only a couple of such terms. We see that as the (α l_B)^-1 range tends to zero, the transition moves in favour of Ising ferromagnetic phase, swapping the whole quadrant as valley-valley interaction stays finite range in z direction but becomes USR in perpendicular. Other numerical experiments do not indicate that this limit depends on α_z; thus, a region of the phase diagram where XY-FM phase persists remains even if only one component of interaction is non-USR.
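As a practical aid for reproducing range scans like the one above, the closed-form Yukawa pseudopotentials derived in the pseudopotential appendix can be evaluated directly with SciPy's Tricomi function; the short sketch below is ours, with v_0 = l_B = 1 used as placeholder units.

    import numpy as np
    from scipy.special import gamma, hyperu

    def yukawa_V(m, alpha, v0=1.0, l_B=1.0):
        # V_m = v0 (alpha l_B)^2 / (2 Gamma(1/2)) * Gamma(m + 1/2) * U(m + 1, 3/2, (alpha l_B)^2)
        a2 = (alpha * l_B) ** 2
        return v0 * a2 / (2.0 * gamma(0.5)) * gamma(m + 0.5) * hyperu(m + 1, 1.5, a2)

    for alpha in (1.0e4, 2.0, 0.5):        # from an effectively USR interaction to l_B-scale ranges
        V = [yukawa_V(m, alpha) for m in range(4)]
        print(f"alpha * l_B = {alpha:g}:", np.round(V, 6))
    # In the alpha -> infinity limit only V_0 survives (2 V_0 -> v0); for shorter alpha the
    # subleading V_m grow, which is the degeneracy-lifting effect discussed in this appendix.
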
http://arxiv.org/abs/2408.11445v1
20240821085526
Verifying Approximate Equilibrium in Auctions
[ "Fabian R. Pieroth", "Tuomas Sandholm" ]
cs.GT
[ "cs.GT", "econ.TH" ]
§ ABSTRACT In practice, most auction mechanisms are not strategy-proof, so equilibrium analysis is required to predict bidding behavior. In many auctions, though, an exact equilibrium is not known and one would like to understand whether—manually or computationally generated—bidding strategies constitute an approximate equilibrium. We develop a framework and methods for estimating the distance of a strategy profile from equilibrium, based on samples from the prior and either bidding strategies or sample bids. We estimate an agent's utility gain from deviating to strategies from a constructed finite subset of the strategy space. We use PAC-learning to give error bounds, both for independent and interdependent prior distributions. The primary challenge is that one may miss large utility gains by considering only a finite subset of the strategy space. Our work differs from prior research in two critical ways. First, we explore the impact of bidding strategies on altering opponents’ perceived prior distributions—instead of assuming the other agents to bid truthfully. Second, we delve into reasoning with interdependent priors, where the type of one agent may imply a distinct distribution for other agents.
Our main contribution lies in establishing sufficient conditions for strategy profiles and a closeness criterion for conditional distributions to ensure that utility gains estimated through our finite subset closely approximate the maximum gains. To our knowledge, ours is the first method to verify approximate equilibrium in any auctions beyond single-item ones. Also, ours is the first sample-based method for approximate equilibrium verification. § INTRODUCTION A central problem in mechanism design is understanding the strategic incentives of participants–in order to design mechanisms that lead to desired outcomes. A Bayesian Nash Equilibrium (BNE) <cit.> represents a fixed point in strategy space, where no agent has an incentive to deviate. This concept constitutes the central solution concept for games with incomplete information, such as auctions. Mechanism design devotes significant attention to designing incentive-compatible mechanisms, where truthful bidding constitutes a BNE <cit.>. When bidders are aware that it is in their best interest to report their true valuation for an item, this knowledge leads to several desired effects. For example, one can guarantee efficient outcomes, ensuring that the item is allocated to the bidder who values it the most. Furthermore, it simplifies the strategic decision-making process for the bidders, thereby saving resources. Nonetheless, practitioners typically employ mechanisms that are not incentive compatible, referred to as manipulable mechanisms. For instance, the first-price mechanism is commonly used in real-world auctions.
In the context of multi-unit sales, the U.S. Treasury has utilized discriminatory auctions for selling treasury bills since 1929 <cit.>. Moreover, combinatorial auctions in practice typically use manipulable mechanisms such as first-price payments for their simplicity and other desirable features, or core-selecting payment rules intended to ensure the winners' payments are sufficient to maintain envy-freeness <cit.>. Several factors were identified why manipulable mechanisms are prevalent in practice. First, their rules are typically more straightforward to communicate. Second, incentive-compatible mechanisms have the potential to more readily expose the bidders’ confidential private information <cit.>. Third, if information acquisition is costly, even incentive-compatible mechanisms have no dominant strategy for making information-gathering or valuation-computation decisions <cit.>. Additionally, incentive-compatible mechanisms, like the VCG mechanism <cit.>, exhibit significant drawbacks in combinatorial auction contexts. First, they can result in minimal or even null revenues in spite of intense competition for the items <cit.>. Second, they may encourage collusion <cit.>. Third, they may beget arbitrage opportunities <cit.>. Fourth, mechanisms that are incentive compatible in single-shot settings—like the VCG—typically do not remain incentive compatible over time across auctions where complementary or substitutable items are sold <cit.>. Fifth, in scenarios such as sourcing, the repeated application of an incentive-compatible mechanism is not incentive compatible as the bid taker uses bids from one auction to modify the parameters (reserve prices or more sophisticated parameters) of future auctions <cit.>. Despite the significant academic work in auction theory, equilibrium strategies for manipulable mechanisms are primarily known only for very restricted, simple market models, such as single-item auctions with independent prior distributions <cit.>. Even worse, equilibria are not known to exist in general, but only in specific settings <cit.>. Fortunately, every strategy profile can be considered a ε-Bayesian Nash Equilibrium (ε-BNE) for some approximation factor ε > 0. Intuitively, ε measures the potential utility gain an agent could achieve by deviating from its current strategy, assuming the other agents’ strategies remain unchanged. As a result, recent efforts have concentrated on identifying strategies with an ε as small as possible. Several computational techniques have demonstrated promise in discovering strong bidding strategies (e.g.,<cit.>). Although there is strong empirical evidence suggesting that the approximation factor ε is small for the computed strategies, their theoretical guarantees are limited in settings where no analytical equilibrium is available. <cit.> rely on significant assumptions, including complete knowledge of the joint and marginal prior distributions, and their results are restricted to single-item auctions. <cit.> introduce error bounds based on the precise calculation of metrics that are typically intractable to compute, such as the best-response ex interim utility. Meanwhile, <cit.> employ a sampling-based strategy but do not provide error bounds. §.§ Contributions We introduce techniques with provable guarantees that identify the smallest approximation factor ε for a strategy profile. Our methods require only access to samples from the type and bid distribution. 
The bids can either be observed directly, or, given access to the strategies, one can map the sampled types to their corresponding bids. Our results are applicable to single- and multi-item auctions with independent and interdependent prior distributions. We analyze both the ex interim and ex ante settings.[We exclude the study of ex post approximate equilibrium from our analysis because these concepts are based on worst-case, distribution-independent notions, rendering it impractical to assess through sampling from agents' type distributions.] In the ex interim case, we bound the amount any agent can improve its utility by deviating from its current strategy, in expectation over the other agents' types, regardless of its own true type. In the weaker ex ante setting, the expectation also includes the agent's own true type. Our estimate is simple. It measures the maximum utility an agent can gain by deviating from its current strategy, averaged over the samples, where the alternative strategies considered are from a finite subset of the strategy space. We present upper bounds in the ex interim case, denoted by ε̂, and in the ex ante case, denoted by ε̃. Specifically, we offer ex interim guarantees ε̂ for scenarios with independent prior distributions and ex ante guarantees ε̃ for interdependent prior distributions. Prior sampling-based methods operated under the assumption that agents play truthfully and have independent prior distributions, meaning they were only capable of verifying the truthful strategy under independent priors <cit.>. We expand upon this in two significant ways. First, our results hold for a large class of bidding strategies (as long as bids can neither change too fast nor too slowly as a function of an agent's type). This class satisfies common assumptions on equilibrium strategies made in auctions, such as monotonicity <cit.>. Second, we introduce findings for interdependent prior distributions. To achieve this, we consider a partition ℬ of an agent's type space and establish an upper bound on the estimation error that utilizes the maximum total variation distance between the opponents' conditional distribution within each element B ∈ℬ. In the arXiv version of their EC-19-Exemplary-AI-Paper-Award-winning extended abstract, <cit.> also presented—among other results—ex ante guarantees for interdependent prior distributions. However, they retracted that result after we pointed out that it is incorrect <cit.>. That approach was flawed because it did not consider bidding strategies that can be functions of a bidder's type. We apply our estimation technique across several important auction classes. For instance, in the first-price auction, our error bound for a B ∈ℬ is Õ(τ_B + (n +(κ_B L_β^-1_max)^-1) /√(N_B)), where n is the number of bidders, N_B is the number of samples within B, [0, κ_B] denotes the range of the prior density, L_β^-1_max is the maximum Lipschitz constant of an agent's inverse bidding strategy, and τ_B denotes the maximum total variation distance among the conditional prior distributions for types from B. It is important to note that τ_B does not need to become small for every B in order to provide meaningful ex ante guarantees, as the overall bound for the entire partition can still be small in expectation. For the case of independent prior distributions, this bound improves to become an ex interim guarantee with τ_B=0 and N_B = N. 
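To give a feel for how this bound behaves, the following snippet evaluates it numerically, ignoring the logarithmic factors hidden in the Õ notation; all parameter values are made up purely for illustration.

import math

def ex_ante_bound(tau_B, n, kappa_B, L_inv_max, N_B):
    # tau_B + (n + (kappa_B * L_inv_max)^(-1)) / sqrt(N_B), up to hidden log factors
    return tau_B + (n + 1.0 / (kappa_B * L_inv_max)) / math.sqrt(N_B)

for N_B in (10**3, 10**4, 10**5):
    print(N_B, ex_ante_bound(tau_B=0.01, n=3, kappa_B=2.0, L_inv_max=1.5, N_B=N_B))
# the sampling term decays like 1/sqrt(N_B), while tau_B remains as a floor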
We present similar results for a variety of auction formats, including combinatorial first-price auctions, uniform-price auctions, and discriminatory auctions. Key challenges To prove our guarantees, we aim to estimate the maximum possible amount an agent can improve its utility by deviating from its current strategy in both the ex interim and ex ante cases, respectively. We determine our error bounds, ε̂ or ε̃, by quantifying the extent to which an agent may improve its utility, averaged over the samples, when considering alternative strategies from a finite set. To achieve this, we encounter two major technical challenges. The first challenge arises from the limitation of searching over a finite set, potentially causing an agent to miss strategies that could significantly improve its utility. This occurs from auctions often having discontinuities in the utility functions. For instance, in both first- and second-price auctions, a slight increase in an agent's bid from just below to just above the highest bid of other agents alters the allocation, resulting in a sudden jump in utility. For a given type of an agent, we consider a grid with an edge length w over the action space, assuming the action space is [0, 1]^m for some integer m. The critical question then becomes how much potential utility might be missed when searching over this finite grid and the effect of w on this potential loss. To tackle this issue, we utilize the concept of dispersion <cit.>. In broad terms, a set of piecewise Lipschitz functions is (w, k)-dispersed if every ball of radius w in the domain contains no more than k discontinuities of the functions. Given N samples from the prior and bidding distributions, we examine the dispersion of a set of ex post utility functions, each defined by a sample and varying over one agent's bid. We demonstrate that if this set of functions is sufficiently dispersed, it is possible to control the error by searching for a best response over a finite grid with edge length w, rather than in the infinite action space. Crucially, we establish sufficient conditions on both the prior distribution and bidding strategies to ensure this approach is viable. The second major challenge arises under interdependent prior distributions. In such contexts, an agent gains additional information about the opponents' prior distributions upon learning its type. Given the continuous nature of these distributions, the probability of drawing the identical type more than once is zero, leading to the expectation that one would not collect more than a single sample from the same conditional prior distribution. We tackle this issue by considering a partition ℬ of the type space for each agent and grouping samples that fall into the same element B ∈ℬ. We demonstrate that if the total variation distance for the conditional distributions from types within B is sufficiently small, then the aggregated samples can provide valuable insights about the conditional prior distributions for all types from B. Finally, provided that the intrinsic complexities of the agents' utility functions are manageable (as determined by the learning-theoretic concept of pseudo-dimension <cit.>), our empirical estimates ε̂ and ε̃ quickly converge to the true approximation factors as the sample size increases. 
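The grouping step for interdependent priors can be pictured with a small sketch: samples are pooled according to which partition element B their observation o_i falls into, and each pool is then treated as one conditional dataset. The uniform partition of [0, 1] and all names below are our own illustrative choices.

import random

def cell_index(o_i, num_cells):
    # which element of a uniform partition of [0, 1] the observation falls into
    return min(int(o_i * num_cells), num_cells - 1)

random.seed(0)
samples = [(random.random(), random.random()) for _ in range(8)]  # (o_i, opponent bid) pairs
groups = {}
for o_i, bid in samples:
    groups.setdefault(cell_index(o_i, num_cells=4), []).append(bid)
print(groups)  # bids pooled per cell B_k; useful only if the conditionals within B_k are close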
Nobel laureate William Vickrey <cit.> was the first to model markets as games of incomplete information, using the Bayes-Nash equilibrium concept. Participants do not have complete but only distributional information about the valuations of competing market participants. An equilibrium bid function determines how much they bid based on their value draw and their knowledge of the prior distribution. We build upon the award-winning work by <cit.>, which gives approximate incentive-compatibility guarantees for different kinds of auctions, that is, it bounds the maximum utility loss of the truthful strategy. We extend that work in two fundamental ways. First, we give guarantees for non-truthful strategy profiles: the estimator is simply the maximum utility an agent can gain by deviating from its current bid, on average over the opponents' (or all) type samples, where finitely many bids and types are considered as alternatives. In the case of independent prior distributions, our guarantees do not rely on knowledge of the prior distribution; we only need a dataset of samples, which is important for some practical applications <cit.>. Second, in the case of interdependent prior distributions, our guarantees depend on a smoothness assumption on the opponents' conditional prior distributions. While this is restricting, influential models in the microeconomic literature work with known interdependent prior distributions <cit.>. We show that our assumptions hold for the important mineral rights correlated prior setting <cit.>, and we verify the first known approximate equilibrium in this setting under the first-price auction, to the best of our knowledge. §.§ Equilibrium Computation Over several decades, researchers have derived equilibrium strategies analytically for several settings; in particular, many equilibrium strategies are known for the single-item setting <cit.>. However, several modeling choices relevant for practice make analytical derivations hard or impossible. For example, no explicit characterization of BNE strategies is known for first-price sealed-bid auctions of multiple homogeneous goods (multi-unit auctions), nor for first-price sealed-bid combinatorial auctions where bidders can submit bids on packages of goods <cit.>. One particularly challenging assumption is interdependent information available to the bidders <cit.>. Even for single-item auctions, the specification of equilibria may end up in a system of partial differential equations with no available closed-form solution <cit.>. <cit.> perform an abstraction of the strategy space by allowing only linear bidding strategies; while they can recover known equilibrium strategies within this abstracted strategy space, they do not show that the abstraction error itself can be controlled. Similarly, <cit.> discretize the bidding space, allowing only finitely many actions while the valuation space remains continuous; they do not show whether their approximate equilibrium strategies translate to the continuous game. <cit.> and <cit.> use neural networks in combination with pseudo-gradient algorithms to learn equilibrium strategies in auctions.
While they perform a Monte-Carlo based approach to estimate the utility loss similar to ours presented in Section <ref>, they do not provide a theoretical analysis of this approach. § RELATED RESEARCH In this section, we discuss additional related work on equilibrium-verification methods, emphasizing the contributions and limitations of prior efforts. Estimating approximate incentive compatibility <cit.> introduce techniques to estimate the proximity of a mechanism to being incentive compatible, specifically addressing the utility loss associated with truthful strategies. By analyzing samples from agents' type distributions, their method evaluates potential utility gains from misreporting types, leveraging finite subsets of the type space. The work provides PAC-guarantees for the approximation of incentive compatibility, utilizing the pseudo-dimension and dispersion of utility functions, and applies these techniques across a variety of common auction formats. The paper received the Exemplary AI Track Paper award at the ACM Conference on Economics and Computation (EC) in 2019. Building on these results, we derive sampling-based error bounds and extend them in two significant directions: first, by accommodating strategic bidding, thereby determining the utility loss for strategies beyond truthful bidding, and second, by offering guarantees for interdependent prior distributions as well. Verification via game abstraction Game abstraction is a key technique for solving large imperfect-information games <cit.>, and has led to breakthroughs such as superhuman AIs for two-player limit Texas hold'em <cit.>, two-player no-limit Texas hold'em <cit.>, and multiplayer no-limit Texas hold'em <cit.>. The basic idea is that the game is automatically abstracted into a smaller game, then the smaller game is solved for (approximate) equilibrium, and then the strategies are mapped back into the original game. However, most game abstraction techniques do not yield guarantees for equilibrium approximation in the original game <cit.>. <cit.> developed a lossless abstraction technique for games with finite actions and finite states that yields an exact equilibrium in the original game, but the abstracted game to be solved is only about two orders of magnitude smaller than the original game, so that does not scale to very large games. More recently, game abstraction techniques that can abstract more and still yield a provably approximate equilibrium in the original game have been developed <cit.>. Some game abstraction work has focused on games with continuous actions <cit.>. However, these models typically are not rich enough to model a Bayesian game with continuous types and actions, and have not yielded techniques for verifying approximate equilibrium in auctions. <cit.> perform an abstraction by discretizing the valuation and bidding spaces to compute and verify equilibrium using distributional strategies, providing theoretical guarantees that the abstraction error can be controlled in the case of single-item auctions. Their verification results assume explicit access to both the joint and marginal prior density functions, allowing for querying at specific points and integrating over cells of their discretization. In contrast, our results only assume access to the prior distribution through sampling. Additionally, our findings are also applicable to multi-unit auctions and combinatorial auctions. 
<cit.> offer a verification method using a limiting argument applicable to sequential games with continuous observation and action spaces. However, their assumption of continuous utility functions means that their results do not directly apply to auctions. Instead, they rely on a game abstraction strategy that involves smoothing the allocation and price functions, drawing from the work of <cit.>. They demonstrate that the abstraction error can be controlled for single-unit auctions with independent prior distributions. In contrast to their work, we provide explicit bounds that can be computed for a specific setting and sample size. Additionally, our results apply to multi-unit and combinatorial auctions, and with interdependent priors. Verification methods in full auction games <cit.> propose a reinforcement learning-based method to estimate a lower bound on the maximum utility loss. While this can provide valuable insight into potential gains from deviation, it does not verify whether the candidate strategy profile is an approximate equilibrium. On the other hand, the work by <cit.> introduces a verification method for approximate equilibrium strategies in combinatorial auctions with independent prior distributions. Similar to our approach, they approximate the utility loss at a finite number of grid points. They employ a Monte-Carlo sampling method to estimate the expected utility and use a local search algorithm to estimate the best response for each grid point. Additionally, they exploit a convexity property of the best-response ex interim utility to provide an upper bound of the utility loss for all valuations between the grid points. However, their theoretical analysis does not account for the approximation errors introduced by the sampling procedure and the best-response approximation. In contrast, our error bounds encompass all approximations performed. Furthermore, their analysis is restricted to auctions with independent prior distributions. Theirs is the first work to give ε-BNE guarantees for general auctions that hold for the continuous game. However, for non-unit-demand auctions they make specific assumptions on the strategy profiles: the strategies need to be piecewise constant, so they require access to the equilibrium strategies and distort them to be piecewise constant. Known equilibrium strategies do not fall into this category, and the distortion in itself decreases the precision of the strategies. While our approach can likewise be used to find and verify equilibrium strategies for a given auction, we can provide guarantees by observing samples only, without explicit access to the strategies or distorting them; that is, our bounds apply to the utility loss an agent suffers under its currently applied strategy.
Ex post incentive-compatible mechanism design via deep learning The field of automated mechanism design often aims to compute an optimal mechanism that is incentive compatible for a given setting, under the assumption that the auctioneer knows the prior distribution. Earlier methods focused on integer programming techniques, so that a certain level of incentive compatibility could be verified <cit.>; however, this approach is inherently restricted in scalability. In recent years, deep learning approaches to design auction mechanisms have received significant attention <cit.>. These efforts aim to design mechanisms that are nearly incentive compatible by incorporating constraints into the deep learning optimization problem. These constraints enforce the mechanism to be ex post incentive compatible over a set of buyer values sampled from the prior distribution. Essentially, they seek to identify a mechanism where a bidder has approximately no incentive to conceal its valuation, regardless of the reported valuations of the opponents—a property that does not hold for most mechanisms used in practice. <cit.> offer a concentration bound to empirically assess the violation of incentive compatibility. However, this bound presumes that the ex post violation can be precisely determined, an assumption not met by their methodology. <cit.> address this issue by linearizing the learned neural network, effectively reducing the problem to an integer program that allows for an accurate estimation of the error. <cit.> use deep learning to learn auction mechanisms within randomized affine maximizer auctions, a class within which each mechanism is exactly incentive compatible. The concept of ex post incentive compatibility is a worst-case, distribution-independent notion focused on the utility gain from truthful bidding. This contrasts with our objectives, as we aim to provide ex interim and ex ante guarantees, where agents have no incentive to deviate from their current strategy—which might not be truthful—averaged over the opponents' type distribution. § PRELIMINARIES This section introduces the formal model and results from learning theory that are useful for our purposes. §.§ The model We model an auction as a Bayesian game G=(n, 𝒜, Θ, 𝒪, u, F ). Here n ∈ℕ denotes the number of agents. F denotes an atomless prior distribution over the agents' observations 𝒪 = ∏_i ∈ [n]𝒪_i and valuations Θ = ∏_i ∈ [n]Θ_i, and is assumed to be common knowledge. We denote its marginals by F_θ_i, F_o_i, etc.; its conditionals by F_θ_i|o_i, etc. An agent i receives its private observation o_i ∈𝒪_i, and chooses an action or bid b_i ∈𝒜_i based on it. The joint bidding space is denoted by 𝒜 = ∏_i ∈ [n]𝒜_i. The sets Θ_i contain the agents' “true”, but possibly unobserved, valuations. This formulation allows us to model interdependencies and correlations beyond purely private or common values.
The vector u = (u_1, …, u_n) describes the individual (ex post) utility functions u_i: Θ_i ×𝒜→ℝ that map a valuation θ_i ∈Θ_i and bid profile b ∈𝒜 to a game outcome for each agent. The game consists of three distinct stages. During the ex-ante stage, that is, before the game, agents have only knowledge about F. In the ex interim stage, each agent observes o_i which provides information about its valuation θ_i. After submitting a bid b_i, an agent receives the ex post information about the game outcome u_i(θ_i, b). In the ex ante stage, an agent needs to reason about a strategy β_i:𝒪_i →𝒜_i that maps observations to bids. We denote agent i's pure strategy space by Σ_i = {β_iβ_i: 𝒪_i →𝒜_i}. The joint strategy space is then denoted by Σ = ∏_i ∈ [n]Σ_i. In this work, we are particularly concerned with the agents' bidding distributions. That is, the distribution of bids β_i(o_i^') for an agent i, where o_i^'∼ F_o_i and β_i ∈Σ_i. We denote the distribution with mapped bids under strategy profiles β_i, β_-i, β by F^β_i, F^β_-i, and F^β, respectively. We define the ex interim utility for agent i and observation o_i as û_i(o_i, b_i, β_-i) := 𝔼_θ_i, o_-i|o_i[ u_i(θ_i, b_i, β_-i(o_-i) ) ], and the ex ante utility as ũ_i(β_i, β_-i) := 𝔼_o_i[û_i(o_i, β_i(o_i), β_-i) ]. We focus on sealed-bid auctions involving m distinct items. In combinatorial auctions, this results in a set 𝒦 representing all possible bundles, with valuation and action spaces of size |𝒦| = 2^m. An auction's outcome, given a bid profile b, is determined by an auction mechanism M that decides on two things: the allocation x = x(b) = (x_1, …, x_n) with x_i ∈{0, 1}^|K|, dividing the m items among bidders, and the price vector p(b) ∈ℝ^n, indicating the cost for each bidder to claim their items. When considering a specific mechanism M, we denote the respective utility function for bidder i by u_i, M. In a typical risk-neutral model, bidders' utilities u_i, M are captured by quasilinear payoff functions u_i, M(θ_i, b) = x_i(b) ·θ_i - p_i(b). Furthermore, we assume[Our error bound increases by a multiplicative factor of H if the range of utility functions is [-H, H] instead of [-1, 1].] the utility functions (u_1, M, …, u_n, M ) map to the bounded interval [-1, 1]. We are interested in the game-theoretic solution concept of an approximate Bayesian Nash Equilibrium (BNE) <cit.>. Let M be a mechanism and G=(n, 𝒜, Θ, 𝒪, u_i, M, F ) a corresponding Bayesian game. Let ε≥0, then, a strategy profile (β_i^*, β_-i^*) is an ex ante ε-BNE if, for all i ∈ [n], sup_β_i^'∈Σ_iũ_i, M(β_i^', β_-i^*) - ũ_i, M(β_i^*, β_-i^*) ≤ε. A so-called ex interim ε-BNE is given if, for all i ∈ [n] and o_i ∈𝒪_i, sup_b_i ∈𝒜_iû_i, M(o_i, b_i, β_-i^*) - û_i, M(o_i, β_i^*(o_i), β_-i^*) ≤ε. Clearly, if we have an ex interim ε-BNE, then this also constitutes an ex ante ε-BNE. This may not hold the other way around, as it is only guaranteed that the utility gained through deviating to another strategy is bounded by ε in expectation. For some observations o_i, it may be strictly larger. The ex ante utility loss is a metric to measure the loss of an agent by playing β_i instead of a best-response to the opponents' strategies β_-i <cit.>. It expresses the distance to an approximate ex ante equilibrium and is defined as ℓ̃_i(β_i, β_-i) := sup_β^'_i ∈Σ_iũ_i, M(β^'_i, β_-i) - ũ_i, M(β_i, β_-i). Similarly, the ex interim utility loss measures the loss of agent i for observation o_i ∈𝒪_i of playing b_i instead of an ex interim best-response to β_-i. 
It is given by ℓ̂_i(o_i, b_i, β_-i) := sup_b_i^'∈𝒜_iû_i, M(o_i, b_i^', β_-i) - û_i, M(o_i, b_i, β_-i). We assume access to a dataset of independent samples from the unknown prior distribution F in the following form. We sample a dataset 𝒟, comprising ex interim and ex post data for each agent. 𝒟 consists of N ∈ℕ tuples of observations and valuations, 𝒟 = {(o^(j), θ^(j)) = ((o_1^(j), … o_n^(j)), (θ_1^(j), …, θ_n^(j)))o^(j)∈𝒪, θ^(j)∈Θ for 1≤ j ≤ N}. Either by accessing the strategy profile β or by observing the bids for each data point in 𝒟, we obtain the full dataset of observations, valuations, and bids. This dataset is denoted by 𝒟^β := {(o^(j), β(o^(j)), θ^(j))(o^(j), θ^(j)) ∈𝒟}. §.§ Concentration bounds from learning theory We introduce a distribution-independent concentration bound that allows us to approximate the expected utilities by an empirical mean over 𝒟^β. It is grounded in the learning theoretic concept of the pseudo-dimension. This concept captures the inherent complexity of a function class, essentially reflecting how challenging it is to learn. The pseudo-dimension is defined as follows: Let ℱ⊂{f: 𝒳→ [-1, 1]f measurable.} be an abstract class of functions. Further, let 𝒮 = {x^(1), …, x^(N)}⊂𝒳 and {z^(1), …, z^(N)}⊂ [-1, 1] be a set of targets. We say that {z^(1), …, z^(N)} witness the shattering of 𝒮 by ℱ if for all subsets T ⊂𝒮, there exists some function f_T ∈ℱ such that for all x^(j)∈ T, f_T(x^(j)) ≤ z^(j) and for all x^(j)∉ T, f_T(x^(j)) > z^(j). If there exists some vector 𝐳∈ [-1, 1]^N that witnesses the shattering of 𝒮 by ℱ, then we say that 𝒮 is shatterable by ℱ. Finally, the pseudo-dimension of ℱ, denoted by Pdim(ℱ), is the size of the largest set that is shatterable by ℱ. A standard result for an abstract generalization bound is provided by the next theorem. Let Φ be a distribution over 𝒳 and ℱ⊂{f: 𝒳→ [-1, 1]f measurable.}. Set d=Pdim(ℱ), then, with probability 1-δ over a draw x^(1), …, x^(N)∼Φ, for all f ∈ℱ, it holds that 1/N∑_j=1^Nf(x^(j)) - 𝔼_x∼Φ[f(x)] ≤ 2√(2d/Nlog( e N/d)) + √(2/Nlog(1/δ)). § CHALLENGES FOR SAMPLING-BASED EQUILIBRIUM VERIFICATION In this section, we outline our general approach and discuss some of the key challenges that must be overcome to articulate a statement of the following nature: For a strategy profile β=(β_i, β_-i), with probability 1 - δ over the draw of the dataset 𝒟^β, for any agent i ∈ [n], one can guarantee, (1) for all observations o_i ∈𝒪_i, ℓ̂_i(o_i, β_i(o_i), β_-i) ≤ε̂ or (2) ℓ̃_i(β_i, β_-i) ≤ε̃. We want to give upper bounds ε̂ or ε̃ that are as tight as possible to the true values of the utility losses. However, computing the utility losses ℓ̂_i and ℓ̃_i is intractable in general. This difficulty arises from two major challenges. First, one cannot evaluate the expected utilities for even a single instance due to the potentially intractable nature of the integrals involved. Second, computing the best-response utilities requires a search over an infinite space. To overcome these obstacles, our approach is twofold. We approximate the integrals by calculating the empirical mean of the ex post utility u_i, M over suitable subsets of the dataset 𝒟^β. This process is referred to as the simulation step. To address the issue of searching over an infinite space for the best-response utilities, we constrain this search to a finite set, a method we denote as the discretization step. To this end, let w>0. We then consider a so-called w-grid 𝒢_w ⊂𝒜_i. 
That is, 𝒢_w is a finite set such that for every p ∈𝒜_i there exists a p^'∈𝒢_w such that ‖ p - p^'‖_1 ≤ w. We illustrate our approach by detailing the steps involved in estimating agent i's ex interim utility loss ℓ̂_i(o_i, β_i(o_i), β_-i) for an observation o_i and strategy profile β through an explicit example. Consider a first-price single-item auction with two bidders and independent priors. The two bidders receive their true valuations as observations, that is, o_i = θ_i. We set Θ_i = 𝒜_i=[0, 1]. The utility function for agent 1 is then given by u_1, M(θ_1, b_1, b_2) = 1_{b_1 > b_2}(θ_1 - b_1). The problem of determining the ex interim utility loss for a valuation θ_1 for agent 1 is the optimization problem ℓ̂_1(θ_1, β_1(θ_1), β_2) = sup_b_1 ∈ [0, 1]û_1, M(θ_1, b_1, β_2) - û_1, M(θ_1, β_1(θ_1), β_2) where agent 2 bids according to β_2. Further, consider the dataset of samples of bid queries 𝒟^β_2 = {β_2(θ_2^(1)), …, β_2(θ_2^(N)) } that can be extracted from 𝒟^β. Then, the estimator that searches for a best response of the empirical mean over the finite set 𝒢_w ⊂ [0, 1], as described above, is given by sup_b_1 ∈𝒢_w1/N∑_j=1^N u_1, M(θ_1, b_1, β_2(θ_2^(j))) - u_1, M(θ_1, β_1(θ_1), β_2(θ_2^(j))). The challenge presented in the aforementioned example centers on mitigating the estimation error between Equation <ref> and its approximation through Equation <ref>. To address this, we propose to limit the approximation error associated with employing the empirical mean (simulation step) by applying a classic learning-theoretic concentration bound. However, controlling the error introduced during the discretization step–namely, restricting the search to a finite set–proves more challenging due to the discontinuous nature of the utility functions. A minor variation in the bid b_1 can affect the allocation outcome, thereby causing abrupt shifts in agent 1's utility. This discontinuity poses a particular problem when working with finite precision w, as it might prevent agent 1 from achieving substantial improvements in utility due to the granularity of the bid increments. We control this by considering the concept of dispersion. Let f_1, …, f_N: ℝ^d →ℝ be a set of functions where each f_i is piecewise Lipschitz with respect to the ℓ_1-norm over a partition 𝒫_i of ℝ^d. We say that 𝒫_i splits a set A ⊆ℝ^d if A intersects with at least two sets in 𝒫_i. The set of functions is (w, v)-dispersed if for every point p∈ℝ^d, the ball {p^'∈ℝ^d: ‖ p-p^'‖_1 ≤ w} is split by at most v of the partitions 𝒫_1, …, 𝒫_N. Dispersion quantifies the number of discontinuities present within any given ball of width w. The larger the value of w and the smaller the value of v, the more “dispersed” the discontinuities of the functions are. For a (w, v)-dispersed set of N functions, at most v jump discontinuities occur within a ball of radius w. Thus, within any ball of radius w, at least N-v functions exhibit L-Lipschitz continuity, while at most v do not. Considering Example <ref>, assume the functions u_1, M(θ_1, ·, β_2(θ_2^(1))), …, u_1, M(θ_1, ·, β_2(θ_2^(N))) are (w, v)-dispersed. Then, for any b_1, b_1^'∈ [0, 1] with ‖ b_1 - b_1^'‖_1 ≤ w, we can bound the difference by |1/N∑_j=1^N u_1, M(θ_1, b_1, β_2(θ_2^(j))) - u_1, M(θ_1, b_1^', β_2(θ_2^(j)))| ≤ (N - v)/N L w + 2v/N ‖ u_1, M‖_∞. For sufficiently small v, the error is small. Therefore, if we can ensure that the discontinuities are sufficiently dispersed with high probability, the error from searching over 𝒢_w can be controlled. The approach we have outlined aligns with the work of <cit.>.
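A minimal sketch of this estimator for the example above may make the two steps concrete: the ex interim utility is replaced by an empirical mean over sampled opponent bids (simulation step) and the best response is searched over a w-grid (discretization step). The opponent strategy, the prior, and all names below are our own illustrative assumptions, not the paper's setup.

import random

def u1(theta1, b1, b2):
    # first-price, single item: bidder 1 wins iff b1 > b2 and pays its own bid
    return (theta1 - b1) if b1 > b2 else 0.0

random.seed(0)
N, w = 10_000, 0.01
beta2 = lambda t: t / 2.0                                  # assumed opponent strategy (illustrative)
opp_bids = [beta2(random.random()) for _ in range(N)]      # samples from the bid distribution F^{beta_2}

def emp_utility(theta1, b1):
    # empirical mean over the sampled opponent bids (simulation step)
    return sum(u1(theta1, b1, b2) for b2 in opp_bids) / N

theta1 = 0.8
b_current = theta1 / 2.0                                   # bidder 1's current bid under beta_1(t) = t/2
grid = [k * w for k in range(int(1 / w) + 1)]              # w-grid over the action space [0, 1]
loss_hat = max(emp_utility(theta1, b) for b in grid) - emp_utility(theta1, b_current)
print(loss_hat)   # estimated ex interim utility loss at theta1 (the error terms come on top)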
However, in our scenario, an agent must reason about the opponents' bid distribution rather than the prior distribution directly. Next, we discuss the additional considerations necessary for this. §.§ Sufficient properties for strategies to be verifiable We address the question of identifying the kinds of bidding strategies that can be effectively verified using the approach outlined above. To achieve meaningful dispersion guarantees, we discuss specific sufficient conditions of regularity for strategies to be verifiable. We observed that the concept of dispersion hinges on a sufficient spread of discontinuities, implying in our context that the opponents' bidding distribution F^β_-i should not be too concentrated. Initially, we assume the prior distribution F to be κ-bounded, meaning it possesses a κ-bounded density function ϕ, that is, sup_x ϕ(x) ≤κ. However, the bidding distribution F^β_-i may still exhibit concentration even if the prior distributions do not. We demonstrate that a bounded prior distribution remains bounded under a bidding strategy if the bidding strategy is sufficiently smooth and changes at a minimal rate with the received observation. More specifically, we demand the bidding strategies to be bi-Lipschitz continuous. Let 𝒳, 𝒴⊂ℝ^m. A bijective function g: 𝒳→𝒴 is said to be (L_g, L_g^-1)-bi-Lipschitz if g is L_g-Lipschitz and its inverse g^-1 is L_g^-1-Lipschitz, that is, for all x_1, x_2 ∈𝒳 and y_1, y_2 ∈𝒴, ‖ g(x_1) - g(x_2)‖≤ L_g ‖ x_1 - x_2‖ and ‖ g^-1(y_1) - g^-1(y_2)‖≤ L_g^-1‖ y_1 - y_2‖. The bi-Lipschitz continuity ensures that neither the function nor its inverse can change arbitrarily fast. More precisely, our approximation results in the following sections are valid for the set of bidding strategies for an agent i, defined as Σ̃_i := {β_i∈Σ_i | β_i is continuously differentiable and bi-Lipschitz continuous}. For a strategy β_i ∈Σ̃_i, we denote the bi-Lipschitz constants by L_β_i, L_β_i^-1. Further, define L_β^-1_max := max_t ∈ [n] L_β^-1_t. We leverage the properties of bi-Lipschitz functions to bound the density function of the bidding distribution. Let 𝒪_i ⊂ℝ^m for all i ∈ [n]. Denote by ϕ_F_o_i and ϕ_F_o_i, o_j the density functions of the marginal prior distributions F_o_i and F_o_i, o_j for any i, j ∈ [n]. Further assume that ϕ_F_o_i and ϕ_F_o_i, o_j are κ-bounded density functions for some κ > 0. Further, let (β_i, β_-i) ∈Σ̃ be a strategy profile of bi-Lipschitz continuous bidding strategies. Then, the probability density functions of the bidding distributions F^β_i_o_i and F^β_i, β_j_o_i, o_j satisfy sup_b_i∈β_i(𝒪_i)ϕ_F^β_i_o_i(b_i) ≤κ· L_β_i^-1^m and sup_(b_i, b_j) ∈β_i(𝒪_i) ×β_j(𝒪_j)ϕ_F^β_i, β_j_o_i, o_j(b_i, b_j) ≤κ· L_β_i^-1^m · L_β_j^-1^m. By the definition of bi-Lipschitz continuity, the function β_i : 𝒪_i →β_i (𝒪_i) is invertible for any i ∈ [n]. We perform a change of variables and get for b_i ∈β_i(𝒪_i ) ϕ_F^β_i_o_i(b_i) = ϕ_F_o_i(β_i^-1(b_i) ) · |det(𝒥_β_i^-1(b_i))| ≤κ· L_β_i^-1^m, where m denotes the dimension of 𝒪_i. We used a well-known bound on a bi-Lipschitz mapping's Jacobian determinant in the second step. The case of two agents i, j ∈ [n] is similar and additionally leverages a property of the determinant for block matrices. The full proof is in Appendix <ref>. To illustrate the effect, Figure <ref> shows how the density function of a beta-distribution Beta(2, 5) is transformed under different strategies. Linear transformations such as θ_i ↦1/2θ_i restrict the bidding space to [0, 1/2], which compresses the density and leads to a higher maximum value.
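A quick numerical check of the density bound for the linear case just mentioned: under the strategy β_i(θ_i) = θ_i/2 and a Beta(2, 5) prior (whose density is bounded by κ ≈ 2.46), the bid density should stay below roughly κ · L_β_i^-1 = 2κ ≈ 4.92. The sketch below, with our own illustrative choices, estimates the maximum of the bid density from samples.

import random

random.seed(0)
N, bins = 200_000, 200
bids = [random.betavariate(2, 5) / 2.0 for _ in range(N)]   # beta_i(theta_i) = theta_i / 2

counts = [0] * bins
for b in bids:
    counts[min(int(b * bins), bins - 1)] += 1
max_density = max(counts) * bins / N    # histogram estimate of the sup of the bid density on [0, 1]
print(max_density)                      # about 4.9, consistent with kappa * L_{beta_i^{-1}}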
The mapping θ_i ↦θ_i^2 leads to an unbounded density, which can occur because its inverse is not Lipschitz continuous. However, the bid density under the mapping θ_i ↦θ_i^3/2 remains bounded, even though the mapping itself is not bi-Lipschitz continuous. The prior density assigns a high mass to valuations close to zero, but the strategy increases rapidly enough to redistribute a significant amount of mass away from zero. These examples underscore that while our assumptions provide a sufficient condition for the verification of bidding strategies, our findings may extend to a broader class of bidding strategies and prior distributions. This raises the question of how restrictive it is to consider only strategies from Σ̃ for verification. Continuous differentiability is a common assumption met by most function approximation techniques, such as neural networks, making this restriction relatively mild. The restriction to bi-Lipschitz continuous strategies is more stringent. In the one-dimensional case, this translates to the bid strategy being strictly monotonic. Monotonicity is a very common assumption for strategies in auctions, where bids are assumed not to decrease with a rising valuation <cit.>. In the standard model that we consider here, strict monotonicity remains a reasonable assumption. Furthermore, known equilibrium strategy profiles commonly fall within this set Σ̃ <cit.>. Nevertheless, under model assumptions such as reserve prices or budget constraints, bidders may resort to constant bidding for a range of observations, so that their strategies do not lie within Σ̃. § VERIFYING APPROXIMATE EQUILIBRIUM UNDER INDEPENDENT PRIORS In this section, we give guarantees for the maximum ex interim utility loss for a given strategy profile β under the common assumption of independent prior valuations <cit.>. Specifically, we consider an auction G=(n, 𝒜, Θ, 𝒪, u, F ), simplifying several aspects. For all agents i ∈ [n], we assume 𝒜_i = 𝒪_i = Θ_i, and a drawn observation o_i equals the true valuation θ_i, allowing us to omit the observation space entirely. Furthermore, the prior distribution simplifies to a product distribution over the valuation spaces, that is, F = ∏_i ∈ [n] F_θ_i. A bidding strategy β_i is then a mapping from agent i's valuation space onto itself, β_i: Θ_i →Θ_i. In the ex interim stage, agent i must reason about the opponents' bid distribution F^β_-i given its valuation θ_i. We organize the dataset 𝒟^β as follows: denote by 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) } the dataset of agent i's opponents' bids. Then 𝒟^β_-i consists of i.i.d. samples from F^β_-i. §.§ A sampling-based bound on the ex interim utility loss via grid search We start with the sampling step of our approach, presenting a result to estimate the ex interim utility loss by considering the empirical mean instead of the expectation. We use a classical PAC-learning result (Theorem <ref>) to bound the error incurred by taking the empirical mean compared to evaluating the integral, demonstrating that this error converges towards zero as the number of samples N increases. For mechanism M and agent i, define the class of functions that map opponent bids to utility by ℱ̂_i, M := {u_i, M(θ_i, θ̂_i, ·): Θ_-i→ [-1, 1] | θ_i, θ̂_i ∈Θ_i}. Let δ > 0, M be a mechanism, and β∈Σ.
Then, it holds with probability 1-δ for all agents i ∈ [n] over the draw of datasets 𝒟^β_-1, …, 𝒟^β_-n of valuation-bid queries, sup_θ_i ∈Θ_iℓ̂_i(θ_i, β_i(θ_i), β_-i) = sup_θ_i, θ̂_i ∈Θ_iû_i, M(θ_i, θ̂_i, β_-i) - û_i, M(θ_i, β_i(θ_i), β_-i) ≤sup_θ_i, θ̂_i ∈Θ_i1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) + ε̂_i, Pdim(N, δ), ε̂_i, Pdim(N, δ) := 4√(2d_i/Nlog( e N/d_i)) + 2√(2/Nlog(2n/δ)), d_i=Pdim(ℱ̂_i, M). Fix an arbitrary agent i ∈ [n]. Then we have with 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) } that β_-i(θ_-i^(j)) ∼ F^β_-i_θ_-i is i.i.d. for 1 ≤ j ≤ N. Therefore, by applying Theorem <ref>, we have with probability at least 1 - δ/2 for all u_i, M(θ_i, θ̂_i, ·) ∈ℱ̂_i, M that |1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - 𝔼_β_-i(θ_-i)∼ F^β_-i[u_i, M(θ_i, θ̂_i, β_-i(θ_-i))]| ≤1/2ε̂_i, Pdim(N, nδ). We apply this to the pairs (θ_i, θ̂_i) and (θ_i, β_i(θ_i)). A union bound over the agents finishes the statement. The full proof is in Appendix <ref>. The statement is similar to Theorem 3.2 from <cit.>. The key difference lies in the observation that one can average over the opponents' bidding distribution F^β_-i instead of the opponents' prior distribution F_θ_-i. The ex ante utility loss can be approximated by the empirical mean in a similar form: let δ > 0, M be a mechanism, G=(n, {Θ_i}_i ∈ [n], {Θ_i}_i ∈ [n], {u_i, M}_i ∈ [n], F ) be a corresponding auction with independent private valuations, β∈Σ a strategy profile, and 𝒟^β a dataset with splits (𝒟^β_1, 𝒟^β_-1), …, (𝒟^β_n, 𝒟^β_-n) of valuation-bid queries. Then, it holds with probability 1-δ for all agents i ∈ [n], ℓ̃_i(β_i, β_-i) = sup_β^'_i ∈Σ_iũ_i, M(β^'_i, β_-i) - ũ_i, M(β_i, β_-i) ≤1/N∑_j=1^N (sup_b_i ∈Θ_i1/N∑_l=1^N u_i, M(θ_i^(j), b_i, β_-i(θ_-i^(l))) ) - u_i, M(θ_i^(j), β_i(θ_i^(j)), β_-i(θ_-i^(j))) + ε̃_i(N, δ), where ε̃_i(N, δ) = 2√(2d_i/Nlog( e N/d_i)) + 3√(2/Nlog(2n/δ)), and d_i=Pdim(ℱ_i, M) with ℱ_i, M := {u_i, M(·, b_i, ·): Θ_i ×Θ_-i→ [-1, 1] | b_i ∈Θ_i}. We proceed with the discretization step of our procedure. For this purpose, we assume Θ_i = [0, 1]^m for some suitable m ∈ℕ. Let 𝒢_w ⊂Θ_i be a w-grid for w>0, that is, every point of Θ_i lies within ℓ_1-distance w of the grid. To bound the error incurred by restricting the search to a finite grid, we assume a certain degree of dispersion, as discussed in Section <ref>. Suppose that for mechanism M and each agent i ∈[n], there exist L_i, w_i ∈ℝ and a function v_i: ℝ→ℝ, such that with probability 1-δ over the draw of the n sets 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) }, the following conditions hold: * For any valuation θ_i ∈[0,1]^m, the functions u_i, M(θ_i, ·, β_-i(θ_-i^(1))), …, u_i, M(θ_i, ·, β_-i(θ_-i^(N))) are piecewise L_i-Lipschitz and (w_i, v_i(w_i ))-dispersed. * For any reported θ̂_i ∈[0,1]^m, the functions u_i, M(·, θ̂_i, β_-i(θ_-i^(1))), …, u_i, M(·, θ̂_i, β_-i(θ_-i^(N))) are piecewise L_i-Lipschitz and (w_i, v_i(w_i ))-dispersed. The constants w_i and v_i(w_i ) will be properties resulting from the interplay of the utilized mechanism M, the prior distribution F, the opponents' strategy profile β_-i, and the number of drawn samples. Under the assumption that the dispersion guarantees hold, we can provide the following guarantee on the ex interim utility loss. The full proof is in Appendix <ref>. Let δ > 0 and M be a mechanism. Furthermore, let β∈Σ̃ be a strategy profile.
Given that Assumption <ref> holds for w_i >0, v_i(w_i), and v_i(L_β_iw_i), we have with probability at least 1 - 3δ over the draw of the datasets 𝒟^β_-1, …, 𝒟^β_-n for every agent i ∈ [n] sup_θ_i ∈Θ_iℓ̂_i(θ_i, β_i(θ_i), β_-i) = sup_θ_i, θ̂_i ∈Θ_iû_i, M(θ_i, θ̂_i, β_-i) - û_i, M(θ_i, β_i(θ_i), β_-i) ≤sup_θ_i, θ̂_i ∈𝒢_w_i1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j)))+ε̂_i, ε̂_i:=4√(2d_i/Nlog( e N/d_i)) + 2√(2/Nlog(2n/δ)) + 3ε̂_i, disp(w_i) + ε̂_i, disp(L_β_iw_i), ε̂_i, disp(x) := (N - v_i(x))/N L_i x + 2 v_i(x)/N, and d_i=Pdim(ℱ̂_i, M). Fix an agent i ∈ [n]. By the definition of dispersion, we have with probability at least 1 - δ, for all i ∈ [n], θ_i ∈Θ_i, and reported valuations θ̂_i, θ̂_i^'∈Θ_i with ‖θ̂_i - θ̂_i^'‖_1 ≤ x, that |1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, θ̂_i^', β_-i(θ_-i^(j)))| ≤ε̂_i, disp(x); and for all i ∈ [n], reported valuations θ̂_i ∈Θ_i, and θ_i, θ_i^'∈Θ_i with ‖θ_i - θ_i^'‖_1 ≤ x, that |1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i^', θ̂_i, β_-i(θ_-i^(j)))| ≤ε̂_i, disp(x). For any θ_i ∈Θ_i, there exists a grid point p ∈𝒢_w_i such that ‖θ_i - p‖_1 ≤ w_i and ‖β_i(θ_i) - β_i(p)‖_1 ≤ L_β_iw_i. We apply Equations <ref> and <ref> for the grid width w_i and the stretched grid width L_β_iw_i. The statement follows with an application of Theorem <ref> and a suitable union bound. The above result is similar to Theorem 3.5 of <cit.>—which assumed truthful bidding—with the distinction that we need to ensure the dispersion of the utility functions under the opponents' bidding distribution. Additionally, it is necessary to consider the potential distortion of the grid 𝒢_w_i under the bidding strategies. §.§ Dispersion guarantees for independent prior distributions We now show how to give dispersion guarantees under strategic bidding, which constitutes the first of our main contributions. To conclude the treatment of independent prior distributions, we present the dispersion guarantees for the first-price single-item auction as an illustrative example of how to extend the dispersion results from <cit.> to strategic bidding. This provides the constants and assumptions needed to verify an equilibrium strategy profile in a single-item first-price auction with bi-Lipschitz strategies and κ-bounded prior distributions. We refer to Section <ref> for a full list of dispersion and pseudo-dimension guarantees that hold for interdependent prior distributions, as the independent prior setting constitutes a special case of these. §.§.§ Dispersion guarantees for the single-item first-price sealed-bid auction In the first-price auction, the highest bidder wins the item and pays its bid. In the case of independent prior distributions, each agent i has a valuation θ_i ∈ [0, 1] for the item and submits a bid b_i ∈ [0, 1] that may shade its valuation. The utility function for agent i is then given by u_i, M(θ_i, b_i, b_-i) = 1_{b_i > ‖ b_-i‖_∞}(θ_i - b_i). Assume every agent i ∈ [n] has a κ-bounded marginal prior distribution F_θ_i. Let β∈Σ̃ be a strategy profile of bi-Lipschitz bidding strategies, where L_β_i^-1 denotes the Lipschitz constant of β_i^-1.
With probability 1 - δ for all agents i ∈ [n] over the draw of the n datasets 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) }, * For any θ_i∈ [0, 1], the functions u_i, M(θ_i, ·, β_-i(θ_-i^(1))), …, u_i, M(θ_i, ·, β_-i(θ_-i^(N))) are piecewise 1-Lipschitz and (w_i, v_i(w_i ))-dispersed with v_i(w_i ) := (n-1)w_i N κmax_t ∈ [n] ∖{i} L_β_t^-1 + (n-1) √(2N log(2n(n-1)/δ)) + 4(n-1) √(N log(eN/2)). * For any b_i ∈ [0, 1] and b_-i∈ [0, 1]^n-1, the function u_i, M(·, b_i, b_-i) is 1-Lipschitz continuous. We start with the first part of the statement. Consider i ∈ [n] and β_-i(θ_-i^(j)) ∈𝒟^β_-i arbitrary. For any θ_i ∈Θ_i and bid b_i ∈Θ_i, we have by the definition of the first-price auction u_i, M(θ_i, b_i, β_-i(θ_-i^(j))) = 1_{b_i > ‖β_-i(θ_-i^(j))‖_∞}(θ_i - b_i). Therefore, if b_i ≤‖β_-i(θ_-i^(j))‖_∞, then u_i, M(θ_i, b_i, β_-i(θ_-i^(j))) is a constant function in b_i. On the other hand, if b_i > ‖β_-i(θ_-i^(j))‖_∞, then u_i, M(θ_i, b_i, β_-i(θ_-i^(j))) is linear in b_i with a slope of -1. Consequently, for all θ_i ∈Θ_i and β_-i(θ_-i^(j)) ∈𝒟^β_-i, the function u_i, M(θ_i, ·, β_-i(θ_-i^(j))) is piecewise 1-Lipschitz continuous with a discontinuity at ‖β_-i(θ_-i^(j))‖_∞. We proceed with the dispersion constants (w_i, v_i(w_i )). As seen above, the function u_i, M(θ_i, ·, β_-i(θ_-i^(j))) can only have a discontinuity at a point in the set {β_l(θ_l^(j)) }_{l ∈ [n]∖{i}}. Therefore, it is sufficient to guarantee that, with probability 1 - δ/n, at most v_i(w_i ) points in the set 𝒞 := ⋃_j=1^N {β_l(θ_l^(j)) }_{l ∈ [n]∖{i}} fall within any interval of width w_i. The statement then follows from a union bound over the n bidders. We apply Lemma <ref> in Appendix <ref> to show this statement. For l ∈ [n]∖{i}, define 𝒞_l := {β_l(θ_l^(j)) }_{j ∈ [N]}. Then, within each 𝒞_l, the samples are independently drawn from the bidding distribution F^β_l_θ_l. Per assumption, the marginal prior F_θ_l is a κ-bounded distribution. By Theorem <ref>, the bidding distribution's density function satisfies ‖ϕ_F^β_l_θ_l‖_∞≤ L_β_l^-1·κ≤max_t ∈ [n] ∖{i} L_β_t^-1·κ. Therefore, the samples β_l(θ_l^(j)) are drawn from a κmax_t ∈ [n] ∖{i} L_β_t^-1-bounded distribution. Therefore, with probability at least 1 - δ/n, every interval of width w_i contains at most v_i(w_i ) = (n-1) w_i κmax_t ∈ [n] ∖{i} L_β_t^-1 N + (n-1) √(2N log(2n(n-1)/δ)) + 4(n-1) √(N log( eN/2)) points. The second statement can be seen as follows. For any given bids b_i, b_-i, the allocation is fixed. Therefore, u_i, M(·, b_i, b_-i) is either constant if b_i ≤‖ b_-i‖_∞ or linear with slope 1 if b_i > ‖ b_-i‖_∞. § VERIFYING APPROXIMATE EQUILIBRIUM UNDER INTERDEPENDENT PRIORS We present the first, to our knowledge, sampling-based results to verify approximate equilibrium with interdependent prior distributions. We limit our focus to ex ante guarantees. In this setting, from agent i's perspective, for two distinct received observations o_i and o_i^', it must consider two different conditional prior distributions F^β_-i_θ, o_-i|o_i and F^β_-i_θ, o_-i|o_i^'. For j ∈ [N], a sample (o^(j), θ^(j), β(o^(j)) ) from 𝒟^β can be interpreted as a draw (o_-i^(j), θ_i^(j), β_-i(o_-i^(j)) ) ∼ F^β_-i_θ, o_-i|o_i^(j). However, the probability that there is another l ≠ j such that o_i^(l) = o_i^(j) is zero. Therefore, we cannot implement the sampling step in the same manner as we did in Section <ref>. We address this challenge by considering a partition ℬ_i={B_1, …, B_N_ℬ_i} of 𝒪_i for each agent i ∈ [n]. Denote the maximum number of elements in any partition by N_ℬ_max := max_i ∈ [n] N_ℬ_i.
We demonstrate that it is sufficient to assume a constant best-response for each B_k ∈ℬ_i if the conditional distribution F_θ, o_-io_i does not vary too strongly for o_i ∈ B_k, according to an appropriate distance measure over the space of probability distributions. With this premise, we establish that one can group the samples based on o_i ∈ B_k. Subsequently, we present our upper bound ε̃ for the ex ante guarantee by conducting the sampling and discretization step for each B_k ∈ℬ_i. §.§ Bounding best-response utility differences with constant best-responses For a B_k from partition ℬ_i, we want to bound the error incurred when limiting bidding to a constant best response for all o_i ∈ B_k. To achieve this, it is necessary to limit distance between conditional prior distributions F_θ_i, o-io_i and F_θ_i, o-io_i^' according to some distance for o_i, o_i^'∈ B_k. In contrast to finite-dimensional Euclidean spaces, different distance functions can induce vastly different topologies on the space of probability distributions over continuous spaces <cit.>. Therefore, the selection of an appropriate distance measure for this purpose is crucial. Common choices in the machine learning literature for measuring distances between probability distributions include the Wasserstein metric d_W (also known as the earth mover's distance or Kantorovich metric), the total variation metric d_TV, and the Kullback-Leibler divergence d_KL (also referred to as relative entropy). Let μ and ν denote two probability measures over agent i's observation space 𝒪_i. We have the following relationship between these distance measures: d_W(μ, ν) ≤diam(𝒪_i) · d_TV(μ, ν) ≤diam(𝒪_i) ·√(1/2· d_KL(μ, ν)), where diam(𝒪_i) denotes 𝒪_i's diameter. The above inequalities can be strict, and there are no constants so that they may hold in the other direction in general <cit.>. The objective is to furnish guarantees using the weakest possible distance measure. Unfortunately, the Wasserstein metric, seems too weak to provide sufficient guarantees for discontinuous utility functions <cit.>. The Kullback-Leibler divergence, despite its appealing properties, can be unbounded, which poses a limitation for establishing practical guarantees. On the other hand, the total variation distance has the advantage of being upper bounded by one, making it a more suitable choice for our purposes. Therefore, we opt for the total variation distance as the measure to base our guarantees upon. Let μ and ν be two probability measures over ℝ^m and Λ be the Borel-σ-algebra. Then the total variation distance between μ and ν is given by d_TV(μ, ν) := sup_A ∈Λμ(A) - ν(A). - the total variation distance is in general hard to determine - however, in our case, we assume that the prior distribution has a density function, so that we have the following well-known equality Let μ and ν be two probability measures over ℝ^m that are absolutely continuous with regard to the Borel-measure λ. Then, the total variation distance satisfies d_TV(μ, ν) = 1/2∫_ℝ^mϕ_μ(x) - ϕ_ν(x) d λ(x) = 1/2ϕ_μ - ϕ_ν_1, where ϕ_μ and ϕ_ν denote the probability density functions of μ and ν, respectively. We leverage a well-known fact that the distance between two integrals over different probability measures can be bounded by the total variation of these measures <cit.>. This principle enables us to bound differences in the ex interim utility function for different observations. For the sake of completeness, we provide a proof for this statement. Let A ⊂ℝ^m and g: A →ℝ be a bounded function. 
Furthermore, let μ and ν be probability measures over A with density functions ϕ_μ and ϕ_ν. Then, we have ∫_A g(x) d μ(x) - ∫_A g(x) d ν(x)≤ 2 g_∞· d_TV(μ, ν).

The total variation distance is equal to one half of the L^1-distance between the density functions <cit.>, that is, d_TV(μ, ν) = 1/2ϕ_μ - ϕ_ν_1. Therefore, we have ∫_A g(x) d μ(x) - ∫_A g(x) d ν(x)≤g_∞∫_A ϕ_μ(x) - ϕ_ν(x) d λ(x) = 2 g_∞· d_TV(μ, ν).

We show next that the error incurred by assuming a constant best-response for all observations from B_k can be controlled, provided the distance between the conditional prior distributions F_θ_i, o_-io_i and F_θ_i, o_-io_i^' is small enough in terms of the total variation distance for o_i, o_i^'∈ B_k. The full proof is in Appendix <ref>.

Let ℬ_i = {B_1, …, B_N_ℬ_i} be a partition of 𝒪_i. The difference between the best-response utility over the full strategy space and over best-responses that are constant on every B_k satisfies sup_β_i^'∈Σ_iũ_i, M(β_i^', β_-i) - sup_b ∈𝒜_i^N_ℬ_iũ_i, M(∑_k=1^N_ℬ_i b_k 1_B_k, β_-i) ≤ 2 ∑_k=1^N_ℬ_i P(o_i ∈ B_k) τ_i, B_k, with τ_i, B_k := sup_ô_i, ô_i^'∈ B_k d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^'). If there exists a constant L_B_k > 0 such that d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^') ≤ L_B_kô_i - ô_i^' for ô_i, ô_i^'∈ B_k, then τ_i, B_k≤ L_B_kdiam(B_k), where diam(B_k) denotes B_k's diameter.

Fix an agent i ∈ [n] and B_k ∈ℬ_i. We leverage Theorem <ref> to bound the difference of the ex interim best-response utilities for any o_i, o_i^'∈ B_k, sup_b_i ∈𝒜_iû_i, M(o_i, b_i, β_-i) - sup_b_i^'∈𝒜_iû_i, M(o_i^', b_i^', β_-i)≤ 2 u_i, M_∞ d_TV(F_θ_i, o_-io_i, F_θ_i, o_-io_i^'). We extend this relation to constant best-responses for all o_i^'∈ B_k for one of these terms, establishing the bound for a single B_k ∈ℬ_i. Finally, we apply the law of total expectation to formulate this relation for step functions of the form ∑_k=1^N_ℬ_ib_k 1_B_k.

For each B_k, a meaningful upper bound can be established if τ_i, B_k is sufficiently small. A weaker, but potentially easier to determine, bound can be given if there exists an L_B_k > 0 such that d_TV(F_θ_i, o_-io_i, F_θ_i, o_-io_i^') ≤ L_B_ko_i - o_i^' for o_i, o_i^'∈ B_k. This term is directly related to the diameter of B_k, and thus to the number of elements in the partition ℬ_i. Importantly, even if for some B_l ∈ℬ_i the value τ_i, B_l does not admit a bound below one, a non-trivial ex ante upper bound may still be achievable if such a bound does hold for sufficiently many B_k ∈ℬ_i. This makes our results applicable to a wide variety of settings. However, while several closed-form solutions are available for the total variation distance between continuous probability distributions, determining this distance remains hard in general, marking a limitation of our approach. Nevertheless, the growing interest in the total variation distance for applications within machine learning has spurred recent research efforts; for instance, <cit.> proposes methods for upper bounding the total variation distance, offering potential pathways to overcome this challenge.

One could further argue that, instead of computing a constant best-response over a whole element B_k, one should choose a grid of points o_i^' over 𝒪_i and collect a dataset from F| o_i^' for each of these grid points. However, even if we assume access to the prior distribution F, it may be intractable or computationally hard to sample from F| o_i^' directly. Therefore, this approach is infeasible in most practical settings, which is why we group observations into the partition elements B_k instead.
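Since the bounds above are driven by total variation distances between conditional priors, it can be useful to evaluate d_TV numerically when density functions are available but no closed form is known. The following sketch approximates d_TV(μ, ν) = 1/2 ϕ_μ - ϕ_ν_1 for one-dimensional densities by a plain Riemann sum and, as a sanity check, compares the result against the closed-form expression derived for the mineral-rights example in Section <ref>. The grid resolution, the test observations, and the helper names are illustrative assumptions.

```python
import numpy as np

def tv_distance_1d(density_mu, density_nu, lower, upper, num_points=200_000):
    """Approximate d_TV(mu, nu) = 0.5 * ||phi_mu - phi_nu||_1 on [lower, upper]
    by a plain Riemann sum over a fine grid."""
    x = np.linspace(lower, upper, num_points)
    gap = np.abs(density_mu(x) - density_nu(x))
    return 0.5 * gap.mean() * (upper - lower)

def conditional_density(o_i):
    """phi(theta | o_i) = -1 / (theta * log(o_i / 2)) on (o_i/2, 1], zero else,
    as derived for the mineral-rights example in Section <ref>."""
    def phi(theta):
        support = (theta > o_i / 2.0) & (theta <= 1.0)
        return np.where(support, -1.0 / (theta * np.log(o_i / 2.0)), 0.0)
    return phi

o, o_tilde = 0.4, 0.8
numeric = tv_distance_1d(conditional_density(o), conditional_density(o_tilde), 1e-6, 1.0)
closed_form = 1.0 - np.log(o_tilde / 2.0) / np.log(o / 2.0)
print(f"numeric: {numeric:.4f}, closed form: {closed_form:.4f}")  # the two values should roughly agree
```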
§.§ A sampling-based bound on the ex ante utility loss via finite precision step functions

Our goal is to determine, with high confidence, an ε̃_i > 0 for a strategy profile β=(β_i, β_-i) that upper bounds the ex ante utility loss ℓ̃_i(β). Guaranteeing this for every agent i gives a high-confidence certificate for (β_i, β_-i) being an ε̃-BNE, with ε̃:=max_i ∈ [n]ε̃_i. The difficulty comes from two sources. The first is that one cannot directly evaluate the expected utility for a given strategy profile. The second is that the search for a best-response ranges over the infinite set of bidding strategies Σ_i; in full generality, there is no finite-time procedure to determine a best-response. We tackle these issues with two simplifications to estimate the utility loss. First, instead of evaluating the expectation directly, we approximate it via a Monte-Carlo sampling procedure. Second, instead of searching for the maximum utility loss over the infinite set Σ_i, we search over a finite set of step functions, which is specified below.

We assume access to the unknown prior distribution F in the form of the dataset 𝒟^β of N independent samples of observations, valuations, and bids, 𝒟^β := {(o^(j), β(o^(j)), θ^(j))(o^(j), θ^(j)) ∈𝒟}. For a data sample j, an agent i in the ex interim stage has access to its observation o_i^(j) and needs to reason about which bid performs best under the conditional prior distribution F^β_-i| o_i^(j); the tuple (β_-i(o_-i^(j)), θ_i^(j)) constitutes a random draw from F^β_-i| o_i^(j). Evaluating the ex ante utility loss exactly is, in general, intractable, so we work with these samples instead.

Theorem <ref> established that finding a constant best-response for all observations from each element B_k ∈ℬ_i is sufficient. Therefore, we execute the sampling and the discretization step for each B_k ∈ℬ_i; that is, given a B_k ∈ℬ_i, we want to find the best bid b_i ∈𝒜_i for the ex ante utility conditioned on the event {o_i ∈ B_k}. Starting with the sampling step, we categorize the dataset 𝒟^β according to the partition ℬ_i for each agent i. For each 1 ≤ k ≤ N_ℬ_i, we define the conditional samples by 𝒟^β(B_k) := {(o^(j), β(o^(j)), θ^(j)) ∈𝒟^βo^(j)∈ B_k}. Then, 𝒟^β(B_k) constitutes a dataset of draws from F^β|{o_i ∈ B_k }. Denote the complete separation of 𝒟^β according to partition ℬ_i by 𝒟^β(ℬ_i) := {𝒟^β(B_k) 1 ≤ k ≤ N_ℬ_i}.

One advantage of providing ex ante guarantees, as opposed to ex interim guarantees, is the ability to separate the estimation of the best-response utility sup_β^'_i ∈Σ_iũ_i, M(β_i^', β_-i) from the estimation of the ex ante utility ũ_i, M(β_i, β_-i). Therefore, conveniently, we can estimate the ex ante utility using the distribution-independent Hoeffding inequality, eliminating the need to rely on more involved concepts such as the pseudo-dimension or a partitioning of the dataset 𝒟^β. The full proof is in Appendix <ref>.

Let β∈Σ be a strategy profile. With probability 1 - δ over the draw of the dataset 𝒟^β, we have for every agent i ∈ [n] ũ_i, M(β_i, β_-i) - 1/N∑_j=1^N u_i, M(θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) ≤√(2/Nlog(2 n/δ)).

We fix an agent i ∈ [n] and apply Theorem <ref> to u_i, M(θ_i, β_i(o_i), β_-i(o_-i)) with (θ_i, β_i(o_i), β_-i(o_-i) ) ∼ F^β. A union bound over the n agents finishes the statement.

It remains to estimate the best-response utility. For this purpose, we continue with the sampling step of our approach.
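Before doing so, the following minimal sketch illustrates the ex ante estimate just established: it averages the realized utilities over the dataset and reports the Hoeffding radius √(2/N log(2n/δ)) that holds simultaneously for all n agents. The simulated payoffs and the helper names are illustrative assumptions, not part of any mechanism implementation.

```python
import numpy as np

def ex_ante_utility_estimate(payoffs, n_agents, delta):
    """Empirical ex ante utility of one agent plus the Hoeffding radius
    sqrt(2/N * log(2n/delta)) that holds simultaneously for all n agents.

    payoffs : realised utilities u_{i,M}(theta_i^(j), beta_i(o_i^(j)), beta_{-i}(o_{-i}^(j))),
              assumed to lie in [-1, 1].
    """
    payoffs = np.asarray(payoffs, dtype=float)
    n_samples = payoffs.size
    estimate = payoffs.mean()
    radius = np.sqrt(2.0 / n_samples * np.log(2.0 * n_agents / delta))
    return estimate, radius

# toy usage with simulated payoffs
rng = np.random.default_rng(1)
payoffs = rng.uniform(-0.1, 0.4, size=10_000)
est, rad = ex_ante_utility_estimate(payoffs, n_agents=3, delta=0.05)
print(f"u_tilde_i in [{est - rad:.3f}, {est + rad:.3f}] with prob. >= 0.95 for all agents")
```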
For mechanism M and agent i, define the class of functions that map valuations and opponent bids to utility by ℱ̃_i, M:= {u_i, M(, b_i, ): Θ_i ×𝒜_-i→ℝ b_i ∈𝒜_i}. The proof of the following theorem is in Appendix <ref>. With probability 1 - δ over the draw of the n sets 𝒟^β(ℬ_1), …, 𝒟^β(ℬ_n), for partitions ℬ_i = {B_1, …, B_N_ℬ_i} of 𝒪_i for every agent i ∈ [n], we have sup_b_i ∈𝒜_i𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] - sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) ≤ε̃_i, Pdim(N_B_k), ε̃_i, Pdim(N_B_k) := 2√(2 d_i/N_B_klog(e N_B_k/d_i) ) + √(2/N_B_klog(n N_ℬ_max/δ)), d_i := Pdim(ℱ̃_i, M). We proceed with the discretization step to identify a constant best-response for each B_k ∈ℬ_i over the bidding space 𝒜_i. To this end, we assume 𝒜_i = [0, 1]^m for a suitable m ∈ℕ. For a w > 0, denote with 𝒢_w⊂ [0, 1]^m a finite w-grid. We make the following assumption. Suppose that for mechanism M, each agent i ∈[n], and segment B_k ∈ℬ_i, there exist L_i, w_i∈ℝ and a function v_i, B_k: ℝ→ℝ, such that with probability 1-δ over the draw of the sets {𝒟^β(B_k)B_k ∈ℬ_i, i ∈ [n]}, the functions u_i, M(θ_i^(1), , β_-i(θ_-i^(1))), …, u_i, M(θ_i^(N_B_k), , β_-i(θ_-i^(N_B_k))) are piecewise L_i-Lipschitz and ( w_i, v_i, B_k(w_i))-dispersed. Under this assumption, we can provide the following approximation bounds by approximating a best-response over a finite subset of the action space. The proof of the following lemma is conceptually similar to the one of Theorem <ref> and can be found in Appendix <ref>. Let δ > 0, β∈Σ̃ be a strategy profile, and M be a mechanism. Suppose that for each agent i ∈ [n] and segment B_k ∈ℬ_i, Assumption <ref> holds for w_i >0 and v_i(w_i). Then, with probability 1- δ over the draw of the sets {𝒟^β(ℬ_i)i ∈ [n]}, agents i ∈ [n], and segments B_k ∈ℬ_i, sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - max_b_i ∈𝒢_w1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) ≤N_B_k - v_i, B_k(w_i)/N_B_k L_i w_i + 2v_i, B_k(w_i)/N_B_k =: ε̃_i, disp(N_B_k). With this foundation, we can present our main theorem, which combines this section's results to establish an approximation bound on the ex ante utility loss. The proof combines Theorems <ref>, <ref>, and Lemma <ref> and can be found in Appendix <ref>. Let δ > 0 and β∈Σ̃ be a strategy profile. Suppose that for each agent i ∈ [n] and segment B_k ∈ℬ_i, Assumption <ref> holds. Then, with probability 1- 4δ over the draw of the sets {𝒟^β(ℬ_i)i ∈ [n]}, agents i ∈ [n], and segments B_k ∈ℬ_i, ℓ̃_i(β_i, β_-i) = sup_β^'_i ∈Σ_iũ_i, M(β^'_i, β_-i) - ũ_i, M(β_i, β_-i) ≤∑_k=1^N_ℬ_iN_B_k/Nmax_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) + 2 √(2/Nlog(2n/δ)) + ∑_k=1^N_ℬ_iN_B_k/Nmin{1, (τ_i, B_k + ε̃_i, Pdim(N_B_k) + ε̃_i, disp(N_B_k) )}, where τ_i, B_k, ε̃_i, Pdim(N_B_k), and ε̃_i, disp(N_B_k) are the constants defined in Theorems <ref>, <ref>, and Lemma <ref>. with τ_i, B_k = 2 sup_ô_i, ô_i^'∈ B d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^') and ε̃_i, Pdim(N_B_k) = 2√(2 d_i/N_B_klog(e N_B_k/d_i) ) + √(2/N_B_klog(2n N_ℬ_max/δ)), ε̃_i, disp(N_B_k) := N_B_k - v_i, B_k(w_i)/N_B_k L_i w_i + 2v_i, B_k(w_i)/N_B_k. § DISPERSION GUARANTEES FOR INTERDEPENDENT PRIOR DISTRIBUTIONS - we now give dispersion guarantees in combination with strategic bidding for interdependent prior distributions - the provided guarantees for the interdependent prior case also hold for independent priors - this can be seen when considering two things. 
First, one can use the trivial partition ℬ_i = {𝒪_i}, so that the conditioning on B_1 becomes void. Second, we show the dispersion guarantees for the interdependent prior case under a stronger assumption than Assumptions <ref> and <ref>, so that these are also guaranteed to hold. Additionally, for completeness, we report the pseudo-dimension guarantees for the individual mechanisms from <cit.>, which hold without adaptations for the problems we consider; Section <ref> shows why these guarantees carry over to our function classes ℱ̂_i, M and ℱ̃_i, M. Therefore, all results reported in this section are also valid for the bounds presented in Section <ref>. Table <ref> gives an overview of all dispersion and pseudo-dimension guarantees. Due to space restrictions, we restrict ourselves to presenting the precise results for the first-price single-item auction; the remaining theorems are in Appendix <ref>.

Let the mechanism M be the first-price single-item auction. Let (β_i, β_-i) ∈Σ̃ be a strategy profile. Assume that for each agent i ∈ [n] and segment B_k ∈ℬ_i, there exists κ_i, B_k>0, such that the conditional marginal prior distribution F_o_j{o_i ∈ B_k } is κ_i, B_k-bounded. Then, for w_i>0, with probability at least 1 - δ over the draw of the sets {𝒟^β_-i(B_k)B_k ∈ℬ_i, i ∈ [n]} for every i ∈ [n] and B_k ∈ℬ_i, the functions u_i, M(θ_i^(1), ·, β_-i(o_-i^(1)) ), …, u_i, M(θ_i^(N_B_k), ·, β_-i(o_-i^(N_B_k)) ) are piecewise 1-Lipschitz and (w_i, v_i, B_k(w_i) )-dispersed, with v_i, B_k(w_i) := (n-1)w_i κ_i, B_kmax_t ∈ [n] ∖{i}L_β_t^-1· N_B_k + (n-1) √(2 N_B_klog(2n(n-1) N_ℬ_max/δ)) + 4(n-1) √(N_B_klog(e N_B_k/2)).

The first part of the proof is analogous to the proof of Theorem <ref>; we repeat it here for clarity. Consider i ∈ [n] and β_-i(o_-i^(j)) ∈𝒟^β_-i(B_k) arbitrary. For any θ_i ∈Θ_i and bid b_i ∈Θ_i, we have by the definition of the first-price auction u_i, M(θ_i, b_i, β_-i(o_-i^(j))) = 1_{b_i > β_-i(o_-i^(j))_∞}(θ_i - b_i). Therefore, if b_i ≤β_-i(o_-i^(j))_∞, then u_i, M(θ_i, b_i, β_-i(o_-i^(j))) is a constant function in b_i. On the other hand, if b_i > β_-i(o_-i^(j))_∞, then u_i, M(θ_i, b_i, β_-i(o_-i^(j))) is linear in b_i with slope -1. Consequently, for all (θ_i^(j), β_-i(o_-i^(j)) ) ∈𝒟^β_-i(B_k), the function u_i, M(θ_i^(j), , β_-i(o_-i^(j))) is piecewise 1-Lipschitz continuous with a discontinuity at β_-i(o_-i^(j))_∞.

We proceed with the dispersion constants. Fix agent i ∈ [n] and B_k ∈ℬ_i. For any θ_i ∈Θ_i and b_-i∈𝒜_-i, the function u_i, M(θ_i, , b_-i) can only have a discontinuity at a point in the set {b_l }_l ∈ [n] ∖{i}. Therefore, it is sufficient to guarantee, with probability at least 1 - δ/(n N_ℬ_max), that at most v_i, B_k(w_i) points in the set C := ⋃_j=1^N_B_k{β_l(o_l^(j)) }_l ∈ [n] ∖{i} fall within any interval of width w_i. The statement then follows from a union bound over the n bidders and the up to N_ℬ_max segments. We apply Lemma <ref>. For l ∈ [n] ∖{i}, define C_l := {β_l(o_l^(j)) }_j ∈ [N_B_k]. Within each C_l, the samples are independently drawn from the marginal conditional bidding distribution F_o_l{o_i ∈ B_k}^β_l. Per assumption, the conditional marginal prior F_o_l{o_i ∈ B_k} is a κ_i, B_k-bounded distribution. By Theorem <ref>, the conditional bidding distribution's density function satisfies ϕ_F_o_l{o_i ∈ B_k}^β_l_∞≤ L_β^-1_l·κ_i, B_k≤max_t ∈ [n] ∖{i} L_β^-1_t·κ_i, B_k. The samples β_l(o_l^(j)) for 1 ≤ j ≤ N_B_k are thus drawn from a max_t ∈ [n] ∖{i} L_β^-1_t·κ_i, B_k-bounded distribution.
Therefore, with probability at least 1 - δ/(n N_ℬ_max), any interval of width w_i contains at most v_i, B_k(w_i) := (n-1)w_i κ_i, B_kmax_t ∈ [n] ∖{i}L_β_t^-1· N_B_k + (n-1) √(2 N_B_klog(2n(n-1) N_ℬ_max/δ)) + 4(n-1) √(N_B_klog(e N_B_k/2)) samples.

§.§ Verifying an equilibrium with common values

In this section, we apply our results to a setting with correlated prior distributions and verify an equilibrium candidate. We consider a so-called common values setting: each bidder has the same unobserved valuation for the item but independently makes a noisy assessment of its value. This is also known as the "mineral rights" setting, and its properties have been studied extensively in the literature <cit.>. We consider the following specific instance <cit.>: the common valuation θ is drawn uniformly over [0, 1], i.e., θ_i=θ for all i ∈ [n], and each bidder's observation o_i is then independently drawn uniformly from [0, 2 θ]. An equilibrium strategy is known under the second-price auction <cit.>; however, we are not aware of an explicitly known equilibrium strategy for this model under the first-price auction. We proceed by proposing and verifying an equilibrium strategy for this case.

§.§.§ Determining the maximum total variation distance

The first component of the error bound consists of the maximum total variation distance of the opponents' prior distribution over a segment B_k ⊂𝒪_i=[0, 2] for each agent i ∈ [n]; that is, we want to determine sup_o_i, õ_i ∈ B_k d_TV(F_θ, o_-io_i, F_θ, o_-iõ_i). By Lemma <ref>, the total variation distance equals one half of the L^1-distance between the distributions' density functions. Therefore, we start by deriving the conditional density function ϕ_F_θ, o_-io_i: Θ_i ×𝒪_-i→ℝ. To abbreviate the expressions, we abuse notation and denote the density function's value given observation o_i ∈𝒪_i for valuation θ and opponents' observations o_-i by ϕ(θ, o_-io_i); we proceed similarly for the other densities.

The bidders' observations are drawn independently from [0, 2θ] for a common valuation θ. Therefore, the observations o_i and o_j for any two agents i, j ∈ [n] are conditionally independent given the common valuation θ. With this and by the definition of a conditional density <cit.>, we get ϕ(θ, o_-io_i) = ϕ(o_-iθ, o_i) ·ϕ(θo_i) = ϕ(o_-iθ) ·ϕ(θo_i). As the first factor is independent of agent i's observation, we get the following for two observations o_i, õ_i ∈𝒪_i: 2 d_TV(F_θ, o_-io_i, F_θ, o_-iõ_i) = ∫_𝒪_-i×Θ_iϕ(θ, o_-io_i) - ϕ(θ, o_-iõ_i)(θ, o_-i) = ∫_Θ_i∫_𝒪_-iϕ(o_-iθ) ·ϕ(θo_i) - ϕ(o_-iθ) ·ϕ(θõ_i) o_-iθ = ∫_Θ_i∫_𝒪_-iϕ(o_-iθ) o_-i ϕ(θo_i) - ϕ(θõ_i) θ = ∫_Θ_iϕ(θo_i) - ϕ(θõ_i) θ = ϕ(·o_i) - ϕ(·õ_i)_1.

Therefore, it is sufficient to determine the conditional density ϕ(θo_i) of the common valuation θ given a bidder's observation o_i. To derive ϕ(θo_i), we rely on Bayes' theorem. Observe that the conditional density of an observation o_i given a common valuation θ satisfies ϕ(o_iθ) = 1/2 θ, as o_i is uniformly distributed over [0, 2θ]. The marginal density of an observation o_i is then given by ϕ(o_i) = ∫_Θ_iϕ(o_iθ) ϕ(θ) θ = ∫_o_i/2^11/2 θ· 1 θ = -log(o_i/2)/2 over the interval (0, 2] and zero else. Given an observation o_i, Bayes' theorem yields ϕ(θo_i) = ϕ(o_iθ) ·ϕ(θ) ·1/ϕ(o_i) = - 1/θlog(o_i/2), on the interval (o_i/2, 1] and zero else.
Let o_i, õ_i ∈𝒪_i with o_i ≤õ_i. Then d_TV(F_θ, o_-io_i, F_θ, o_-iõ_i) = 1/2∫_0^1 - 1/θlog(o_i/2)1_(o_i/2, 1](θ) + 1/θlog(õ_i/2)1_(õ_i/2, 1](θ)θ = 1/2(∫_o_i/2^õ_i/2 - 1/θlog(o_i/2)θ + ∫_õ_i/2^11/θlog(õ_i/2) - 1/θlog(o_i/2)θ) = 1/2(-1/log(o_i/2)(log(õ_i/2) - log(o_i/2) ) + (1/log(õ_i/2) - 1/log(o_i/2)) log(õ_i/2)) = 1/2(- log(õ_i/2)/log(o_i/2) + 1 + 1 - log(õ_i/2)/log(o_i/2)) = 1 - log(õ_i/2)/log(o_i/2).

Due to symmetry, the total variation distance satisfies g(o_i, õ_i) := d_TV(F_θ, o_-io_i, F_θ, o_-iõ_i) = 1 - log(õ_i/2)/log(o_i/2) if o_i ≤õ_i, and 1 - log(o_i/2)/log(õ_i/2) otherwise.

For a given segment B_k ⊂ [a, b], we show that the function g is bounded by g(a, b); that is, the total variation distance is maximized over a segment B_k by considering the conditional distributions given the observations that are furthest apart. To see this, consider g's gradient for o_i < õ_i, whose components are given by ∂ g/∂ o_i = log(õ_i/2)/(o_i ·log(o_i/2)^2) and ∂ g/∂õ_i = - 1/(õ_i ·log(o_i/2)). The first component is non-positive for all õ_i > o_i; in this case, the function g(o_i, õ_i) is monotonically decreasing in its first argument. On the other hand, the second component is non-negative for all õ_i > o_i; that is, the function g is monotonically increasing in its second argument. Due to symmetry, it is precisely the other way around for õ_i < o_i. All in all, for B_k ⊂ [a, b], sup_o_i, õ_i ∈ B_k d_TV(F_θ, o_-io_i, F_θ, o_-iõ_i) = g(a, b) = g(b, a) = 1 - log(b/2)/log(a/2).

§.§.§ An equilibrium candidate

We follow the work of <cit.> to compute an equilibrium candidate strategy. More precisely, we simulate the single-item auction and let each agent train a neural network via the Reinforce algorithm <cit.>. The training procedure does not enforce the strategies to be bi-Lipschitz, so we cannot simply determine the Lipschitz constants of their inverses. For simplicity, we therefore use a piecewise linear approximation of the learned strategy, for which the Lipschitz constants are easy to determine. The equilibrium candidate is a symmetric strategy, i.e., all agents play the same strategy. Figure <ref> shows the symmetric strategy and its linear approximation.

§.§.§ Verifying the equilibrium candidate

The marginal density function for an observation o_i is given by ϕ(o_i) = -log(o_i/2)/2. The probability that an observation o_i lies within an interval (a, b) ⊂ [0, 2] is then given by P(o_i ∈ (a, b)) = ∫_a^b ϕ(o_i) o_i = -1/2∫_a^b log(o_i/2) o_i = - 1/2[o_i( log(o_i/2) - 1) ]_a^b = 1/2(a(log(a/2) - 1) - b(log(b/2) - 1) ). These probabilities provide the weights P(o_i ∈ B_k) that enter the ex ante bound of Theorem <ref>.

§ GUARANTEES ON DISPERSION AND PSEUDO-DIMENSION FOR FOUR MECHANISMS

In this section, we report dispersion and pseudo-dimension guarantees for various mechanisms. These enable us to instantiate the bounds from the previous two sections, thus allowing us to assess the degree to which the empirical utility loss estimates correspond to the true utility losses. We build upon the work of <cit.>, which offers dispersion and pseudo-dimension guarantees for a range of mechanisms. We demonstrate how to adapt their guarantees to our context, namely strategic bidding, and not only for independent but also for interdependent priors. We study some of our settings in the body and the rest in the appendix; a detailed summary of all our guarantees can be found in Table <ref>.

§.§ Dispersion guarantees under strategic bidding

To adapt the dispersion guarantees from <cit.> to our context, two significant modifications are required.
First, in situations involving interdependent priors, it is necessary to focus on the conditional prior distribution. Second, one needs to reason about the (conditional) bidding distribution F^β instead of the prior distribution F. To address the first challenge, we extend the assumption of κ-bounded distributions to the conditional prior distribution. We then apply Theorem <ref> to tackle the second challenge. As a result, a κ-bounded prior distribution transforms into a κ L_β^-1_max-bounded bidding distribution. By making these adjustments, we can apply the dispersion guarantees from <cit.> to our specific situation with small modifications required in the original proofs. We illustrate how to formulate and extend the dispersion guarantees for the first-price single-item auction. For the detailed statements on other mechanisms, see Appendix <ref>. §.§ First-price single-item auction In the first-price single-item auction, the item is awarded to the highest bidder, who then pays the amount of its bid. Each agent i has a valuation θ_i ∈ [0, 1] for the item and submits a bid b_i ∈ [0, 1]. The utility function for agent i is given by u_i, M(θ_i, b_i, b_-i) = 1_{b_i > b_-i_∞}(θ_i - b_i). We limit ourselves to present the statement for the interdependent prior case, as it incorporates both changes described above. For the statement with independent prior distributions, see Appendix <ref>. The following theorem asserts Assumption <ref> is valid for the first-price auction with interdependent prior distributions (Section <ref>). The full proof is in Appendix <ref>. Let (β_i, β_-i) ∈Σ̃. Assume that for each agent i ∈ [n] and segment B_k ∈ℬ_i, there exists κ_i, B_k>0, such that the conditional marginal distributions F_o_j{o_i ∈ B_k } for j ∈ [n] ∖{i} are κ_i, B_k bounded. Then, for w_i>0, with probability at least 1 - δ over the draw of the sets {𝒟^β_-i(ℬ_i)i ∈ [n]} for every i ∈ [n] and B_k ∈ℬ_i, the functions u_i, M(θ_i^(1), ·, β_-i(o_-i^(1)) ), …, u_i, M(θ_i^(N_B_k), ·, β_-i(o_-i^(N_B_k)) ) are piecewise 1-Lipschitz and (w_i, v_i, B_k(w_i) )-dispersed, with v_i, B_k(w_i) := (n-1)w_i N_B_kκ_i, B_k L_β^-1_max + (n-1) √(2 N_B_klog(2n(n-1) N_ℬ_max/δ)) + 4(n-1) √(N_B_klog(e N_B_k/2)). For agent i ∈ [n], apply Theorem <ref> to the marginal bidding distribution F_o_j{o_i ∈ B_k }^β_i. Then, the κ_i, B_k-bounded density function for every agent j ∈ [n]∖{i } and B_k ∈ℬ_i transforms into a κ_i, B_k L_β^-1_j-bounded bidding distribution. Next, we determine that for a sample j, the discontinuity in the utility functions is located at the point β_-i(o_-i^(j))_∞. Following this, we apply standard dispersion results (as detailed in Appendix <ref>) to restrict the number of points {β_l(o_l^(j)) }_j ∈ [N_B_k], l ∈ [n] ∖{i} within any interval of width w_i with high probability. A suitable union bound finishes the statement. We provide dispersion guarantees for three other mechanisms in Appendix <ref>. A detailed summary of all our guarantees can be found in Table <ref>. §.§ Pseudo-dimension guarantees via delineability <cit.> build their pseudo-dimension guarantees on the concept of (m, t)-delineability <cit.>. If one can show that a function class is (m, t)-delineable, one can bound its pseudo-dimension. <cit.> show ℱ̂_i, M is (2m, t)-delineable for several auction mechanisms to derive their bounds. We extend their statements to ℱ̃_i, M by showing if ℱ̂_i, M is (2m, t)-delineable, then ℱ̃_i, M is (m, t)-delineable. This way, we can readily extend their pseudo-dimension guarantees. 
The concept of (m, t)-delineability is defined as follows. Let 𝒫⊂ℝ^m and 𝒳 a vector-space. A class of functions ℱ = {f(, p): 𝒳→ℝp∈𝒫} is (m, t)-dealineable if for any v∈𝒳, there is a set ℋ of t hyperplanes such that for any connected component 𝒫^' of 𝒫∖ℋ, f(v, p) is linear over 𝒫^'. The following theorem is similar to <cit.>'s main statement to bound the pseudo-dimension of an (m, t)-delineable function class. We slightly reformulated it to our setting. If a function class ℱ is (m, t)-dealineable, then Pdim(ℱ) = O(m log(mt) ). We now give our statement that extends the pseudo-dimension results from ℱ̂_i, M to ℱ̃_i, M. Let M be a mechanism and i ∈ [n]. Suppose the function class ℱ̂_i, M is (2m, t)-delineable, then ℱ̃_i, M is (m, t)-delineable. For a b_-i∈𝒜_-i, let C^Θ_i× C^𝒜_i⊂Θ_i ×𝒜_i = [0, 1]^m × [0, 1]^m be an open subset such that u_i, M(θ_i, b_i, b_-i) = x_i(b_i, b_-i) ·θ_i - p_i(b_i, b_-i) is linear in (θ_i, b_i) over C^Θ_i× C^𝒜_i. As the allocation x_i(b_i, b_-i) ∈{0, 1}^m and the price p_i(b_i, b_-i are independent of θ_i, the allocation x_i has to be constant for all b_i ∈ C^𝒜_i, otherwise there would be a jump for a changing θ_i. Therefore, u_i, M(θ_i, b_i, b_-i) is linear in θ_i ∈Θ_i for b_i ∈ C^𝒜_i. Let (θ_i, b_-i) ∈Θ_i ×𝒜_-i. As ℱ̂_i, M is (2m, t)-delineable, for b_-i, there exists a set ℋ̂ of t hyperplanes such that for any connected component C^Θ_i_l × C^𝒜_i_l of Θ_i ×𝒜_i ∖ℋ̂ the utility u_i, M(θ_i^', b_i, b_-i) is linear for θ_i^'∈ C^Θ_i_l and b_i ∈ C^𝒜_i_l. Denote with {C^Θ_i_l × C^𝒜_i_l }_l ∈ [N_t] the set of connected components of Θ_i ×𝒜_i ∖ℋ̂, where N_t is the number of connected components. For b_-i, we need at most t hyperplanes ℋ̃ so that 𝒜_i ∖ℋ̃ = ⋃_l ∈ [N_t] C^𝒜_i_l. By the argument above, the allocation is fixed for b_i ∈ C^𝒜_𝒾_l for every l ∈ [N_t] and u_i, M(θ_i^', b_i, b_-i) is linear in θ_i^'∈Θ_i. Therefore, u_i, M(θ_i, b_i, b_-i) is linear in b_i ∈ C^𝒜_i_l. Therefore, ℱ̃_i, M is (m, t)-delineable. As ℱ̂_i, M is (2m, t)-delineable, for a fixed b_-i, there exists a set ℋ̂ of t hyperplanes such that u_i, M(θ_i^', b_i, b_-i) is linear over the connected components of Θ_i ×𝒜_i ∖ℋ̂. As u_i, M(θ_i, b_i, b_-i) = x_i(b_i, b_-i) ·θ_i - p_i(b_i, b_-i) is always linear in θ_i, there exists, for a fixed (θ_i^', b_-i), a set of hyperplanes ℋ̃ with ℋ̃≤ t and u_i, M(θ_i, b_i, b_-i) being linear in b_i over the connected components of 𝒜_i ∖ℋ̃. Due to space restrictions, we direct readers to Appendix <ref> for thorough descriptions of the mechanisms and the detailed guarantees derived by the approach described above. § CONCLUSIONS AND FUTURE RESEARCH We introduced sampling-based methods for estimating the distance of a strategy profile from an ex interim or ex ante Bayesian Nash equilibrium. Our approach significantly broadens the scope of approximate equilibrium verification compared to prior methods, which rely on narrow assumptions like truthful bidding, single-item auctions, and/or complete knowledge of the prior. Notably, we enhance the sampling method proposed by <cit.> by extending it to allow strategic bidding, and correcting their prior assertion regarding its applicability to interdependent priors in the ex ante scenario. Our key contribution is the development of an empirical estimator for the utility loss, which—intuitively speaking—measures the maximum utility an agent can gain by deviating from its current strategy. 
We have effectively bounded the error between this empirical estimate and the true utility loss by employing a mixture of learning theory tools such as dispersion and pseudo-dimension. We established sufficient conditions for strategy profiles and a closeness criterion for conditional distributions that ensure that utility gains estimated through our finite subset of the strategy space closely approximate the maximum gains. We thus derived strong guarantees for a broad class of auctions with independent or interdependent priors, including the first-price single-item and combinatorial auction, discriminatory auction, and uniform-price auction. In related research, we discussed several promising techniques to computationally determine equilibrium candidates in complex auctions. To better understand the implications of our results, a natural next step is to combine equilibrium computation with our method of verification to analyze practically relevant settings. However, it is important to note that our current bounds on the utility loss scale exponentially with the complexity inherent in general combinatorial auctions. Recognizing this limitation, a valuable avenue for future research involves exploiting the unique structural characteristics of certain combinatorial auctions, such as those involving items that are substitutes or complements. By doing so, there is potential to derive bounds that scale polynomially rather than exponentially with the number of items. This could significantly enhance the efficiency and feasibility of applying our methods to a broader range of auction formats, thereby extending their practical applicability. unsrtnat § AUXILIARY LEMMAS AND RESULTS In this section, we introduce some helpful concepts to proof our results. §.§ Bi-Lipschitz continuous functions We revisit some well-established results from existing literature. Formally, the restrictions on the rate of change for a bi-Lipschitz mapping are captured by the following bounds on the determinant of its Jacobian matrix. This is presented in the following lemma. Let g: 𝒳⊂ℝ^m →𝒴 be a (L_g, L_g^-1)-bi-Lipschitz function. Then, for all x ∈𝒳 it holds that 1/L_g^-1^m≤(𝒥 g (x))≤ L_g^m and 1/L_g^m≤(𝒥 g^-1 (x))≤ L_g^-1^m. The change of variables formula serves as a foundational tool in our analysis, permitting the expression of the density function of a probability measure under a mapping that exhibits sufficient regularity, such as bi-Lipschitz maps. We consider the following version of the well-known change of variables formula. Let 𝒳, 𝒴⊂ℝ^m be open, bounded, and connected subsets. Let μ_0, μ_1 be two probability measures on 𝒳 and 𝒴, respectively, that are absolutely continuous with respect to the Borel-measure λ. Let T: 𝒳→𝒴 be an injective, locally Lipschitz function such that μ_1 is the pushforward measure of μ_0 under T, that is, T_#μ_0 = μ_1. Then, it holds that ϕ_μ_0(x) = ϕ_μ_1(T(x)) 𝒥 T(x), where ϕ_μ_0, ϕ_μ_1 denote the density functions of μ_0 and μ_1, respectively, and 𝒥 T denotes the Jacobian matrix of T. The following well-known statement directly follows from Theorem <ref> and Lemma <ref>. Let 𝒳, 𝒴⊂ℝ^m and g: 𝒳→𝒴 be a (L_g, L_g^-1)-bi-Lipschitz function. Furthermore, let μ be a probability measure over 𝒳 with a κ-bounded density function ϕ_μ, i.e., sup_x ∈𝒳ϕ_μ(x) ≤κ. Then, the push-forward probability measure g_#μ has a κ· L_g^-1^m-bounded density function sup_y ∈𝒴ϕ_g_#μ(y) ≤κ· L_g^-1^m. We start by using the change of variables formula from Theorem <ref>. Let μ_0 := g_#μ and μ_1 := g^-1_#(g_#μ) = μ. 
Then, we get for T=g^-1 and y ∈𝒴 ϕ_μ_0(y) = ϕ_g_#μ(y) Thm<ref>=ϕ_μ(g^-1(y)) ·(𝒥 g^-1(y))≤κ·(𝒥 g^-1(y))Lemma <ref>≤κ· L_g^-1^m. §.§ Generic dispersion statements We present several generic dispersion lemmas based on the work by <cit.>, refining some of their statements to provide explicit guarantees rather than presenting results in big O notation. This refinement requires minor adjustments to their proofs. However, first we introduce the Hoeffding inequality, which is another well-known concentration bound. It provides a distribution-independent concentration bound, enabling an accurate sampling-based estimation of the expectation of a single random variable. Let X=X^(1), …, X^(N) be i.i.d. random variables over [-1, 1]. Then, with probability at least 1 - δ, 1/N∑_j=1^N X^(j) - 𝔼[X] ≤√(2/Nlog(2/δ)). We restate a well-known folklore lemma next, providing explicit bounds for uniform convergence for non-identical random variables. This is supported by well-established results regarding Rademacher complexity and the VC-dimension <cit.>. Let S = {z_1, …, z_r}⊂ℝ be a set of random variables where z_i ∼ p_i. For any δ > 0, with probability at least 1 - δ over the draw of the set S, sup_a,b ∈ℝ, a < b( | ∑_i=1^r1_z_i ∈ (a,b) - 𝔼_S'[ ∑_i=1^r1_z'_i ∈ (a,b)] | ) ≤√(2r log(2/δ)) + 4√(r log(e r/2)), where S' = {z'_1, …, z'_r} is another sample drawn from p_1, …, p_r. Let σ be an r-dimensional vector of Rademacher random variables. The empirical Rademacher complexity is given by R̂_S(G) := 𝔼_σ[sup_a,b ∈ℝ, a < b1/r∑_i=1^rσ_i 1_z_i ∈ (a, b)], where G denotes the set of indicator functions over intervals. The empirical Rademacher complexity can be bounded via the VC-dimension d = VCdim(G) by R̂_S(G) ≤√(2 log(e r/d)/r), which uses Corollary 3.1 and 3.3 by <cit.>, and that we can bound the empirical Rademacher complexity by the Rademacher complexity for distribution-independent bounds. Therefore, r R̂_S(G) ≤ 2 √(r log(e r/d)). Following the proof by <cit.>, we derive sup_a,b ∈ℝ, a < b( ∑_i=1^r1_z_i ∈ (a, b) - 𝔼_S^'[ ∑_i=1^r1_z^'_i ∈ (a, b)] ) ≤ 2 𝔼_σ, S[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)], and 𝔼_σ[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)] - 𝔼_σ, S[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)]≤√(r/2log(2/δ) ). Combining the results up until now results in sup_a,b ∈ℝ, a < b( | ∑_i=1^r 1_z_i ∈ (a,b) - 𝔼_S'[ ∑_i=1^r 1_z'_i ∈ (a,b)] | ) Equ. <ref>≤ 2 𝔼_σ, S[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)] ≤ 2 𝔼_σ[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)] - 𝔼_σ, S[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)] + 2𝔼_σ[sup_a,b ∈ℝ, a < b∑_i=1^rσ_i 1_z_i ∈ (a, b)] Equ. <ref> and <ref>≤√(2r log(2/δ)) + 4√(r log(e r/2)). To prove dispersion we will use the following probabilistic lemma, showing that samples from κ-bounded distributions do not tightly concentrate. Let S = {z_1, …, z_r}⊂ℝ be a collection of samples where each z_i is drawn from a κ-bounded distribution with density function p_i. For any δ≥ 0, the following statements hold with probability at least 1 - δ: * If the z_i are independent, then every interval of width w contains at most k = rwκ + √(2r log(2/δ)) + 4√(r log(e r/2)) samples. * If the samples can be partitioned into P buckets S_1, …, S_P such that each S_i contains independent samples and |S_i| ≤ M, then every interval of width w contains at most k = Pw κ M + P√(2M log(2P/δ)) + 4P√(M log(e M/2)) samples. We consider Part 1 first. The expected number of samples that land in an interval (a, b) of width w is at most w κ r, since the probability that z_i ∈ (a,b) is bounded by w κ. 
By Lemma <ref>, we have that with probability at least 1 - δ over the draw of the set S, sup_a,b ∈ℝ, a < b( | ∑_i=1^r 1_z_i ∈ (a,b) - 𝔼_S'[ ∑_i=1^r 1_z'_i ∈ (a,b)] | ) ≤√(2r log(2/δ)) + 4√(r log(e r/2)), where S^' = z^'_1, …, z_r^' is another sample from p_1, …, p_r. The number of elements in an interval (a, b) satisfies ∑_i=1^r 1_z_i ∈ (a, b)≤𝔼_S^'[∑_i=1^r 1_z^'_i ∈ (a, b)] + ∑_i=1^r 1_z_i ∈ (a, b) - 𝔼_S^'[∑_i=1^r 1_z^'_i ∈ (a, b)]. Combining Equations <ref> and <ref> implies that with probability of at least 1 - δ, every interval (a, b) of width w satisfies S ∩ (a, b)≤ rwκ + √(2r log(2/δ)) + 4√(r log(e r/2)). Part 2 follows by applying Part 1 to each bucket S_i and taking a union bound over the buckets. § DISPERSION AND PSEUDO-DIMENSION GUARANTEES We provide detailed statements regarding the dispersion and pseudo-dimension guarantees for several mechanisms. The descriptions of these market mechanisms are adapted from <cit.>. We start this section with stating the proof of our extension theorem to extend the pseudo-dimension guarantees from ℱ̂_i, M to ℱ̃_i, M. Subsequently, we give detailed statements regarding the dispersion and pseudo-dimension guarantees for several mechanisms. The descriptions of these market mechanisms are adapted from <cit.>. 7.4 Let M be a mechanism and i ∈ [n]. Suppose the function class ℱ̂_i, M is (2m, t)-delineable, then ℱ̃_i, M is (m, t)-delineable. The function classes ℱ̂_i, M and ℱ̃_i, M are defined as ℱ̂_i, M := {u_i, M(θ_i, b_i, ): 𝒜_-i→ [-1, 1]θ_i ∈Θ_i, b_i ∈𝒜_i}, ℱ̃_i, M := {u_i, M(, b_i, ): Θ_i ×𝒜_-i→ [-1, 1]b_i ∈𝒜_i}. For a b_-i∈𝒜_-i, let C^Θ_i× C^𝒜_i⊂Θ_i ×𝒜_i = [0, 1]^m × [0, 1]^m be an open subset such that u_i, M(θ_i, b_i, b_-i) = x_i(b_i, b_-i) ·θ_i - p_i(b_i, b_-i) is linear in (θ_i, b_i) over C^Θ_i× C^𝒜_i. As x_i(b_i, b_-i) ∈{0, 1}^m and p_i is independent of θ_i, the allocation x_i has to be constant for all b_i ∈ C^𝒜_i, otherwise there would be a jump for a changing θ_i. Therefore, u_i, M(θ_i, b_i, b_-i) is linear in θ_i ∈Θ_i for b_i ∈ C^𝒜_i. Let (θ_i, b_-i) ∈Θ_i ×𝒜_-i. As ℱ̂_i, M is (2m, t)-delineable, for b_-i, there exists a set ℋ̂ of t hyperplanes such that for any connected component C^Θ_i_l × C^𝒜_i_l of Θ_i ×𝒜_i ∖ℋ̂ the utility u_i, M(θ_i^', b_i, b_-i) is linear for θ_i^'∈ C^Θ_i_l and b_i ∈ C^𝒜_i_l. Denote with {C^Θ_i_l × C^𝒜_i_l }_l ∈ [N_t] the set of connected components of Θ_i ×𝒜_i ∖ℋ̂, where N_t is the number of connected components. For b_-i, we need at most t hyperplanes ℋ̃ so that 𝒜_i ∖ℋ̃ = ⋃_l ∈ [N_t] C^𝒜_i_l. By the argument above, the allocation is fixed for b_i ∈ C^𝒜_𝒾_l for every l ∈ [N_t] and u_i, M(θ_i^', b_i, b_-i) is linear in θ_i^'∈Θ_i. Therefore, u_i, M(θ_i, b_i, b_-i) is linear in b_i ∈ C^𝒜_i_l. Therefore, ℱ̃_i, M is (m, t)-delineable. We present dispersion and pseudo-dimension guarantees for several mechanisms next. The descriptions of these market mechanisms are adapted from <cit.>. §.§ First-price single-item auction In the first-price auction, the item is awarded to the highest bidder, who then pays the amount of its bid. Each agent i has a valuation θ_i ∈ [0, 1] for the item and submits a bid b_i ∈ [0, 1]. The utility function for agent i is given by u_i, M(θ_i, b_i, b_-i) = 1_{b_i > b_-i_∞}(θ_i - b_i), where b-i_∞ denotes the highest bid among the other bidders. In the context of independent prior distributions (Section <ref>), we show Assumption <ref> is satisfied with the following statement. Assume every agent i ∈ [n] has a κ-bounded marginal prior distribution F_θ_i. 
Let β∈Σ̃ be a strategy profile of bi-Lipschitz bidding strategies. With probability 1 - δ for all agents i ∈ [n] over the draw of the n datasets 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) }, * For any θ_i∈ [0, 1], the functions u_i, M(θ_i, , β_-i(θ_-i^(1))), …, u_i, M(θ_i, , β_-i(θ_-i^(N))) are piecewise 1-Lipschitz and (w_i, v_i(w_i ))-dispersed with v_i(w_i ) := (n-1)w_i N κ L_β^-1_max + (n-1) √(2N log(2n(n-1)/δ)) + 4(n-1) √(N log(eN/2)). * For any b_i ∈ [0, 1] and b_-i∈ [0, 1]^n-1, the function u_i, M(, b_i, b_-i) is 1-Lipschitz continuous. We start with the first part of the statement. Consider i ∈ [n] and β_-i(θ_-i^(j)) ∈𝒟^β_-i arbitrary. For any θ_i ∈Θ_i and bid b_i ∈Θ_i, we have u_i, M(θ_i, b_i, β_-i(θ_-i^(j))) = 1_{b_i > β_-i(θ_-i^(j))_∞}(θ_i - b_i). Therefore, if b_i ≤β_-i(θ_-i^(j))_∞, then u_i, M(θ_i, b_i, β_-i(θ_-i^(j))) is a constant function in b_i. On the other hand, if b_i > β_-i(θ_-i^(j))_∞, then u_i, M(θ_i, b_i, β_-i(θ_-i^(j))) is linear in b_i with slope of -1. Consequently, we have for all θ_i ∈Θ_i and β_-i(θ_-i^(j)) ∈𝒟^β_-i, the function u_i, M(θ_i, , β_-i(θ_-i^(j))) is piecewise 1-Lipschitz continuous with a discontinuity at β_-i(θ_-i^(j))_∞. We proceed with the dispersion constants (w_i, v_i(w_i )). As discussed previously, the function u_i, M(θ_i, , β_-i(θ_-i^(j))) can only have a discontinuity at a point in the set {β_l(θ_l^(j)) }_{l ∈ [n]∖{i}}. Therefore, it is sufficient to guarantee with probability 1 - δ/n, at most v_i(w_i ) points in the set 𝒞 := ⋃_j=1^N {β_l(θ_l^(j)) }_{l ∈ [n]∖{i}} fall within an interval of width w_i. The statement then follows over a union bound over the n bidders. We apply Lemma <ref> in Appendix <ref> to show this statement. For l ∈ [n]∖{i}, define 𝒞_l := {β_l(θ_l^(j)) }_{j ∈ [N]}. Then, within each 𝒞_l, the samples are independently drawn from the bidding distribution F^β_l_θ_l. Per assumption, the marginal prior F_θ_l is a κ-bounded distribution. By Theorem <ref>, the bidding distribution's density function satisfies ϕ_F^β_l_θ_l_∞≤ L_β_l^-1·κ≤ L_β^-1_max·κ. Therefore, the samples β_l(θ_l^(j)) are drawn from a κ L_β^-1_max-bounded distribution. Therefore, with probability at most 1 - δ/n every interval of width w_i contains at most v_i(w_i ) = (n-1) w_i κ L_β^-1_max N + (n-1) √(2N log(2n(n-1)/δ)) + 4(n-1) √(N log( eN/2)). The second statement can be seen as follows. For any given bids b_i, b_-i, the allocation is fixed. Therefore, u_i, M(, b_i, b_-i) is either constant if b_i ≤b_-i_∞ or linear with slope 1 if b_i > b_-i_∞. 7.1 Let (β_i, β_-i) ∈Σ̃. Assume that for each agent i ∈ [n] and segment B_k ∈ℬ_i, there exists κ_i, B_k>0, such that the conditional marginal distributions F_o_j{o_i ∈ B_k } for j ∈ [n] ∖{i} are κ_i, B_k bounded. Then, for w_i>0, with probability at least 1 - δ over the draw of the sets {𝒟^β_-i(ℬ_i)i ∈ [n]} for every i ∈ [n] and B_k ∈ℬ_i, the functions u_i, M(θ_i^(1), ·, β_-i(o_-i^(1)) ), …, u_i, M(θ_i^(N_B_k), ·, β_-i(o_-i^(N_B_k)) ) are piecewise 1-Lipschitz and (w_i, v_i, B_k(w_i) )-dispersed, with v_i, B_k(w_i) := (n-1)w_i N_B_kκ_i, B_k L_β^-1_max + (n-1) √(2 N_B_klog(2n(n-1) N_ℬ_max/δ)) + 4(n-1) √(N_B_klog(e N_B_k/2)). We start with the first part of the statement. Consider i ∈ [n] and β_-i(o_-i^(j)) ∈𝒟^β_-i(B_k) arbitrary. For any θ_i ∈Θ_i and bid b_i ∈Θ_i, we have u_i, M(θ_i, b_i, β_-i(o_-i^(j))) = 1_{b_i > β_-i(o_-i^(j))_∞}(θ_i - b_i). Therefore, if b_i ≤β_-i(o_-i^(j))_∞, then u_i, M(θ_i, b_i, β_-i(o_-i^(j))) is a constant function in b_i. 
On the other hand, if b_i > β_-i(o_-i^(j))_∞, then u_i, M(θ_i, b_i, β_-i(o_-i^(j))) is linear in b_i with slope of -1. Consequently, we have for all (θ_i^(j), β_-i(o_-i^(j)) ) ∈𝒟^β_-i, the function u_i, M(θ_i^(j), , β_-i(o_-i^(j))) is piecewise 1-Lipschitz continuous with a discontinuity at β_-i(o_-i^(j))_∞. Fix agent i ∈ [n] and B_k ∈ℬ_i. For any θ_i ∈Θ_i and b_-i∈𝒜_-i, the function u_i, M(θ_i, , b_-i) can only have a discontinuity at a point in the set {b_l }_l ∈ [n] ∖{i}. Therefore, it is sufficient to guarantee with probability at least 1 - δ/n N_ℬ_max, at most v_i, B_k(w_i) points in the set C := ⋃_j=1^N_B_k{β_l(o_l^(j)) }_l ∈ [n] ∖{i} fall within any interval of width w_i. The statement then follows over a union bound over the n-bidders and up to N_ℬ_max segments. We apply Lemma <ref>. For l ∈ [n] ∖{i} define C_l := {β_l(o_l^(j)) }_j ∈ [N_B_k]. Within each C_l, the samples are independently drawn from the marginal conditional bidding distribution F_o_l{o_i ∈ B_k}^β_l. Per assumption, F_o_l{o_i ∈ B_k} is a κ_i, B_k-bounded distribution. By Theorem <ref>, the conditional bidding distribution's density function satisfies ϕ_F_o_l{o_i ∈ B_k}^β_l_∞≤ L_β^-1_l·κ_i, B_k≤ L_β^-1_max·κ_i, B_k. The samples β_l(o_l^(j)) for 1 ≤ j ≤ N_B_k are drawn from a L_β^-1_max·κ_i, B_k-bounded distribution. Therefore, with probability at most 1- δ/n N_ℬ_max any interval of width w_i contains at most v_i, B_k(w_i) := (n-1)w_i κ_i, B_k L_β^-1_max· N_B_k + (n-1) √(2 N_B_klog(2n(n-1) N_ℬ_max/δ)) + 4(n-1) √(N_B_klog(e N_B_k/2)) samples. Pdim(ℱ̂_i, M) = 2 for all i ∈ [n]. §.§ First-price combinatorial auction There are l items for sale. An agent's valuation space is represented by Θ_i = [0, 1]^2^l, indicating its value for each possible bundle a ⊂ [l]. The valuation and bid for a bundle a are denoted by θ_i[a] and b_i[a], respectively. The allocation x_i(b_i, b_-i) ∈0, 1^2^l is determined as the solution to the winner determination problem: maximize ∑_i ∈ [n] x_i · b_i subject to x_i · x_j = 0 for all i, j ∈ [n], i ≠ j. The price for agent i is then given by p_i(b_i, b_-i) = b_i · x_i(b_i, b_-i). We start with the dispersion guarantees. Let (β_i, β_-i) ∈Σ̃. Assume that for each pair of agents i, j ∈ [n] and each pair of bundles a, a^'⊂ [l], the joint marginal prior distribution F_θ_i[a], θ_j[a^'] is κ-bounded. With probability 1 - δ for all agents i ∈ [n] over the draw of the n datasets 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) }, * For any θ_i∈ [0, 1]^2^l, the functions u_i, M(θ_i, , β_-i(θ_-i^(1))), …, u_i, M(θ_i, , β_-i(θ_-i^(N))) are piecewise 1-Lipschitz and (O(1 / (κ L^2^l+1_β^-1_max√(N)) ), Õ((n+1)^2l√(N · l)))-dispersed. * For any b_i ∈ [0, 1]^2^l and b_-i∈ [0, 1]^(n-1)2^l, the function u_i, M(, b_i, b_-i) is 1-Lipschitz continuous. For the first statement, apply Theorem <ref> to the joint marginal bidding distribution F_θ_i[a], θ_j[a^']^β_i, j. Then, the κ-bounded density function for every pair of agents i, j ∈ [n] and for all bundles a, a^'⊂ [l] transforms into a κ L_β^-1_i^2^l L_β^-1_j^2^l-bounded bidding distribution. Form this point onward, the proof for the first statement follows analogously to the proof of Theorem 3.10 of <cit.>. The second statement is a direct consequence of Theorem 3.11 from <cit.>. Let (β_i, β_-i) ∈Σ̃. Assume that for each agent i ∈ [n] and each pair of agents j, j^'∈ [n]∖{i}, each pair of bundles a, a^'⊂ [l], and segment B_k ∈ℬ_i, the joint marginal prior distribution F_o_j(a), o_j^'(a^'){o_i ∈ B_k } is κ_i, B_k-bounded. 
Then, with probability at least 1 - δ over the draw of the sets {𝒟^β_-i(B_k)B_k ∈ℬ_i, i ∈ [n]} for every i ∈ [n] and B_k ∈ℬ_i, the functions u_i, M(θ_i^(1), ·, β_-i(o_-i^(1)) ), …, u_i, M(θ_i^(N_B_k), ·, β_-i(o_-i^(N_B_k)) ) are piecewise 1-Lipschitz and (O(1 / (κ_i, B_k L^2^l+1_β^-1_max√(N_B_k)) ), Õ((n+1)^2l√(N_B_kl)))-dispersed.

For agent i ∈ [n], we apply Theorem <ref> to the joint marginal bidding distribution F_o_j[a], o_j^'[a^']{o_i ∈ B_k }^β_i, j. Then, the κ_i, B_k-bounded density function for every pair of agents j, j^'∈ [n] ∖{i }, for all bundles a, a^'⊂ [l], and B_k ∈ℬ_i, transforms into a κ_i, B_k L_β^-1_j^2^l L_β^-1_j^'^2^l-bounded bidding distribution. From this point onward, the proof follows analogously to the proof of Theorem 3.10 of <cit.>.

For any agent i ∈ [n], the pseudo-dimension of the function class ℱ̂_i, M is O(l 2^l log(n)).

For any agent i ∈ [n], the pseudo-dimension of the function class ℱ̃_i, M is O(l 2^l log(n)).

<cit.> established in the proof of Theorem <ref> that for every i ∈ [n], the function class ℱ̂_i, M is (2^l+1, (n+1)^2l)-delineable. By applying Theorem <ref>, we have that ℱ̃_i, M is (2^l, (n+1)^2l)-delineable. Subsequently, with an application of Theorem <ref>, we find that the pseudo-dimension of ℱ̃_i, M is O(l 2^l log(n)).

§.§ Generalized second-price auction

In a generalized second-price auction, m advertising slots are distributed to a set of n>m agents. For each slot s, there is a probability α_s, i of it being clicked if agent i's advertisement is in that slot; we assume that the mechanism designer knows α_s, i. Let r ∈ℕ. The mechanism designer assigns each agent a weight w_i ∈{1 / r, 2 / r, …, (r-1)/r, 1 }⊂ (0, 1], and we denote the mechanism with this weight set by M_r. An agent's valuation for a click is denoted by θ_i ∈ [0, 1], and it submits a bid b_i ∈ [0, 1]. The allocation follows the ordering of the weighted bids w_i b_i: the first slot is allocated to the highest weighted bid, the second to the second highest, and so on. Denote with π(s) ∈ [n] the agent that was allocated slot s. If slot s is clicked on, agent π(s) pays the lowest amount for which it would still have received slot s, which is given by w_π(s+1) b_π(s+1)/ w_π(s). The utility function for agent π(s) is then u_π(s), M(θ_π(s), b_π(s), b_-π(s)) = α_s, π(s)(θ_π(s) - w_π(s+1) b_π(s+1) / w_π(s)). Dispersion guarantees under strategic bidding and κ-bounded (conditional) prior distributions follow along the same lines as for the first-price auction; see Theorems 3.13 and 3.14 of <cit.> for the corresponding guarantees under truthful bidding.

For any agent i ∈ [n] and r ∈ℕ, Pdim(ℱ̂_i, M_r) is O(n log(n)).

For any agent i ∈ [n] and r ∈ℕ, the pseudo-dimension of the function class ℱ̃_i, M_r is O(n log(n)).

The proof is analogous to the one for the first-price combinatorial auction: combining the delineability guarantee for ℱ̂_i, M_r established by <cit.> with Theorem <ref> yields delineability of ℱ̃_i, M_r, and an application of Theorem <ref> gives the stated pseudo-dimension bound.

§.§ Discriminatory auction

In the discriminatory auction model, m identical units of an item are for sale, with each agent i ∈ [n] having a valuation vector θ_i ∈ [0, 1]^m, indicating its willingness to pay for each additional unit. The valuation decreases with each additional unit, implying θ_i[1] ≥θ_i[2] ≥⋯≥θ_i[m]. In total nm bids b_i[μ] for i ∈ [n] and μ∈ [m] are submitted to the auctioneer. If m_i of agent i's bids are among the m highest, it receives the units at its bid price, paying a cumulative amount based on the quantity awarded, i.e., p_i = ∑_μ = 1^m_i b_i[μ].

Let (β_i, β_-i) ∈Σ̃.
Assume that for each agent i∈ [n] and unit l ∈ [m], the marginal prior distribution F_θ_i[l] is κ-bounded. With probability 1 - δ for all agents i ∈ [n] over the draw of the n datasets 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) }, * For any θ_i∈ [0, 1]^m, the functions u_i, M(θ_i, , β_-i(θ_-i^(1))), …, u_i, M(θ_i, , β_-i(θ_-i^(N))) are piecewise 1-Lipschitz and (O(1 / (κ L_β^-1_max√(N)) ), Õ(n m^2 √(N)))-dispersed. * For any b_i ∈ [0, 1]^m and b_-i∈ [0, 1]^(n-1)m, the function u_i, M(, b_i, b_-i) is 1-Lipschitz continuous. For the first statement, we apply Theorem <ref> to the marginal bidding distribution F_θ_i[l]^β_i. Then, the κ-bounded density function for agent i ∈ [n] and for units l ∈ [l] transforms into a κ L_β^-1_i-bounded bidding distribution. Form this point onward, the proof for the first statement follows analogously to the proof of Theorem 3.16 of <cit.>. The second statement is a direct consequence of Theorem 3.17 from <cit.>. Let (β_i, β_-i) ∈Σ̃. Assume that for each agent i ∈ [n], agent j ∈ [n]∖{i}, unit l ∈ [m], and segment B_k ∈ℬ_i, the marginal prior distribution F_o_j[l]{o_i ∈ B_k } is κ_i, B_k-bounded. Then, with probability at least 1 - δ over the draw of the sets {𝒟^β_-i(B_k)B_k ∈ℬ_i, i ∈ [n]} for every i ∈ [n] and B_k ∈ℬ_i, the functions u_i, M(θ_i^(1), ·, β_-i(o_-i^(1)) ), …, u_i, M(θ_i^(N_B_k), ·, β_-i(o_-i^(N_B_k)) ) are piecewise 1-Lipschitz and (O(1 / (κ_i, B_k L_β^-1_max√(N_B_k)) ), Õ(n m^2 √(N_B_kl)))-dispersed. For agent i ∈ [n], apply Theorem <ref> to the marginal bidding distribution F_o_j[l]{o_i ∈ B_k }^β_i. Then, the κ_i, B_k-bounded density function for every agent j ∈ [n]∖{i }, every unit l ∈ [m], and B_k ∈ℬ_i transforms into a κ_i, B_k L_β^-1_j-bounded bidding distribution. Form this point onward, the proof for the first statement follows analogously to the proof of Theorem 3.16 of <cit.>. For any agent i ∈ [n], we have Pdim(ℱ̂_i, M) is O(m log(nm)). For any agent i ∈ [n], the pseudo-dimension of the function class ℱ̃_i, M is O(m log(nm)). <cit.> established in the proof of Theorem <ref> that for every i ∈ [n], the function class ℱ̂_i, M is (2m, m^2(n-1))-delineable. By applying Theorem <ref>, we have ℱ̃_i, M is (m, m^2(n-1))-delineable. Subsequently, with an application of Theorem <ref>, we find that the pseudo-dimension of ℱ̃_i, M is O(m log(nm)). §.§ Uniform-price auction In the uniform-price auction model, the allocation mechanism parallels that of the discriminatory auction (Section <ref>). The uniform-price auction sells all m units at a market-clearing price, with demand meeting supply. Following the principle that the market-clearing price is the highest bid not resulting in a sale <cit.>, we define c_-i∈ℝ^m as the array of the top m competing bids b_-i against agent i, ordered in descending value. This means c_-i[1] = b_-i_∞ is the highest of the opponents' bids, c_-i[2] is the second-highest, and so on. Agent i secures exactly one unit if and only if its highest bid surpasses the lowest winning bid and its second-highest bid does not exceed the second-lowest winning bid, i.e., b_i[1] > c_-i[m] and b_i[2] < c_-i[m - 1]. This condition extends to multiple units where agent i wins exactly m_i≥ 0 units if its m_ith bid exceeds the corresponding winning bid and the next highest bid does not. The market-clearing price is set to p = max{ b_i[m_i + 1], c_-i[m - m_i + 1] }, which is the maximum of the lowest winning bid and the highest losing bid. The final payment by agent i is m_i · p. Let (β_i, β_-i) ∈Σ̃. 
Assume that for each agent i∈ [n] and unit l ∈ [m], the marginal prior distribution F_θ_i[l] is κ-bounded. With probability 1 - δ for all agents i ∈ [n] over the draw of the n datasets 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) }, * For any θ_i∈ [0, 1]^m, the functions u_i, M(θ_i, , β_-i(θ_-i^(1))), …, u_i, M(θ_i, , β_-i(θ_-i^(N))) are piecewise 1-Lipschitz and (O(1 / (κ L_β^-1_max√(N)) ), Õ(n m^2 √(N)))-dispersed. * For any b_i ∈ [0, 1]^m and b_-i∈ [0, 1]^(n-1)m, the function u_i, M(, b_i, b_-i) is 1-Lipschitz continuous. For the first statement, apply Theorem <ref> to the marginal bidding distribution F_θ_i[l]^β_i. Then, the κ-bounded density function for agent i ∈ [n] and for units l ∈ [l] transforms into a κ L_β^-1_i-bounded bidding distribution. Form this point onward, the proof for the first statement follows analogously to the remaining proof of Theorem D.5 of <cit.>. The second statement is a direct consequence of Theorem D.6 from <cit.>. Let (β_i, β_-i) ∈Σ̃. Assume that for each agent i ∈ [n], agent j ∈ [n]∖{i}, unit l ∈ [m], and segment B_k ∈ℬ_i, the marginal prior distribution F_o_j[l]{o_i ∈ B_k } is κ_i, B_k-bounded. Then, with probability at least 1 - δ over the draw of the sets {𝒟^β_-i(B_k)B_k ∈ℬ_i, i ∈ [n]} for every i ∈ [n] and B_k ∈ℬ_i, the functions u_i, M(θ_i^(1), ·, β_-i(o_-i^(1)) ), …, u_i, M(θ_i^(N_B_k), ·, β_-i(o_-i^(N_B_k)) ) are piecewise 1-Lipschitz and (O(1 / (κ_i, B_k L_β^-1_max√(N_B_k)) ), Õ(n m^2 √(N_B_kl)))-dispersed. For agent i ∈ [n], apply Theorem <ref> to the marginal bidding distribution F_o_j[l]{o_i ∈ B_k }^β_i. Then, the κ_i, B_k-bounded density function for every agent j ∈ [n]∖{i }, every unit l ∈ [m], and B_k ∈ℬ_i transforms into a κ_i, B_k L_β^-1_j-bounded bidding distribution. Form this point onward, the proof for the first statement follows analogously to the proof of Theorem D.5 of <cit.>. For any agent i ∈ [n], Pdim(ℱ̂_i, M) is O(m log(nm)). For any agent i ∈ [n], the pseudo-dimension of the function class ℱ̃_i, M is O(m log(nm)). <cit.> established in the proof of Theorem <ref> that for every i ∈ [n], the function class ℱ̂_i, M is (2m, m^2(n-1))-delineable. By applying Theorem <ref>, we have ℱ̃_i, M is (m, m^2(n-1))-delineable. Subsequently, with an application of Theorem <ref>, we find that the pseudo-dimension of ℱ̃_i, M is O(m log(nm)). §.§ Second-price auction with spiteful bidders We study a scenario with spiteful agents <cit.> next. Each bidder's utility increases when his surplus increases, but also decreases when the other bidders' surpluses increase. Formally, given a spite parameter α_i ∈ [0, 1], agent i's utility under the second-price mechanism M is u_i,M (θ_i, b_i, b_-i) = α_i 1_{ b_i - b_-i_∞} (θ_i - b_-i_∞) - (1 - α_i) ∑_i' ≠ i1_{b_i' > b_-i_∞} (θ_i' - b_-i_∞). The lower α_i is, the more spiteful bidder i. § PROOFS TO LIMIT CONCENTRATION OF BIDDING DISTRIBUTIONS SECTION <REF> 4.4 Denote with ϕ_F_o_i and ϕ_F_o_i, o_j the density functions for the marginal prior distributions F_o_i and F_o_i, o_j for any i, j ∈ [n]. Further assume that ϕ_F_o_i and ϕ_F_o_i, o_j are κ-bounded density functions for some κ > 0. Further, let (β_i, β_-i) ∈Σ̃ be a strategy profile of bi-Lipschitz continuous bidding strategies. Then, the probability density functions of the bidding distributions F^β_i_o_i and F^β_i, β_j_o_i, o_j satisfy sup_b_i∈β_i(𝒪_i)ϕ_F^β_i_o_i(b_i) ≤κ· L_β_i^-1^m sup_(b_i, b_j) ∈β_i(𝒪_i) ×β_j(𝒪_j)ϕ_F^β_i, β_j_o_i, o_j(b_i, b_j) ≤κ· L_β_i^-1^m · L_β_j^-1^m where m denotes the dimension of 𝒪_i. 
By the definition of bi-Lipschitz continuity, the function β_i : 𝒪_i →β_i (𝒪_i) is invertible for any i ∈ [n]. We perform a change of variables (Theorem <ref>) with μ_0 := (β_i)_#F_o_i and μ_1 := (β_i^-1)_#( (β_i)_#F_o_i) = F_o_i. Then, we have for b_i ∈β_i(𝒪_i ) ϕ_F^β_i_o_i(b_i) Theorem <ref>=ϕ_F_o_i(β_i^-1(b_i) ) ·(𝒥β_i^-1(b_i))Lemma <ref>≤κ· L_β_i^-1^m, where m denotes the dimension of 𝒪_i. We used a well-known bound on a bi-Lipschitz mapping's Jacobian determinant in the last step. For i, j ∈ [n] with i ≠ j, the functions β_i and β_j are independent from one another. That is, β_i: 𝒪_i →𝒜_i and β_j: 𝒪_j →𝒜_j. The same holds for their inverses, so that the Jacobian matrix of β_i, j^-1 = (β_i^-1, β_j^-1) is a block matrix. That is, for b_i ∈β_i (𝒪_i ) and b_j ∈β_j (𝒪_j ) (𝒥β_i, j^-1) (b_i, b_j ) = ([ 𝒥β_i^-1(b_i) 0; 0 𝒥β_j^-1(b_j) ]) A well-known fact about the determinant of a block-matrix is that it equals the product of the blocks' determinants. By another application of the change of variables formula, we have ϕ_F^β_i, β_j_o_i, o_j(b_i, b_j) Theorem <ref>=ϕ_F_o_i, o_j( (β_i, j^-1)^-1(b_i, b_j ) ) ·( 𝒥(β_i, j^-1)^-1(b_i, b_j ) ) ≤κ(𝒥β_i^-1(b_i)) ·(𝒥β_j^-1(b_j))Lemma <ref>≤κ· L_β_i^-1^m · L_β_j^-1^m. § PROOFS FOR INDEPENDENT PRIOR DISTRIBUTIONS SECTION <REF> 5.1 Let δ > 0, M be a mechanism, and β∈Σ a strategy profile. Then, it holds with probability 1-δ for all agents i ∈ [n] over the draw of datasets 𝒟^β_-1, …, 𝒟^β_-n of valuation-bid queries, sup_θ_i ∈Θ_iℓ̂_i(θ_i, β_i(θ_i), β_-i) = sup_θ_i, θ̂_i ∈Θ_iû_i, M(θ_i, θ̂_i, β_-i) - û_i, M(θ_i, β_i(θ_i), β_-i) ≤sup_θ_i, θ̂_i ∈Θ_i1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) + ε̂_i, Pdim(N, δ), ε̂_i, Pdim(N, δ) := 4√(2d_i/Nlog( e N/d_i)) + 2√(2/Nlog(2n/δ)), d_i=Pdim(ℱ̂_i, M). Fix an arbitrary agent i ∈ [n]. Then we have with 𝒟^β_-i := {β_-i(θ_-i^(1)), …, β_-i(θ_-i^(N)) } that β_-i(θ_-i^(j)) ∼ F^β_-i_θ_-i is i.i.d. for 1 ≤ j ≤ N. Therefore, by applying Theorem <ref>, we have with probability at least 1 - δ/2 for all u_i, M(θ_i, θ̂_i, ) ∈ℱ̂_i, M that 1/N∑_j=1^Nu_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - 𝔼_β_-i(θ_-i)∼ F^β_-i[u_i, M(θ_i, θ̂_i, β_-i(θ_-i))] ≤ 2√(2d_i/Nlog( e N/d_i)) + √(2/Nlog(2/δ)) = 1/2ε̂_i, Pdim(N, nδ). As this holds for all θ_i, θ̂_i ∈Θ_i, we have with probability 1 - δ sup_θ_i, θ̂_i ∈Θ_i𝔼_β_-i(θ_-i)∼ F^β_-i[u_i, M(θ_i, θ̂_i, β_-i(θ_-i))] - 𝔼_β_-i(θ_-i)∼ F^β_-i[u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i))] ≤sup_θ_i, θ̂_i ∈Θ_i𝔼_β_-i(θ_-i)∼ F^β_-i[u_i, M(θ_i, θ̂_i, β_-i(θ_-i))] - 1/N∑_j=1^Nu_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) + 1/N∑_j=1^Nu_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) + 1/N∑_j=1^Nu_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) - 𝔼_β_-i(θ_-i)∼ F^β_-i[u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i))] ≤sup_θ_i, θ̂_i ∈Θ_i1/N∑_j=1^Nu_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) + ε̂_i, Pdim(N, nδ). Denote the event that the previous inequalities hold for agent i by A_i(δ). Then, we have shown P(A_i(δ)) ≥ 1 - δ so far. It remains to show the bounds hold for all agents. We apply a union bound to the events A_i(δ/n), which gives P(⋂_i=1^n A_i(δ/n) ) = P( (⋃_i=1^n A_i(δ/n)^∁)^∁) = 1 - P( ⋃_i=1^n A_i(δ/n)^∁) ≥ 1 - ∑_i=1^n P(A_i(δ/n)^∁) ≥ 1 - n δ/n = 1 - δ. 5.3 Let δ > 0 and M be a mechanism. Furthermore, let β∈Σ̃ be a strategy profile. 
Given that Assumption <ref> holds for w_i >0, v_i(w_i), and v_i(L_β_iw_i), we have with probability at least 1 - 3δ over the draw of the datasets 𝒟^β_-1, …, 𝒟^β_-n for every agent i ∈ [n] sup_θ_i ∈Θ_iℓ̂_i(θ_i, β_i(θ_i), β_-i) = sup_θ_i, θ̂_i ∈Θ_iû_i, M(θ_i, θ̂_i, β_-i) - û_i, M(θ_i, β_i(θ_i), β_-i) ≤sup_θ_i, θ̂_i ∈𝒢_w_i1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j)))+ε̂_i, ε̂_i:=4√(2d_i/Nlog( e N/d_i)) + 2√(2/Nlog(2n/δ)) + 3ε̂_i, disp(w_i) + ε̂_i, disp(L_β_iw_i), ε̂_i, disp(x) := N - v_i(x )/N L_i x + 2 v_i(x )/N, and d_i=Pdim(ℱ̂_i, M). For a (w, v)-dispersed set of N functions, with probability 1 - δ, at most v jump discontinuities fall within a ball of radius w. Therefore, within any ball of radius w, at least N-v functions are Lipschitz continuous, and at most v are not. Let w_i > 0 and v_i(w_i ) be the function from the dispersion guarantees from Assumption <ref>. Define ε̂_i, disp(w_i) := N - v_i(w_i )/N L_i w_i + 2 v_i(w_i )/N. Then, with probability at least 1 - δ, the following conditions hold: * For all i ∈ [n], valuations θ_i ∈Θ_i, and reported valuations θ̂_i, θ̂_i^'∈Θ_i with θ̂_i - θ̂_i^'_1 ≤ w_i, we have 1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, θ̂_i^', β_-i(θ_-i^(j))) ≤ε̂_i, disp(w_i) * For all i ∈ [n], reported valuations θ̂_i ∈Θ_i, and valuations θ_i, θ_i^'∈Θ_i with θ_i - θ_i^'_1 ≤ w_i, we have 1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i^', θ̂_i, β_-i(θ_-i^(j))) ≤ε̂_i, disp(w_i). Let θ_i, θ̂_i ∈Θ_i. By the definition of 𝒢_w_i, there exist points p, p̂∈𝒢_w_i such that θ_i - p_1 ≤ w_i and θ̂_i - p̂_1 ≤ w_i. Equation <ref> results in 1/N∑_j=1^N u_i, M(θ_i, p̂, β_-i(θ_-i^(j))) - u_i, M(p, p̂, β_-i(θ_-i^(j))) ≤ε̂_i, disp(w_i), and 1/N∑_j=1^N u_i, M(p, β_i(p), β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(p), β_-i(θ_-i^(j))) ≤ε̂_i, disp(w_i). Equation <ref> gives 1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, p̂_i, β_-i(θ_-i^(j))) ≤ε̂_i, disp(w_i). Due to the Lipschitz continuity of β_i, we have β_i(θ_i) - β_i(p)_1 ≤ L_β_i w_i. An additional application of Assumption <ref> and Equation <ref> gives with probability at least 1 - δ 1/N∑_j=1^N u_i, M(θ_i, β_i(p), β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i, β_-i(θ_-i^(j))) ≤ε̂_i, disp(L_β_i w_i). Therefore, combining these statements, we have with probability at least 1 - 2δ 1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) ≤1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, p̂, β_-i(θ_-i^(j))) + 1/N∑_j=1^N u_i, M(θ_i, p̂, β_-i(θ_-i^(j))) - u_i, M(p, p̂, β_-i(θ_-i^(j))) + 1/N∑_j=1^N u_i, M(p, p̂, β_-i(θ_-i^(j))) - u_i, M(p, β_i(p), β_-i(θ_-i^(j))) + 1/N∑_j=1^N u_i, M(p, β_i(p), β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(p), β_-i(θ_-i^(j))) + 1/N∑_j=1^N u_i, M(θ_i, β_i(p), β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) ≤1/N∑_j=1^N u_i, M(p, p̂, β_-i(θ_-i^(j))) - u_i, M(p, β_i(p), β_-i(θ_-i^(j))) + 3ε̂_i, disp(w_i) + ε̂_i, disp(L_β_i w_i). The statement is complete with an additional application of Theorem <ref>. That is, in total, three different events with probability 1 - δ need to hold. The first comes from the pseudo-dimension concentration bound of Theorem <ref>. The two other events are the dispersion guarantees from Assumption <ref> for balls of width w_i and L_β_i· w_i. 
The combination of these statements gives with probability at least 1-3δ sup_θ_i ∈Θ_iℓ̂_i(θ_i, β_i(θ_i), β_-i) = sup_θ_i, θ_i^'∈Θ_iû_i, M(θ_i, θ_i^', β_-i) - û_i, M(θ_i, β_i(θ_i), β_-i) ≤sup_θ_i, θ̂_i ∈𝒢_w_i1/N∑_j=1^N u_i, M(θ_i, θ̂_i, β_-i(θ_-i^(j))) - u_i, M(θ_i, β_i(θ_i), β_-i(θ_-i^(j))) + 4√(2d_i/Nlog( e N/d_i)) + 2√(2/Nlog(2n/δ)) + 3ε̂_i, disp(w_i) + ε̂_i, disp(L_β_iw_i). § PROOFS FOR INTERDEPENDENT PRIOR DISTRIBUTIONS SECTION <REF> This section provides the detailed proofs for the error bounds in approximating the ex ante utility loss for interdependent prior distributions. §.§ Proof of Theorem <ref> The partition ℬ_i determines which segments of the observation space 𝒪_i can be considered collectively. For each element B within ℬ_i, we identify a constant best-response. We show that the error made by this procedure can be bounded in terms of the total variation distance between prior distributions conditioned on observations from B. We show this for a single segment B ∈ℬ_i first. Let B ⊂𝒪_i and β_-i∈Σ_-i be an opponent strategy profile for agent i. Then one can bound the largest difference of the ex interim best-response utility and the utility of a constant best-response over B by sup_o_i ∈ Bsup_b_i ∈𝒜_i𝔼_o_-i, θ_i|o_i[u_i, M(θ_i, b_i, β_-i(o_-i)) ] - sup_b_i ∈𝒜_i𝔼_õ_i, o_-i, θ_i{o_i ∈ B}[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] ≤ 2 u_i, M_∞·sup_ô_i, ô_i^'∈ B d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^'). Furthermore, the difference between a best-response over the set Σ_iB of bidding functions restricted to B and a constant best-response is bounded by sup_β_i^'∈Σ_iB𝔼_o_i, o_-i, θ_i{o_i ∈ B}[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] - sup_b_i ∈𝒜_i𝔼_o_i, o_-i, θ_i{o_i ∈ B}[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] ≤ 2 u_i, M_∞·sup_ô_i, ô_i^'∈ B d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^'). Let ϵ > 0 and o_i ∈ B. Choose b_i^* ∈𝒜_i such that it is within ϵ of the best-response utility, that is, sup_b_i ∈𝒜_i𝔼_o_-i, θ_i | o_i[u_i, M(θ, b_i, β_-i(o_-i)) ] - ϵ≤𝔼_o_-i, θ_i | o_i[u_i, M(θ, b_i^*, β_-i(o_-i)) ]. Then, sup_b_i ∈𝒜_i𝔼_o_-i, θ_i|o_i[u_i, M(θ_i, b_i, β_-i(o_-i)) ] - sup_b_i^'∈𝒜_i𝔼_õ_i, o_-i, θ_i{õ_i ∈ B}[u_i, M(θ_i, b_i^', β_-i(o_-i) ) ] Equ. <ref>≤𝔼_o_-i, θ_i|o_i[u_i, M(θ_i, b_i^*, β_-i(o_-i)) ] - sup_b_i^'∈𝒜_i𝔼_õ_i, o_-i, θ_i{õ_i ∈ B}[u_i, M(θ_i, b_i^', β_-i(o_-i) ) ] + ϵ ≤𝔼_o_-i, θ_i|o_i[u_i, M(θ_i, b_i^*, β_-i(o_-i)) ] - 𝔼_õ_i, o_-i, θ_i{õ_i ∈ B}[u_i, M(θ_i, b_i^*, β_-i(o_-i) ) ] + ϵ = 𝔼_õ_i{õ_i ∈ B}[𝔼_o_-i, θ_i|o_i[u_i, M(θ_i, b_i^*, β_-i(o_-i)) ] - 𝔼_o_-i, θ_i|õ_i[u_i, M(θ_i, b_i^*, β_-i(o_-i)) ] ] + ϵ Theorem <ref>≤𝔼_õ_i{õ_i ∈ B}[2 u_i, M_∞· d_TV(F_θ_i, o_-io_i, F_θ_i, o_-iõ_i) ] + ϵ ≤ 2 u_i, M_∞𝔼_õ_i{õ_i ∈ B}[ sup_ô_i, ô_i^'∈ B d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^') ] + ϵ = 2 u_i, M_∞·sup_ô_i, ô_i^'∈ B d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^') + ϵ. As ϵ and o_i were chosen arbitrarily, the first statement follows. For the second statement, observe that the best-response ex ante utility over B is bounded by the largest ex interim best-response utility over B. More specifically, sup_β_i^'∈Σ_iB𝔼_o_i, o_-i, θ_i{o_i ∈ B}[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] = sup_β_i^'∈Σ_iB𝔼_o_i{o_i ∈ B}[ 𝔼_o_-i, θ_i | o_i[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] ] ≤𝔼_o_i{o_i ∈ B}[ sup_β_i^'∈Σ_iB𝔼_o_-i, θ_i | o_i[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] ] = 𝔼_o_i{o_i ∈ B}[ sup_b_i ∈𝒜_i𝔼_o_-i, θ_i | o_i[u_i, M(θ_i, b_i, β_-i(o_-i)) ] ] ≤sup_o_i ∈ Bsup_b_i ∈𝒜_i𝔼_o_-i, θ_i | o_i[u_i, M(θ_i, b_i, β_-i(o_-i)) ]. 
Therefore, using the first statement, we get sup_β_i^'∈Σ_iB𝔼_o_i, o_-i, θ_i{o_i ∈ B}[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] - sup_b_i^'∈𝒜_i𝔼_õ_i, o_-i, θ_i{õ_i ∈ B}[u_i, M(θ_i, b_i^', β_-i(o_-i) ) ] Equ. <ref>≤sup_o_i ∈ Bsup_b_i ∈𝒜_i𝔼_o_-i, θ_i | o_i[u_i, M(θ_i, b_i, β_-i(o_-i)) ] - sup_b_i^'∈𝒜_i𝔼_õ_i, o_-i, θ_i{õ_i ∈ B}[u_i, M(θ_i, b_i^', β_-i(o_-i) ) ] Equ. <ref>≤ 2 u_i, M_∞·sup_ô_i, ô_i^'∈ B d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^'). The previous lemma indicates that the error incurred by employing a constant best-response, as opposed to a functional one over a set B, can be managed provided that the conditional distribution does not change too much. This simplifies the utility loss estimation process considerably, as the error introduced by constant best-responses can be bounded by the maximum total variation distance of the conditional distributions for observations from B. The following theorem expands upon this result, applying it across the entire partition ℬ_i of 𝒪_i. 6.3 Let ℬ_i = {B_1, …, B_N_ℬ_i} be a partition of 𝒪_i. The difference between a best-response utility over function space to best-responses that are constant for every B_k satisfies sup_β_i^'∈Σ_iũ_i, M(β_i^', β_-i) - sup_b ∈𝒜_i^N_ℬ_iũ_i, M(∑_k=1^N_ℬ_i b_k 1_B_k, β_-i) ≤ 2 ∑_k=1^N_ℬ_i P(o_i ∈ B_k) τ_i, B_k, with τ_i, B_k := sup_ô_i, ô_i^'∈ B_k d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^'). If there exists a constant L_B_k > 0 such that d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^') ≤ L_B_ko_i - o_i^' for o_i, o_i^'∈ B_k, then τ_i, B_k≤ L_B_kdiam(B_k), where diam(B_k) denotes B_k's diameter. Let o_i ∈𝒪_i. Then, there exists a unique B_k ∈ℬ_i such that o_i ∈ B_k. The error between the ex interim best-response utility and the constant best-response utility over B_k can be bounded by sup_b_i ∈𝒜_i𝔼_o_-i, θ_i|o_i[u_i, M(θ_i, b_i, β_-i(o_-i)) ] - sup_b_i ∈𝒜_i𝔼_õ_i, o_-i, θ_i{õ_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] Lemma <ref>≤ 2 u_i, M_∞·sup_ô_i, ô_i^'∈ B_k d_TV(F_θ_i, o_-iô_i, F_θ_i, o_-iô_i^') = 2 u_i, M_∞τ_i, B_k. We rewrite the best-response ex ante utilities using the law of total expectation. For the first term follows sup_β_i^'∈Σ_iũ_i, M(β_i^', β_-i) = sup_β_i^'∈Σ_i𝔼_o_i, o_-i, θ_i[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] = sup_β_i^'∈Σ_i∑_k=1^N_ℬ_i P (o_i ∈ B_k ) 𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] = ∑_k=1^N_ℬ_i P (o_i ∈ B_k ) sup_β_i^'∈Σ_iB_k𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ], where Σ_iB_k denotes the restriction of the bidding strategies to B_k. We have for the second term sup_b ∈𝒜_i^N_ℬ_i𝔼_o_i, o_-i, θ_i[u_i, M(θ_i, ∑_k=1^N_ℬ_i b_k 1_B_k(o_i), β_-i(o_-i) ) ] = sup_b ∈𝒜_i^N_ℬ_i∑_k=1^N_ℬ_i P (o_i ∈ B_k ) 𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, b_k, β_-i(o_-i) ) ] = ∑_k=1^N_ℬ_i P (o_i ∈ B_k ) sup_b_k ∈𝒜_i𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, b_k, β_-i(o_-i) ) ]. Combing these two transformations gives sup_β_i^'∈Σ_i𝔼_o_i, o_-i, θ_i[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] - sup_b ∈𝒜_i^N_ℬ_i𝔼_o_i, o_-i, θ_i[u_i, M(θ_i, ∑_k=1^N_ℬ_i b_k 1_B_k(o_i), β_-i(o_-i) ) ] ≤∑_k=1^N_ℬ_i P (o_i ∈ B_k ) (sup_β_i^'∈Σ_iB_k𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] . - . sup_b_k ∈𝒜_i𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, b_k, β_-i(o_-i) ) ] ) Lemma <ref>≤ 2 u_i, M_∞∑_k=1^N_ℬ_i P (o_i ∈ B_k ) τ_i, B_k. For arbitrary o_i, o_i^'∈ B_k, we have o_i - o_i^'≤diam(B_k). Therefore, if there exists a constant L_B_k > 0 such that d_TV(g_B_k(o_i), g_B_k(o_i^') ) ≤ L_B_ko_i - o_i^' for o_i, o_i^'∈ B_k, then τ_i, B_k≤ L_B_kdiam(B_k). 
§.§ Proof of Theorems <ref> and <ref> 6.4 Let β∈Σ be a strategy profile. With probability 1 - δ over the draw of the dataset 𝒟^β, we have for every agent i ∈ [n] ũ_i, M(β_i, β_-i) - 1/N∑_j=1^N u_i, M(θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) ≤√(2/Nlog(2 n/δ)). Fix an agent i ∈ [n]. u_i, M(θ_i, β_i(o_i), β_-i(o_-i)) with (θ_i, β_i(o_i), β_-i(o_-i) ) ∼ F^β is a random variable with a distribution over [-1, 1]. The values u_i, M(θ_i^(1), β_i(o_i^(1)), β_-i(o_-i^(1))), …, u_i, M(θ_i^(N), β_i(o_i^(N)), β_-i(o_-i^(N))) are i.i.d. samples from this distribution with (θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) coming from the dataset 𝒟^β for 1 ≤ j ≤ N. By applying Hoeffding's inequality (Theorem <ref>), we get with probability at least 1 - δ/n ũ_i, M(β_i, β_-i) - 1/N∑_j=1^N u_i, M(θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) ≤√(2/Nlog(2 n/δ)). The statement follows by applying a union bound over the set of agents [n]. 6.5 With probability 1 - δ over the draw of the n sets 𝒟^β(ℬ_1), …, 𝒟^β(ℬ_n), for partitions ℬ_i = {B_1, …, B_N_ℬ_i} of 𝒪_i for every agent i ∈ [n], we have sup_b_i ∈𝒜_i𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] - sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) ≤ε̃_i, Pdim(N_B_k), ε̃_i, Pdim(N_B_k) := 2√(2 d_i/N_B_klog(e N_B_k/d_i) ) + √(2/N_B_klog(n N_ℬ_max/δ)), d_i := Pdim(ℱ̃_i, M). Fix an agent i ∈ [n] and a segment B_k ∈ℬ_i. Note that we can write the ex ante utility given the event {o_i ∈ B_k } and bid b_i ∈𝒜_i as 𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] = 𝔼_o_i, β_-i(o_-i), θ_i ∼F^β_-i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ]. By Theorem <ref>, we get that with probability at least 1 - δ/nN_ℬ_max over the draw of 𝒟^β_-i(B_k) for all b_i ∈𝒜_i, 𝔼_o_i, β_-i(o_-i), θ_i ∼F^β_-i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] - 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) ≤ 2 √(2 d_i/N_B_klog(e N_B_k/d_i) ) + √(2/N_B_klog(n N_ℬ_max/δ)). Therefore, we have with probability at least 1 - δ/n N_ℬ_max, 𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] - 𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, β_i(o_i), β_-i(o_-i) ) ] =𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, b_i, β_-i(o_-i) ) ] - 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) + 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) + 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) - 𝔼_o_i, o_-i, θ_i{o_i ∈ B_k }[u_i, M(θ_i, β_i(o_i), β_-i(o_-i) ) ] ≤1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - u_i, M(θ_i^(j), β_i(o_i^(j)), β_-i(o_-i^(j))) + ε̃_i, Pdim(N_B_k). Since this holds for all b_i ∈𝒜_i, we get the statement for the choice of i and B_k. Taking a union bound over all i ∈ [n] and segments B_k ∈ℬ_i yields the final statement. §.§ Proof of Theorem <ref> 6.7 Let δ > 0, β∈Σ̃ be a strategy profile, and M be a mechanism. Suppose that for each agent i ∈ [n] and segment B_k ∈ℬ_i, Assumption <ref> holds for w_i >0 and v_i(w_i). Then, with probability 1- δ over the draw of the sets {𝒟^β(ℬ_i)i ∈ [n]}, agents i ∈ [n], and segments B_k ∈ℬ_i, sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - max_b_i ∈𝒢_w1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) ≤N_B_k - v_i, B_k(w_i)/N_B_k L_i w_i + 2v_i, B_k(w_i)/N_B_k =: ε̃_i, disp(N_B_k). Fix agent i ∈ [n] and B_k ∈ℬ_i. Let b_i, b_i^'∈𝒜_i = [0, 1]^m with b_i - b_i^'_1 ≤ w_i. 
When considering the following difference for a specific j ∈ [N_B_k] u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - u_i, M(θ_i^(j), b_i^', β_-i(o_-i^(j))), then, either u_i, M(θ_i^(j), ·, β_-i(o_-i^(j))) is L_i-Lipschitz continuous over [b_i, b_i^'] or there is a jump discontinuity. In the first case, we can bound the difference in Equation <ref> by L_ib_i - b_i^'_1, and in the second, we can bound it by 2u_i, M_∞. While the second bound is trivial, dispersion guarantees that with high probability this case can happen at most v_i, B_k(w_i)/N_B_k times. Therefore, by the definition of dispersion, we know that with probability 1-δ over the draw of the sets {𝒟^β(B_k)B_k ∈ℬ_i, i ∈ [n]}, for mechanism M, agents i ∈ [n], and segments B_k ∈ℬ_i, we have for all b_i, b_i^'∈𝒜_i = [0, 1]^m with b_i - b_i^'_1 ≤ w_i that 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - u_i, M(θ_i^(j), b_i^', β_-i(o_-i^(j))) ≤N_B_k - v_i, B_k(w_i)/N_B_k L_i w_i + v_i, B_k(w_i)/N_B_k2u_i, M_∞. Let b_i ∈𝒜_i be arbitrary. By the definition of 𝒢_w, there must be a point p ∈𝒢_w such that b_i - p_1 ≤ w_i. Therefore, with probability 1 - δ 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N_B_k∑_j=1^N_B_k u_i, M(θ_i^(j), p, β_-i(o_-i^(j))) ≤N_B_k - v_i, B_k(w_i)/N_B_k L_i w_i + v_i, B_k(w_i)/N_B_k2u_i, M_∞. Let for every agent i ∈ [n], ℬ_i = {B_1, …, B_N_ℬ_i} be a partition of 𝒪_i, a_1, …, a_N_ℬ_i∈ [0, 2], and δ∈ (0, 1). Then, with probability at least 1 - δ over the draw of the dataset 𝒟, we have for all agents i ∈ [n] ∑_k=1^N_ℬ_i(P(o_i ∈ B_k) - N_B_k/N) a_k≤√(2/Nlog( 2n/δ)). Fix an agent i ∈ [n]. Define the random variable Y_i:= ∑_k=1^N_ℬ_i1_B_k(o_i) (a_k - 1), where o_i ∼ F_o_i. As ℬ_i is a partition and a_k ∈ [0, 2], we know Y_i ∈ [0, 2]. We have with probability 1 - δ/n ∑_k=1^N_ℬ_i(P(o_i ∈ B_k) - N_B_k/N) a_k = ∑_k=1^N_ℬ_i𝔼_o_i[1_B_k(o_i) ] a_k - ∑_k=1^N_ℬ_iN_B_k/N a_k = 𝔼_o_i[∑_k=1^N_ℬ_i1_B_k(o_i) a_k ] - 1/N∑_j=1^N ∑_k=1^N_ℬ_i1_B_k(o_i^(j)) a_k Theorem <ref>≤√(2/Nlog( 2n/δ)), where we used the Hoeffding inequality on the for i.i.d. draws of the random variable Y_i. A union bound over the agents [n] completes the proof. 6.8 Let δ > 0 and β∈Σ̃ be a strategy profile. Suppose that for each agent i ∈ [n] and segment B_k ∈ℬ_i, Assumption <ref> holds. Then, with probability 1- 4δ over the draw of the sets {𝒟^β(ℬ_i)i ∈ [n]}, agents i ∈ [n], and segments B_k ∈ℬ_i, ℓ̃_i(β_i, β_-i) = sup_β^'_i ∈Σ_iũ_i, M(β^'_i, β_-i) - ũ_i, M(β_i, β_-i) ≤∑_k=1^N_ℬ_iN_B_k/Nmax_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) + 2 √(2/Nlog(2n/δ)) + ∑_k=1^N_ℬ_iN_B_k/Nmin{1, (τ_i, B_k + ε̃_i, Pdim(N_B_k) + ε̃_i, disp(N_B_k) )}, where τ_i, B_k, ε̃_i, Pdim(N_B_k), and ε̃_i, disp(N_B_k) are the constants defined in Theorems <ref>, <ref>, and Lemma <ref>. Fix i ∈ [n]. The ex ante utility loss consists of the best-response utility and the ex ante utility of the strategy profile β. We start by approximating the ex ante utility of β. By Theorem <ref>, with probability 1 - δ over the draw of the dataset 𝒟^β = {(θ^(l), o^(l), β(o^(l)) ): 1≤ l ≤ N } ũ_i, M(β_i, β_-i) - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l)))≤√(2/Nlog(2n/δ) ). Let's consider the estimation error to the best-response utility next. We can rewrite the best-response ex ante utility to sup_β_i^'∈Σ_i𝔼_o_i, o_-i, θ_i[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] = ∑_k=1^N_ℬ_i P (o_i ∈ B_k ) sup_β_i^'∈Σ_iB_k𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ]. 
The inner terms, i.e., the difference of the best-response ex ante utility to our estimator over each B_k ∈ℬ_i, can be bounded by sup_β_i^'∈Σ_iB_k𝔼_o_i, o_-i, θ_i | {o_i ∈ B_k }[u_i, M(θ_i, β_i^'(o_i), β_-i(o_-i) ) ] - max_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) ≤ 1. This can be seen by noting that the estimator on the right is bounded below by zero because an agent can guarantee not to be worse off than not participating by bidding the minimal amount. Therefore, we can bound the estimation error for each B_k by one in the worst-case. A more meaningful upper bound to the estimation error for the best-response utility can be given by considering four approximation steps. The first one is to consider constant best-responses over a part of the observation space. By Theorem <ref>, we have sup_β_i^'∈Σ_iũ_i, M(β_i^', β_-i) - sup_b ∈𝒜_i^N_ℬ_iũ_i, M(∑_k=1^N_ℬ_i b_k 1_B_k, β_-i)≤∑_k=1^N_ℬ_i P(o_i ∈ B_k) τ_i, B_k. The second step is to maximize the empirical mean instead of the expectation. By Theorem <ref>, we have with probability 1 - δ over the draw of the datasets {𝒟^β(B_k)B_k ∈ℬ_i, i ∈ [n]}, for all agents i ∈ [n] and segments B_k ∈ℬ_i, sup_b ∈𝒜_i^N_ℬ_iũ_i, M(∑_k=1^N_ℬ_i b_k 1_B_k, β_-i) - ∑_k=1^N_ℬ_i P(o_i ∈ B_k) sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) ≤∑_k=1^N_ℬ_i P(o_i ∈ B_k) ε̃_i, Pdim(N_B_k). The third step to enable a search for a best-response is to consider a finite grid 𝒢_w_i over 𝒜_i, leveraging the concept of dispersion for guarantees. By Lemma <ref>, we have with probability 1 - δ over the draw of the datasets {𝒟^β(B_k)B_k ∈ℬ_i, i ∈ [n]}, for all agents i ∈ [n] and segments B_k ∈ℬ_i, ∑_k=1^N_ℬ_i P(o_i ∈ B_k) ( sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - sup_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) ) ≤∑_k=1^N_ℬ_i P(o_i ∈ B_k) ε̃_i, disp(N_B_k). The fourth approximation step bound the error made by estimation the marginal probabilities P(o_i ∈ B_k) by N_B_k/N. For this, define a_k := max_b_i ∈𝒢_w_i 1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) + min{1, τ_i, B_k + ε̃_i, Pdim(N_B_k) + ε̃_i, disp(N_B_k) } Then, we have a_k ∈ [0, 2] for every 1 ≤ k ≤ N_ℬ_i. By applying Lemma <ref>, we have with probability at least 1 - δ ∑_k=1^N_ℬ_i(P(o_i ∈ B_k) - N_B_k/N) a_k≤√(2/Nlog( 2n/δ)). We combine the above results to give the full statement. We apply Equation <ref> to estimate the ex ante utility under strategy profile β. To estimate the best-response utility for each B_k ∈ℬ_i, we either apply Equation <ref> for a trivial bound of one or combine Equations <ref> and <ref> for a potentially stronger upper bound. Finally, we use Equation <ref> to justify the estimation of the marginal probabilities P(o_i ∈ B_k) for every B_k ∈ℬ_i. In total, each of the four equations holds with probability 1 - δ. By applying a union bound, all four equations hold with probability 1 - 4δ. 
Therefore, by additionally applying Equation <ref>, we have with probability 1- 4δ over the draw of the sets {𝒟^β(B_k)B_k ∈ℬ_i, i ∈ [n]} and 𝒟^β, for mechanism M, agents i ∈ [n], and segments B_k ∈ℬ_i, ℓ̃_i(β_i, β_-i) = sup_β^'_i ∈Σ_iũ_i, M(β^'_i, β_-i) - ũ_i, M(β_i, β_-i) = sup_β^'_i ∈Σ_iũ_i, M(β^'_i, β_-i) - sup_b ∈𝒜_i^N_ℬ_iũ_i, M(∑_k=1^N_ℬ_i b_k 1_B_k, β_-i) + sup_b ∈𝒜_i^N_ℬ_iũ_i, M(∑_k=1^N_ℬ_i b_k 1_B_k, β_-i) - ∑_k=1^N_ℬ_i P(o_i ∈ B_k) sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) + ∑_k=1^N_ℬ_i P(o_i ∈ B_k) ( sup_b_i ∈𝒜_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - sup_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) ) + ∑_k=1^N_ℬ_i P(o_i ∈ B_k) max_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) +1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) - ũ_i, M(β_i, β_-i) ≤∑_k=1^N_ℬ_i P(o_i ∈ B_k) max_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) + √(2/Nlog(2n/δ)) + ∑_k=1^N_ℬ_i P(o_i ∈ B_k) (τ_i, B_k + ε̃_i, Pdim(N_B_k) + ε̃_i, disp(N_B_k) ) = ∑_k=1^N_ℬ_i P(o_i ∈ B_k) a_k - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) + √(2/Nlog(2n/δ)) Equ. <ref>≤∑_k=1^N_ℬ_iN_B_k/Nmax_b_i ∈𝒢_w_i1/N_B_k∑_j=1^N_B_k u_i, M (θ_i^(j), b_i, β_-i(o_-i^(j))) - 1/N∑_l=1^N u_i, M (θ_i^(l), β_i(o_i^(l)), β_-i(o_-i^(l))) + 2 √(2/Nlog(2n/δ)) + ∑_k=1^N_ℬ_iN_B_k/Nmin{1, (τ_i, B_k + ε̃_i, Pdim(N_B_k) + ε̃_i, disp(N_B_k) )}.
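To illustrate how the bound of the previous theorem would be evaluated in practice, the following Python sketch computes its plug-in part: the segment-weighted best constant response on a finite grid minus the empirical utility of the played strategy. The interface (the callables u, segment_of, and beta_i) is hypothetical, and the additive error terms τ_i, B_k, ε̃_i, Pdim, and ε̃_i, disp from the statement are omitted.

```python
import numpy as np

def utility_loss_estimate(u, data, segment_of, grid, beta_i):
    """Plug-in part of agent i's estimated ex ante utility loss (error terms omitted).

    u          : callable u(theta_i, b_i, b_others) -> utility in [-1, 1]
    data       : samples (theta_i, o_i, b_others) generated by play of the profile beta
    segment_of : callable mapping an observation o_i to its segment index k
    grid       : finite set G_w of candidate constant bids
    beta_i     : callable mapping o_i to the bid agent i actually plays
    """
    N = len(data)
    # Empirical ex ante utility of the played strategy beta_i.
    played = np.mean([u(th, beta_i(o), b_oth) for th, o, b_oth in data])

    # Segment-weighted best constant response on the grid.
    per_segment = {}
    for th, o, b_oth in data:
        per_segment.setdefault(segment_of(o), []).append((th, b_oth))
    best = 0.0
    for samples in per_segment.values():
        vals = [np.mean([u(th, b, b_oth) for th, b_oth in samples]) for b in grid]
        best += (len(samples) / N) * max(vals)
    return best - played
```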
Near-Field Signal Processing: Unleashing the Power of Proximity Ahmet M. Elbir, Senior Member, IEEE, Özlem Tuğfe Demir, Member, IEEE, Kumar Vijay Mishra, Senior Member, IEEE, Symeon Chatzinotas, Fellow, IEEE and Martin Haardt, Fellow, IEEE A. M. Elbir is with University of Luxembourg, Luxembourg and King Abdullah University of Science and Technology, Saudi Arabia (ahmetmelbir@ieee.org). Ö. T. Demir is with TOBB University of Economics and Technology, Turkey (ozlemtugfedemir@etu.edu.tr). K. V. M. is with United States DEVCOM Army Research Laboratory, Adelphi, USA (kvm@ieee.org). S. Chatzinotas is with University of Luxembourg, Luxembourg (symeon.chatzinotas@uni.lu). M. Haardt is with Ilmenau University of Technology, Ilmenau, Germany (martin.haardt@tu-ilmenau.de). August 26, 2024 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT After nearly a century of specialized applications in optics, remote sensing, and acoustics, the near-field (NF) electromagnetic propagation zone is experiencing a resurgence in research interest. This renewed attention is fueled by the emergence of promising applications in various fields such as wireless communications, holography, medical imaging, and quantum-inspired systems. Signal processing within NF sensing and wireless communications environments entails addressing issues related to extended scatterers, range-dependent beampatterns, spherical wavefronts, mutual coupling effects, and the presence of both reactive and radiative fields. Recent investigations have focused on these aspects in the context of extremely large arrays and wide bandwidths, giving rise to novel challenges in channel estimation, beamforming, beam training, sensing, and localization. While NF optics has a longstanding history, advancements in NF phase retrieval techniques and their applications have lately garnered significant research attention. Similarly, utilizing NF localization with acoustic arrays represents a contemporary extension of established principles in NF acoustic array signal processing. This article aims to provide an overview of state-of-the-art signal processing techniques within the NF domain, offering a comprehensive perspective on recent advances in diverse applications. § INTRODUCTION Electromagnetic (EM) fields generated by an antenna create distinct spatial regions, each with unique characteristics <cit.>. Closest to the antenna surface is the near-field (NF) region, which is divided into reactive and radiative sub-regions, as illustrated in Fig. <ref>. The reactive NF, immediately adjacent to the antenna, is dominated by fields that rapidly diminish with distance and become negligible beyond a boundary of approximately λ / π, where λ is the operating wavelength. Beyond this boundary lies the radiative NF, itself split into two regions. 
The first part, known as the aperture zone, extends from the reactive NF to approximately 0.62√(A^3/λ), where A is the antenna's maximum linear size. In this zone, the reactive fields are insignificant, the wavefront is quasi-planar, the field strength remains nearly constant with distance, and the field amplitude distribution mirrors that of the antenna aperture. The second part of the radiative NF is the Fresnel region, stretching from 0.62√(A^3/λ) to 2 A^2 / λ. In this region, the radiation pattern begins to form, but its shape depends on the distance from the antenna. This is due to changing phase relationships and varying field amplitude ratios between different antenna elements as the distance increases. As the observation point moves away from the antenna, the field amplitude initially oscillates before steadily decaying. At infinite distance, this attenuation becomes inversely proportional to the distance. Additionally, as the distance increases, the phase and amplitude relationships between the fields from individual antenna elements gradually stabilize, and the angular distribution of the field becomes independent of distance. The outer boundary of the Fresnel zone, typically at 2 A^2 / λ, marks the start of the far-field (FF) region, where the field distribution depends solely on the observation angle, and the field strength diminishes in inverse proportion to the distance. In this region, the EM wave's phase front is spherical, though it appears planar within small angles. The internal parameters of the antenna relate to both reactive and radiative fields, while external parameters are concerned solely with the radiated EM waves. For much of its history, signal processing focused on FF applications, particularly in telecommunications, radar, and broadcasting, where the signals of interest were typically assumed to propagate over long distances with relatively simple wavefront characteristics. However, the unique NF properties had been known for quite some time. Heinrich Hertz was among the first to observe that the field of a radiating dipole decays inversely with the third power of range at very close distances, rather than the inverse linear dependence observed in the FF. As a result, the field close to the dipole is significantly stronger than what one might expect from a simple extrapolation of the FF value. In 1909, Arnold Sommerfeld calculated the impact of NF perturbations on the radiation characteristics of a dipole antenna situated near the ground. NF optics began to take shape in the late 1920s, driven by the need to surpass the diffraction limit in microscopy, enabling the visualization of structures at a nanometer scale <cit.>. This early work laid the groundwork for modern techniques such as NF scanning optical microscopy (NSOM) <cit.>, with more recent applications in Raman spectroscopic imaging <cit.> and computational imaging <cit.>. In the field of acoustics, the 1980s marked the beginning of a focused effort on NF applications, where sound waves interact with objects in close proximity, leading to advances in technologies like NF acoustic holography <cit.>, direction finding <cit.>, underwater 3-D localization <cit.>, NF sonar <cit.>, beamforming <cit.>, and sensor array optimization <cit.>. In ultrasound imaging, beamforming is usually performed using wideband signals in the NF <cit.>.
In radar remote sensing, NF imaging has a rich research heritage dating back to the 1990s, particularly in applications like ground-penetrating radar, through-the-wall imaging, and synthetic aperture interferometry <cit.>. Recent NF radar signal processing applications focus on direction-of-arrival (DoA) estimation <cit.>, localization <cit.>, and correlated/coherent source localization <cit.>. One of the major issues has been to derive the performance bounds, e.g., Cramér-Rao bounds (CRB) for parameter estimation in NF scenarios <cit.>. The consideration of a NF signal model makes the parameter estimation more challenging especially when the received signal includes both NF and FF signals <cit.>. In order to estimate the DoA and the range information of the NF sources, a well-known technique is to employ a multiple signal classification (MUSIC) algorithm over both direction and range dimensions <cit.>. As MUSIC is a subspace-based approach, it fails when the signals are coherent, i.e., fully correlated. In such cases, maximum-likelihood (ML) <cit.> or compressed sensing (CS) <cit.> approaches can be employed. Relying on the high order statistics (HOS) of the array data, fourth order cumulants are also used <cit.>. More recently, the evolution of array signal processing in wireless communications is moving towards the use of small, densely packed sensors to create extremely large aperture arrays (ELAA), which significantly enhance angular resolution and beamforming gain <cit.>. Particularly, with the advent of sixth-generation (6G) networks, there is a notable adoption of ELAAs or surfaces, coupled with the exploitation of higher frequency bands like terahertz (THz) frequencies, shifting the EM diffraction field from the FF region to the NF. The extended array aperture and shorter wavelengths in the NF region, where the receiver is closer to the transmitter than the Fraunhofer distance <cit.>, result in non-planar signal wavefronts. With larger array apertures and frequencies, the NF range can extend tens to hundreds of meters, crucial for communication system design <cit.>. For instance, an antenna array of 1-m aperture operating at 30 GHz yields a NF region of up to 200 meters, which is critical for system design in communication environments <cit.>. In this context, spherical wavefront considerations become paramount in beamforming design, enabling signal focusing at specific 3-D locations rather than traditional FF beamsteering towards specific angles. The ELAA system facilitates not only directional signal reception but also transmitter localization, distinguishing it from conventional FF designs. However, exploiting the NF region presents both opportunities and challenges in communication <cit.> and sensing <cit.> applications. Beamforming in NF leverages depth information to enhance spatial multiplexing, catering to users at varying distances within the same direction. Additionally, high-resolution sensing and localization are imperative in the NF, necessitating innovative communication strategies. Furthermore, in large arrays operating at high bandwidths, the frequency-dependent array response poses challenges to conventional beamforming, necessitating adaptive strategies for optimized beamforming gains <cit.>. Communications channel estimation in NF involves the reconstruction of the channel via estimating the complex channel gain as well as the direction and the range information of the dominant signal paths. 
In contrast to the radar scenario, the received signal includes several non-line-of-sight (NLoS) reflections, which may be in the NF of the receiver <cit.>. This poses challenges to model the received signal for communications, wherein the aim is to improve the communication rate rather than to estimate the location of the user. In beamforming, the aim is to optimize the beamformer weights to maximize the communication rate. By taking into account the spherical-wave model of the received path signals, various techniques have been adopted for beamforming, e.g., CS <cit.> and orthogonal matching pursuit (OMP) <cit.>. Combined with radar, integrated sensing and communications (ISAC) is a new paradigm to effectively use the system resources on a common platform for simultaneously performing radar sensing of targets or users as well as communication with the users <cit.>. In this respect, recent research also includes NF signal processing techniques to provide both sensing and communication functionalities, such as beamforming <cit.> as well as user/target localization <cit.>. The intent of this article is to encapsulate the recent advancements in NF signal processing techniques. Notably, there is a distinct absence of surveys encapsulating signal processing strategies for the NF and acquainting readers with the nuances and challenges of this burgeoning field. Lately, the literature on NF signal processing involves short/extensive surveys <cit.> mainly on wireless communications, while there is a substantial gap for an overview of NF signal processing techniques in radar, acoustics, ultrasound, and optics (Fig. <ref>). This article aims to present a synopsis of the state-of-the-art signal processing techniques in the NF with a focus on array processing and its contemporary applications. § NF SIGNAL MODEL Consider a uniform linear array (ULA) with N antenna elements and inter-element spacing d that receives signals emitted from K sources. We define s_k(t)∈ℂ as the signal emitted from the k-th source in the direction θ_k at time t. Then, the observation model at the n-th antenna element is given by y_n(t)= ∑_k= 1^K s_k(t)e^-j2π f_c τ_n,k + e_n(t), where f_c is the carrier frequency, e_n(t)∈ℂ denotes the additive white Gaussian noise, and the normalized time delay τ_n,k is associated with the k-th source signal's propagation time across the antennas as τ_n,k = r_k/f_c λ(√( 1 + n^2 d^2/r_k^2 - 2 n d sinθ_k/r_k) -1 ), where λ is the signal wavelength and r_k denotes the range of the k-th source. Depending on the propagation distance, the source location may be regarded as lying in the reactive NF, the radiative NF, or the FF of the array: * The reactive NF extends from the array surface to 0.62√(D^3/λ), where D denotes the array aperture, which is defined as D = (N-1)d for a ULA <cit.>. * The radiative NF (i.e., Fresnel region) extends from the reactive NF to the FF, i.e., 0.62√(D^3/λ) < r < 2D^2/λ <cit.>. It is in this region that the time delay τ_n,k can be approximated by using a Taylor series expansion as τ_n,k =-1/2π f_c(ω_kn + κ_k n^2+ O(d^2/r_k^2)), where ω_k = 2π d sinθ_k/λ and κ_k =- π d^2 cos ^2 θ_k/λ r_k. Then, by neglecting O(d^2/r_k^2) in the time delay expression, the observation model in (<ref>) becomes y_n(t) = ∑_k= 1^K s_k(t) e^j(ω_k n + κ_k n^2) + e_n(t). * The FF region starts after d_F = 2D^2/λ, which is called the Fraunhofer distance, and it covers r> 2D^2/λ, wherein the spherical wavefront can be approximated as planar with a maximum spherical-induced phase-shift of π/8 across the antennas <cit.>. Fig.
<ref> shows the mean-squared error (MSE) between the NF and FF signal models for various frequencies and d_F. We can see that this MSE decreases as the propagation distance increases and becomes negligible beyond the Fraunhofer distance d_F. In a compact form, the N× 1 observation vector 𝐲(t)∈ℂ^N can be expressed as 𝐲(t) = 𝐀𝐬(t) + 𝐞(t), where 𝐲(t) = [ y_1(t),⋯, y_N(t)]^, 𝐞(t) = [e_1(t),⋯, e_N(t)]^ and 𝐬(t) = [s_1(t),⋯, s_K(t)]^. Here, 𝐀 = [ 𝐚(r_1,θ_1), ⋯, 𝐚(r_K,θ_K)]∈ℂ^N× K denotes the array response, and 𝐚(r_k,θ_k)∈ℂ^N represents the steering vector corresponding to the k-th source as 𝐚(r_k,θ_k) = [1, e^j(ω_k + κ_k) ,⋯, e^j(ω_k (N-1) + κ_k (N-1)^2) ]^. § NF RADAR The localization of radar signals in the proximity of the antenna array has been studied extensively. This includes the estimation of the DoA angles as well as the ranges of the emitting sources. Practical radar applications involve several challenging scenarios, which require advanced signal processing techniques for accurate parameter estimation. Unlike the FF scenario, the array response is range-dependent, which should be taken into account for source parameter estimation. A range-dependent beampattern is also observed in some far-field applications such as frequency diverse array (FDA) radars <cit.> and quantum Rydberg arrays <cit.>. However, the wavefront is not spherical in these applications. §.§ DoA Estimation and Localization Subspace-based techniques, e.g., the MUSIC algorithm <cit.>, have been widely used for NF DoA estimation and localization. Define 𝐑∈ℂ^N× N as the sample covariance matrix of the array output 𝐲(t) as 𝐑 = 1/T∑_t = 1^T𝐲(t) 𝐲^(t), where T is the number of data snapshots collected from the array. Then, the MUSIC algorithm for NF DoA and range estimation is performed by finding the peak values of the following MUSIC spectra <cit.>, i.e., P(θ,r) = 1/𝐚^(θ,r)𝐔_N𝐔_N^𝐚(θ,r) , where 𝐔_N∈ℂ^N×(N-K) denotes the noise subspace eigenvectors, which can be obtained from the eigenvalue decomposition of 𝐑 as 𝐑 = 𝐔Λ𝐔^, where Λ∈ℂ^K× K includes the eigenvalues of 𝐑 in descending order and 𝐔 = [𝐔_S, 𝐔_N]∈ℂ^N× N is composed of the eigenvectors corresponding to the signal and noise subspaces, respectively. §.§ Mixed NF and FF Source Localization In some practical applications, the signal received by the array is a mixture of FF and NF sources <cit.>. Fig. <ref>(a) shows the array beampattern for a mixture of FF and NF targets. For example, in high frequency (HF) radar, the NF multipaths have strong effects compared to very/ultra high frequency (V/U-HF) bands such that the received signal is composed of a FF source signal as well as its NF reflections <cit.>. Assume that there exist K_F FF and K_N NF emitting sources, for which the array output is given by 𝐲 (t) = ∑_k = 1^K_F𝐚(θ_k)s_k (t) + ∑_k = K_F+1^K𝐚(θ_k,r_k)s_k(t) + 𝐞(t), where K = K_F + K_N and 𝐚(θ_k)∈ℂ^N denotes the FF steering vectors for k ∈{1,⋯, K_F}. In this scenario, the MUSIC algorithm can be employed in a two-stage scheme to sequentially estimate the FF and NF parameters. In <cit.>, two different special cumulant matrices are designed for this purpose. Consider a ULA with N = 2N̅+1 elements and let the 0-th antenna be the phase reference of the array. Then, the fourth-order cumulant of the array outputs can be expressed as cum{y_p(t), y_-p^*(t), y_q(t),y_-q(t) } = ∑_k = 1^K c_s_k e^j(2pω_k - 2qω_k ) , where p,q∈ [-N̅,N̅], and c_s_k = cum{s_k(t), s_k^*(t), s_k^*(t),s_k(t) } is the kurtosis of the k-th signal <cit.>. Define p̅ = p + N̅+1 and q̅ = q + N̅+1.
The (p̅,q̅)-th element of the cumulant matrix 𝐂_1∈ℂ^N× N is [𝐂_1]_p̅,q̅ = cum{y_p̅-N̅-1 (t), y_-p̅+N̅+1^*(t), y_q̅-N̅-1^*(t), y_-q̅+N̅+1(t) } = ∑_k = 1^K c_s_k e^j(2(p̅ - N̅ - 1)ω_k - 2(q̅ -N̅-1 )ω_k ) . We can write 𝐂_1 in a compact form as 𝐂_1 = 𝐁_1𝐂_s𝐁_1^, where the virtual steering matrix and vectors are defined as 𝐂_s = diag{c_s_1,⋯, c_s_K}∈ℂ^K× K, and 𝐁_1 = [𝐛_1(ω_1),⋯𝐛_1(ω_K)], where 𝐛_1(ω_k) = [e^-j2N̅ω_k ,⋯,e^j2N̅ω_k ]^∈ℂ^N for k = 1,⋯, K. By computing the MUSIC spectra based on the noise subspace eigenvectors of 𝐂_1, the DoA information ω_k for k ∈{1,⋯, K_F} can be obtained via P(ω) = 1/𝐛_1^(ω)𝐔_N_1𝐔_N_1^𝐛_1(ω) , where 𝐔_N_1∈ℂ^N× (N-K) denotes the noise subspace eigenvector matrix of 𝐂_1. In order to estimate κ_k, another cumulant matrix 𝐂_2∈ℂ^(4N̅+1)× (4N̅+1) is constructed as 𝐂_2 = [ [ 𝐂_2,1 𝐂_2,2; 𝐂_2,3 𝐂_2,4 ]], which can be written as 𝐂_2 = 𝐁_2𝐂_s𝐁_2^, where 𝐁_2 = [𝐛_2(ω_1,κ_1),⋯, 𝐛_2(ω_K,κ_K)]∈ℂ^(4N̅+1)× K, and 𝐛_2(ω_k,κ_k) = 𝐛_2(ω_k) 𝐛_2(κ_k)∈ℂ^4N̅+1 (See <cit.> for the computation of 𝐛_2(ω_k), 𝐛_2(κ_k) and 𝐂_2,i, i = 1,⋯,4). Then, by substituting the estimated ω_k into 𝐛_2(ω_k,κ_k), the following MUSIC spectra reveals the range-dependent source parameters κ_k for k ∈{K_F + 1,⋯, K_F+K_N}, i.e., P(κ) = 1/𝐛_2^(ω,κ)𝐔_N_2𝐔_N_2^𝐛_2(ω,κ) , where 𝐔_N_2∈ℂ^(4N̅+1)× (4N̅+1-K) denotes the noise subspace eigenvector matrix of 𝐂_2. The aforementioned technique require multiple steps to sequentially estimate the DoA and range parameters of the mixture signals. Instead, more efficient techniques have been introduced in the literature by exploiting the special geometry/structure of the antenna arrays, e.g., subarrayed ULA <cit.> and nested array <cit.>. Apart from these model-based approaches, data-driven techniques have also been devised <cit.>. §.§ Correlated/Coherent Signal Estimation In practice, the source signals are not always uncorrelated because of the received reflections from the objects which can be located in the FF or NF of the antenna array. In the most extreme scenario, the signal are coherent, i.e., fully correlated, such that the covariance matrix of the received signal turns out to be rank-deficient <cit.>. For example, in multipath scenario, the emitted signals from FF or NF sources are scattered from the objects in the vicinity of the antenna array as shown in Fig. <ref>(b). Consider a single FF source signal s_1(t) (i.e., K_F = 1), which is then reflected from K_N = K-1 reflection points in the NF of the array. Thus, the reflected signals are coherent with s_1(t), i.e, s_k(t) = ζ_k s_1(t) for k ∈{2,⋯, K} and ζ_k ∈ℂ. Then, the mixture model in (<ref>) is written for coherent signals as 𝐲_c (t) = 𝐚(θ_1)s_1 (t) + ∑_k = 2^K𝐚(θ_k,r_k) s_1(t) + 𝐞(t). Since the covariance matrix of 𝐲_c(t) is rank-1, the subspace-based methods fail to resolve source angle and ranges. In <cit.>, a calibration-based approach is presented to estimate the angle and range parameters of mixture signals. Specifically, the NF signal components are first treated as disturbance signals to be calibrated, and the FF DoA angle is estimated. Then, the NF parameters are found. By exploiting the coherent signal model, the observation in (<ref>) can be written as 𝐲 (t) = 𝐚(θ_1)s_1 (t) (1 + β_2 + β_3 + ⋯ + β_K) + 𝐞(t), where β_k = s_k(t)/s_1(t) (k ∈{2,…, K}) is a direction-dependent coefficient. Then, the array model for FF is 𝐲 (t)= Γ𝐚(θ_k) s_1(t) + 𝐞(t), where Γ = diag{γ_1, ⋯, γ_N }∈ℂ^N× N is a direction-dependent matrix representing the impact of NF signals as disturbances. 
As a result, the FF DoA angles can be estimated with the knowledge of Γ, which can be designed via a calibration technique. Define 𝐀̃ =[𝐚̃(θ̃_1),⋯, 𝐚̃(θ̃_C) ] ∈ℂ^N× C as the set of measurements collected during the calibration process for C different DoA angles. Assume that the calibration data is collected at the DoA angles Θ̃ = {θ̃_1,⋯, θ̃_C} uniformly in [-π/2, π/2]. Then, the elements of the calibration matrix Γ_c at the c-th calibration angle θ̃_c can be found as γ_n,c = [𝐚̃(θ̃_c )]_n /[𝐚(θ_c)]_n for c = 1,⋯, C and n = 1,⋯, N. Then, the FF DoA angle θ_1 can be estimated from the following MUSIC spectra evaluated at Θ̃, i.e., P(θ_c) = 1/𝐚^(θ_c)𝐔̃_N𝐔̃_N^𝐚(θ_c) , where 𝐔̃_N∈ℂ^N× (N-1) is the noise subspace eigenvector matrix for the covariance of 𝐲(t) in (<ref>). Once the FF signal parameters are obtained, the array output in (<ref>) is transformed to the FF domain to perform angle estimation of the NF signals by employing a near-to-far transformation (NFT) matrix 𝐓∈ℂ^N× N, i.e., 𝐀_FF = 𝐓𝐀_NF, where 𝐀_FF∈ℂ^N× C and 𝐀_NF∈ℂ^N× C denote the collected FF and NF calibration measurements. Then, the transformed array output is given by 𝐲_nf(t) = 𝐓𝐲(t) = 𝐓𝐚(θ_1)s_1(t) + 𝐓𝐞(t) (distortion terms) + ∑_k = 2^K𝐓𝐚(θ_k,r_k) s_1(t) (desired NF signal), where the FF signal behaves like a distortion. The transformed signal in (<ref>) is still rank-deficient. To alleviate this, the forward-backward spatial smoothing (FBSS) technique <cit.> is applied to construct a full-rank covariance matrix. Then, the DoA angles of the desired signals can be obtained via the FF MUSIC algorithm based on the covariance of 𝐲_nf(t). Finally, the estimation of the ranges of the NF signals can be performed via sparsity-based techniques by substituting the estimated FF and NF DoA angles <cit.>. § NF WIRELESS COMMUNICATIONS NF signal processing is an emerging area of research in wireless communications, driven by advancements in mmWave <cit.> and THz <cit.> technologies. For instance, mmWave massive MIMO systems, which involve a very large number of antennas (e.g., >128) and operate at high frequencies (e.g., >30 GHz), can have a Fraunhofer distance extending up to hundreds of meters. In such cases, NF signal processing becomes crucial for performing key communication tasks such as channel estimation, beamforming, and resource allocation. §.§ Channel Estimation Consider K single-antenna users and L effective scatterers between each user and the BS. When all the effective scatterers between the BS and a particular user are located within the radiative NF of the BS, the channel between the BS and user k can be expressed as 𝐡_k= ∑_l=1^Lα_k,l𝐚(θ_k,l,r_k,l), where α_k,l is the complex gain of the l-th path, and θ_k,l and r_k,l are the corresponding angle and distance. To estimate the users' channels, the BS receives pilot sequences sent by the users during the training interval. To facilitate channel estimation and avoid interference, we assume that users are assigned orthogonal pilot sequences. This allows us to focus on a single user's received pilot signal (omitting the user index k), which is given by 𝐲 = √(ρ)𝐀𝐡+𝐧, where 𝐲 is obtained after correlating the received signals with the pilot sequence of the user of interest, and ρ is the pilot signal-to-noise ratio (SNR). The matrix 𝐀∈ℂ^N_ RF× N represents the analog combining matrix, where the BS employs a hybrid beamforming architecture with N_ RF RF chains. Each entry of 𝐀 has a modulus of 1/√(N) <cit.>. The independent additive noise is denoted by 𝐧∼𝒩_ℂ(0,𝐈_N).
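As a minimal illustration of this model, the numpy sketch below builds the Fresnel-zone NF steering vector of (<ref>) and simulates one pilot observation 𝐲 = √(ρ)𝐀𝐡 + 𝐧. The unit-modulus combiner is drawn with random phases, the noise is generated directly at the N_RF combiner outputs, and all parameter values are illustrative assumptions rather than the settings used in the cited works.

```python
import numpy as np

def nf_steering(theta, r, N, d, lam):
    """Fresnel-zone NF steering vector a(theta, r) of an N-element ULA with spacing d."""
    n = np.arange(N)
    omega = 2 * np.pi * d * np.sin(theta) / lam                # linear phase term
    kappa = -np.pi * d ** 2 * np.cos(theta) ** 2 / (lam * r)   # quadratic, range-dependent term
    return np.exp(1j * (omega * n + kappa * n ** 2))

def received_pilot(angles, ranges, gains, N, d, lam, N_rf, rho):
    """One pilot observation y = sqrt(rho) * A h + n with a random unit-modulus combiner A."""
    rng = np.random.default_rng(0)
    h = sum(g * nf_steering(t, r, N, d, lam) for g, t, r in zip(gains, angles, ranges))
    A = np.exp(1j * rng.uniform(0, 2 * np.pi, (N_rf, N))) / np.sqrt(N)
    noise = (rng.standard_normal(N_rf) + 1j * rng.standard_normal(N_rf)) / np.sqrt(2)
    return np.sqrt(rho) * A @ h + noise
```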
To leverage the sparse characteristics of high-frequency channels when L is relatively small, we can utilize OMP-type algorithms with codebooks specifically designed to account for the NF spherical wavefront <cit.>. For good performance of the OMP algorithm, the dictionary matrix should exhibit low column coherence, meaning the magnitude of the maximum pairwise inner product between different columns is minimized. In FF scenarios, the dictionary matrix can be constructed using the columns of a discrete Fourier transform (DFT) matrix that correspond to FF array steering vectors. Conversely, for the NF case, a common approach to dictionary construction is the polar-domain design <cit.>, which aims to achieve low column coherence, given as μ= max_p≠ q|𝐚^( θ_p, r_p) 𝐚( θ_q, r_q)| , where p and q are the column indices of the dictionary. The absolute value of the inner product is |𝐚^( θ_p, r_p) 𝐚( θ_q, r_q)| = | ∑_n=-N̅^N̅ e^-j2π/λ(dn(sinθ_q-sinθ_p)+d^2n^2(cos^2 θ_p /2r_p - cos^2 θ_q /2r_q) )| , where we have considered a ULA with N = 2N̅+1 elements and the 0-th antenna being the phase reference of the array. In polar-domain dictionary design, both angular and distance sampling are performed. For angular sampling, we consider two arbitrary locations where cos^2θ/r=1/ϕ, with ϕ representing a constant corresponding to the distance ring ϕ. On this distance ring, the nulls are obtained by sampling the angles θ = arcsin( nλ/N d), n=0,± 1, ± 2, …, ±⌊Nd/λ⌋. For distance sampling, we focus on two arbitrary locations with the same angle. Using the Fresnel integral approximation of the summation in (<ref>), one can derive <cit.> that the distances should be sampled according to r = 1/sN^2d^2cos^2 θ/2λε^2, s=0,1,2,…, where ε>0 is a parameter that can be adjusted to guarantee a certain column coherence. For instance, to guarantee at most 0.5 of the maximum inner product (i.e., N), ε should be selected larger than 1.6.
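A compact numpy sketch of this polar-domain sampling is given below; the number of distance rings s_max and the default ε = 2 are illustrative choices, and the far-field atom corresponds to s = 0 (r → ∞).

```python
import numpy as np

def polar_dictionary(N, d, lam, eps=2.0, s_max=5):
    """Polar-domain dictionary of NF steering vectors, sampled in angle and distance."""
    def steer(theta, r):
        # Fresnel-zone steering vector; r = np.inf reduces to the FF column.
        n = np.arange(N)
        omega = 2 * np.pi * d * np.sin(theta) / lam
        kappa = 0.0 if np.isinf(r) else -np.pi * d ** 2 * np.cos(theta) ** 2 / (lam * r)
        return np.exp(1j * (omega * n + kappa * n ** 2))

    cols, n_max = [], int(np.floor(N * d / lam))
    for n in range(-n_max, n_max + 1):
        theta = np.arcsin(n * lam / (N * d))
        cols.append(steer(theta, np.inf))              # far-field atom (s = 0)
        if np.cos(theta) < 1e-9:                       # endfire angle: skip finite rings
            continue
        for s in range(1, s_max + 1):
            r = N ** 2 * d ** 2 * np.cos(theta) ** 2 / (2 * lam * eps ** 2 * s)
            cols.append(steer(theta, r))
    return np.stack(cols, axis=1)                      # one atom per column
```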
To improve upon the simplest LS alternative, one option for two-dimensional arrays is to leverage the low-rank characteristics of any plausible spatial correlation matrix, which is independent of specific user locations and only dependent on array geometry. The corresponding reduced-dimension LS (RS-LS) estimator can provide significantly better estimates than LS without requiring specific correlation matrix knowledge <cit.>. §.§ Beamforming vs. Beam-Focusing Suppose a user has a line-of-sight (LoS) channel to the BS. If the user is situated in the FF of the BS and the BS employs maximum ratio transmission (MRT) precoding by adjusting the precoding vector according to the user's FF array steering vector, the resulting beampattern exhibits a finite beamwidth across the angular domain. However, along the distance at the user's angle, the beampattern remains strong. On the other hand, if the user is located in the NF of the BS, there is both a finite beam width across the angular domain and a finite beam depth along the distance domain. This can be understood by analyzing the NF beampattern at a specific angle θ by computing the normalized array gain: G(θ,r)= 1/N | 𝐚^( θ, r_0)𝐚( θ, r) |, where r_0 is the range of the user <cit.>. It can be shown that G(θ,r) can be approximated using Fresnel integrals C(z)=∫_0^zcos(π/2x^2)dx and S(z)=∫_0^zsin(π/2x^2)dx by G(θ,r) ≈|C(z)+jS(z)/z| , where z= √(N^2d^2sin^2θ/2λ|1/r_0-1/r|). Since |C(z)+jS(z)/z| is a decreasing function for z∈[0,1.8] and it reduces to 0.5 when z≈ 1.6, there is a finite beam depth along the r axis, which defines the 3 dB beam depth. After some mathematical manipulations, it can be shown that if r<r_ BD where r_ BD=N^2d^2sin^2θ/2λ z_ 3dB^2, where z_ 3dB=1.6, the 3 dB beam depth (BD) is given as BD_ 3dB = 2r_0^2r_ BD/r_ BD^2-r_0^2. On the other hand, if r_0≥ r_ BD, then the beam depth becomes infinity as in the FF case. One of the significant implications of NF beam focusing is that multiple users can be simultaneously served by the BS, even if they are located along the same angle. This is not possible in the FF scenario, where nulling the interference between users is challenging. Due to the finite beam depth characteristics of beam-focusing, it is feasible to spatially multiplex many users, a concept known as massive spatial multiplexing <cit.>. To exemplify massive spatial multiplexing, we consider the uplink of a multi-user MIMO system with K single-antenna users. Following <cit.>, if one uses MMSE combining scheme, the spectral efficiency of the k-th user is given as SE_k = log_2(1+p_k𝐡_k^(∑_i=1,i≠ k^K p_i 𝐡_i𝐡_i^+σ^2𝐈_N )^-1𝐡_k), where, p_k denotes the transmit power of the k-th user, and σ^2 represents the noise variance. In Fig. <ref>(b), we present the average SE per user for N=512, f_c=30 GHz, p_k=0.2 W, and σ^2 = -87 dBm, which corresponds to a 100 MHz bandwidth <cit.>. K users are positioned at an angle of θ=0, with distances uniformly distributed between 20 and 500 meters. In the figure, the “exact” curve illustrates the SE derived using the precise NF LoS channels, while the “mismatched” curve represents the case where LoS channels are approximated by FF array steering vectors. As shown, approximating user channels as FF significantly reduces SE. This demonstrates that, despite identical path losses in both cases, utilizing NF channels leads to a considerable improvement in SE. 
§.§ Wideband Processing and Beam-Squint In wideband hybrid beamforming, the generated beam direction changes across the subcarriers compared to the central subcarrier. When the bandwidth is relatively large, e.g., in mmWave or THz designs, the beam direction squints especially at the high-end and low-end subcarriers. This phenomenon is called beam-squint <cit.>. For instance, consider two users/targets located in the FF and NF at the broadside direction. In the FF, the angular deviation due to beam-squint is roughly 6^∘ (0.4^∘) for 300 GHz with 30 GHz (60 GHz with 1 GHz) bandwidth, respectively. However, NF beam-squint has approximately (6^∘, 10 m ) angular and range deviation for a target/user located at 20 m distance for 300 GHz with 30 GHz bandwidth. Fig. <ref>(a) shows the array beampattern in the presence of NF beam-squint. Consider the array steering vector in (<ref>), which can be rewritten in terms of the distance between the k-th source and the n-th antenna, r_k^(n), as 𝐚(ϑ_k,r_k) = [e^- j2πd/λr_k^(1),⋯,e^- j2πd/λr_k^(N)]^, where ϑ_k = sinθ_k and r_k^(n)= √(r_k^2 + 2(n-1)^2 d^2 - 2 r_k(n-1) d ϑ_k), which can be approximated <cit.> as r_k^(n)≈ r_k - (n-1) d ϑ_k + (n-1)^2 d^2 Υ_k , where Υ_k = 1- ϑ_k^2/2 r_k. Then, we can rewrite (<ref>) as 𝐚(ϑ_k,r_k) ≈ e^- j2πf_c/c_0r_k𝐚̃(ϑ_k,r_k), where the n-th element of 𝐚̃(ϑ_k,r_k)∈ℂ^N is [𝐚̃(ϑ_k,r_k)]_n = e^j 2πf_c/c_0( (n-1)dϑ_k - (n-1)^2 d^2 Υ_k) . Due to beam-squint, the generated beam toward (ϑ_k,r_k) deviates to the spatial location (ϑ̅_m,k,r̅_m,k) at the m-th subcarrier in the beamspace. Then, the n-th entry of the deviated steering vector in (<ref>) for the spatial location is [𝐚̃(ϑ̅_m,k,r̅_m,k)]_n = e^j 2πf_m/c_0( (n-1)dϑ̅_m,k - (n-1)^2 d^2 Υ̅_m,k) , for which we can finally define the NF beam-squint in terms of DoAs and ranges as Δ(ϑ_k,m) = ϑ̅_m,k - ϑ_k = (η_m -1)ϑ_k, Δ(r_k,m) = r̅_m,k - r_k = (η_m -1)r_k = (η_m -1) 1 - η_m^2 ϑ_k^2/η_m(1 -ϑ_k^2)r_k, where η_m = f_c/f_m is the ratio of the central and m-th subcarrier frequencies. The observation of beam-squint differs when the receive array is in the NF, which is the area where the receive signal wavefront is spherical rather than plane-wave as in FF. In such a scenario, NF beam-squint causes the squint of the generated beam toward distinct locations rather than directions. As a result, handling beam-squint in the NF is even more challenging than that in the FF since the impact of beam-squint and the model mismatch due to the spherical wavefront are intertwined. Thus, NF beam-squint brings up new research challenges in both radar and communications for target/user DoA estimation <cit.>, beamforming <cit.>, waveform design, and resource allocation <cit.>. One direct solution, analogous to the FF methods, is to employ OMP <cit.> or MUSIC <cit.> algorithms with the dictionary of NF array responses for channel/DoA estimation and beamforming. Fig.<ref>(b-c) shows beam-squint-aware system performance for DoA estimation root mean-squared error (RMSE) and channel estimation NMSE to account for both radar and communication scenarios, respectively <cit.>. In both cases, it is clear that beam-squint should be accurately compensated as it leads to substantial performance loss in DoA/channel estimation. In comparison, the model mismatch, i.e., employing only FF model for the received signal, has relatively less impact on the performance. §.§ Integrated Sensing & Communications Until recently, radar sensing and communication systems have been exclusively operated in non-overlapping frequency bands. 
§.§ Integrated Sensing & Communications Until recently, radar sensing and communication systems have been exclusively operated in non-overlapping frequency bands. However, a common demand for ubiquitous connectivity in wireless communications and high-resolution radar sensing has led to a joint design of both systems in a shared spectrum as an effective solution: ISAC paradigms to share spectrum between radar and communications <cit.>. As the combination of both scenarios, NF ISAC design involves simultaneously generating multiple beams toward both users and targets which are located in the NF <cit.>. Fig. <ref> shows the NF ISAC beamforming performance in terms of spectral efficiency (SE) with respect to both SNR as well as the system bandwidth. We can see that the FF assumption leads to serious performance degradation in the hybrid beamforming design. It can also be seen that NF beam-squint should be modeled accurately to maintain satisfactory performance over a large bandwidth in the ISAC system. § ACOUSTICS AND ULTRASOUND In acoustics and ultrasound, the source location is usually in the NF of the sensor (microphone/hydrophone for acoustics and transducer for ultrasound) array, thereby observing the wavefront curvature. Fourier near-field acoustic holography (NAH) has been very popular since the 1980s <cit.>. This technique reconstructs a 3-D sound field from the 2-D hologram scanned above the source surface in planar, cylindrical, and spherical geometries. This classical technique has been investigated for acoustic source localization for decades. Some recent extensions of NAH are discussed below. §.§ Localization in Acoustics Localization of acoustic signals is a major task in air/underwater communications, security surveillance and sonar, wherein an acoustic vector sensor (AVS) is employed as a measuring device <cit.>. An AVS is composed of four components: three orthogonal velocity sensors measuring the Cartesian components and an isotropic pressure sensor measuring the acoustic pressure. Specifically, the 4× 1 NF array manifold for an AVS is given by 𝐯 = [[ u; v; w; p ]] = [[ sinφcosθ; sinφsinθ; cosφ; 1/√(1 + (λ/(2π r))^2 ) e^j arctan(λ/(2π r)) ]], where θ, φ and r represent the azimuth/elevation angles and the distance of the source. The array manifold in (<ref>) can be estimated via standard subspace-based methods such as estimation of signal parameters via rotational invariance techniques (ESPRIT) <cit.>. Then, the 3D source location can be estimated by taking into account the NF propagation path attenuation and the phase difference among the sensors <cit.>. §.§ Beamforming in Acoustics In acoustics, beamforming is an important technology in microphone array signal processing to steer, shape and focus the acoustic waves toward a desired direction. A widely used beamforming technique is minimum variance distortionless response (MVDR), which is also called Capon beamforming <cit.>. Define 𝐲_m(t)∈ℂ^N as the microphone array output, which is multiplied by the beamforming weights w_1,⋯, w_N ∈ℂ. Then, the combined beamformer output is y_o(t) = 𝐰^H𝐲_m(t), where 𝐲_m(t) = 𝐚(θ,r)s_m(t) + e_m denotes the array output for the source signal s_m(t)∈ℂ, and 𝐰 = [w_1,⋯, w_N]^⊤ is the beamforming vector. For a NF acoustic source signal, the MVDR beamformer design problem is min_𝐰 𝐰^H𝐑_m𝐰 subject to 𝐰^H𝐚(θ,r) = 1, where 𝐑_m = 1/T∑_t = 1^T𝐲_m(t)𝐲_m^H(t) is the sample covariance matrix of the array output and 𝐚(θ,r) denotes the desired direction of interest of the beamformer. The optimal solution of (<ref>) is 𝐰_opt = (𝐚^H(θ,r)𝐑_m^-1𝐚(θ,r))^-1𝐑_m^-1𝐚(θ,r), which requires the knowledge of 𝐚(θ,r).
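A minimal NumPy sketch of this closed-form MVDR solution is given below; the diagonal-loading term and all variable names are our own illustrative choices, not part of the cited formulation.

import numpy as np

def mvdr_weights(Y, a):
    # Y: (N, T) array snapshots y_m(t); a: (N,) NF steering vector a(theta, r).
    N, T = Y.shape
    R = Y @ Y.conj().T / T                               # sample covariance R_m
    R = R + 1e-6 * (np.trace(R).real / N) * np.eye(N)    # small diagonal loading for numerical robustness
    Ri_a = np.linalg.solve(R, a)                         # R_m^{-1} a(theta, r)
    return Ri_a / (a.conj() @ Ri_a)                      # w_opt

# Usage: w = mvdr_weights(Y, a); beamformer output y_o(t) = w.conj() @ Y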
To relax this requirement, various beamforming techniques have been introduced for a more robust design <cit.>. Due to the simplicity of the FF model, the transformation or calibration of the NF array output to the FF has been widely adopted. Therefore, in order to deal with the NF effect in acoustics, a common approach is to employ a calibration technique by applying a gain/phase compensation to each array output so that the curved wavefront of the resulting signals appears as a plane wave <cit.>. It is also shown in <cit.> that improved beamforming performance can be achieved by exploiting the a priori knowledge of the distance between the source and the array, especially when the source of interest lies in the NF while the interfering sources are located in the FF. Besides, subspace-based techniques, e.g., MUSIC and ESPRIT, can be employed for joint angle and range estimation <cit.>. §.§ Beamforming in Ultrasound Imaging While beamforming has been widely used for narrowband signals in the FF, medical ultrasound imaging involves wideband signals originating in the NF <cit.>. Unlike classical radar imaging that assumes targets in the FF as point sources, ultrasound NF imaging encounters extended scatterers. Hence, DoA estimation algorithms designed for point sources are inapplicable here. Generally, a spread-source modeling approach may be employed, but the presence of severe background noise precludes widespread usage of this method. Hence, MVDR beamforming is commonly used by casting the problem as spatial spectrum estimation. However, this method is very sensitive to errors in the imaging system as it is SNR-dependent based on the data covariance matrix. In order to overcome these challenges, sidelobe pattern control techniques have been introduced based on array optimization and diagonal loading <cit.>. Alternatively, a full-scale electromagnetic or acoustic model of the scenario, including sensor/receiver characteristics and target environment, may be employed for DoA estimation. Define 𝐲_u(x_p,y_p,z_p)∈ℂ^N as the NF response vector at the field point (x_p,y_p,z_p) in Cartesian coordinates. Suppose that the mainlobe of the array is steered toward the focus point (x_f,y_f,z_f). Then, the combined beamformer output of the ultrasound transducer array for the acoustic point source at (x_p,y_p,z_p) becomes B(x_p,y_p,z_p) = 𝐰^H𝐲_u(x_p,y_p,z_p). The aim is to achieve maximum array gain at the desired location (x_f,y_f,z_f) while minimizing the sidelobes in the beampattern. Thus, the beamforming design problem for sidelobe control is given by min_𝐰,ξ ξ subject to 𝐰^H𝐲_u(x_f,y_f,z_f) = 1, ‖𝐰‖≤ξ, |𝐰^H𝐲_u(x_p,y_p,z_p) |≤δ, where (x_p,y_p,z_p) ∈Θ_u, and δ controls the sidelobe level in the sidelobe region Θ_u.
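This sidelobe-control design can be prototyped as a second-order cone program; the sketch below uses the cvxpy modeling package and assumes the NF response vectors have already been precomputed (all names and values are illustrative assumptions, and cvxpy's complex-variable support is assumed).

import numpy as np
import cvxpy as cp

def sidelobe_control_weights(y_focus, Y_side, delta):
    # y_focus: (N,) NF response at the focus point (x_f, y_f, z_f).
    # Y_side:  (N, P) NF responses sampled over the sidelobe region Theta_u.
    # delta:   maximum allowed sidelobe magnitude.
    N = y_focus.shape[0]
    w = cp.Variable(N, complex=True)
    xi = cp.Variable(nonneg=True)
    constraints = [
        cp.real(y_focus.conj() @ w) == 1,         # distortionless response at the focus point
        cp.imag(y_focus.conj() @ w) == 0,
        cp.norm(w, 2) <= xi,                      # norm (white-noise-gain) constraint
        cp.abs(Y_side.conj().T @ w) <= delta,     # sidelobe-level constraints over Theta_u
    ]
    cp.Problem(cp.Minimize(xi), constraints).solve()
    return w.value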
§ NF OPTICS Until the 1980s, NF optics remained confined to microscopy. However, with the advent of applications such as tomography <cit.>, cryogenic electron microscopy <cit.>, and x-ray coherent diffractive imaging <cit.>, NF techniques became highly diversified. In particular, infrared NF offers remarkable chemical sensitivity and nanoscopic spatial resolution, enabling the quantitative extraction of material properties from 3-D structured objects such as thin films and nanostructures <cit.>. A major advantage of NF imaging is its ability to surpass the resolution limits of conventional FF techniques, which are constrained by diffraction to about half the wavelength of the employed light. This limitation is a significant barrier to examining nanoscale objects, where resolutions in the range of 10-100 nm are required. In contrast, NF propagation enables resolutions that are unattainable with traditional methods, especially in the holographic regime, where small Fresnel numbers produce high contrast. More recently, there has been a focus on NF phase retrieval (PR) for fields spanning holography <cit.>, ptychography <cit.>, and lens-less x-ray coherent diffractive imaging <cit.>. A general NF PR problem has the following setting. The spherical near-field samples are defined in the rotation space, which is the set of all possible rotations of the 3-D Euclidean space ℝ^3. This space is parameterized by three rotation angles: the polarization angles ϕ, χ∈[0, 2π] and the azimuth angle θ∈[0, π]. Wigner D-functions are the orthonormal basis of this space, i.e., <cit.> D_l^k, n(θ, ϕ, χ)=N_l e^-j k ϕd_l^k, n(cosθ) e^-j n χ, where d_l^k, n(cosθ) is the Wigner d-function of band-limit degree 0 ≤ l ≤ B-1 and orders -l ≤ k, n ≤ l, and N_l=√((2 l+1)/(8 π^2)) is the normalization factor. The spherical NF field h(θ, ϕ, χ) using a Wigner D-function expansion and bandwidth B is h(θ, ϕ, χ)=∑_l=0^B-1∑_k=-l^l ∑_n=-l^l α_l^k, nD_l^k, n(θ, ϕ, χ), where {α_l^k, n}_k, n=-l^l are the spherical mode coefficients to be reconstructed. Spherically sampling {θ, ϕ, χ} at m points, the Wigner D expansion (<ref>) is rewritten in the form of the following linear system: 𝐡=𝐀_W α, where 𝐡=[h(θ_1, ϕ_1, χ_1), ⋯, h(θ_m, ϕ_m, χ_m)]^⊤, α∈ℂ^n is constructed from the Wigner D coefficients {α_l^k, n}_k, n=-l^l, and the Wigner D-matrix 𝐀_W is 𝐀_W=([ D_0^0,0(θ_1, ϕ_1, χ_1) … D_B-1^B-1, B-1(θ_1, ϕ_1, χ_1) ; ⋮ ⋮ ⋱ ⋮; D_0^0,0(θ_m, ϕ_m, χ_m) … D_B-1^B-1, B-1(θ_m, ϕ_m, χ_m) ]). This matrix is a collection of m different samples of Wigner D-functions, where for each sample there exist Wigner D-functions related to its degree l and orders |k|,|n|<B. When only phaseless measurements of the NF radiation are available, recovering α from the phaseless data given the knowledge of 𝐀_W is the optimization problem Find α subject to y = |𝐀_Wα|, where the non-convex constraint models the measurement process. This PR problem may also be generalized to mixed measurements from near-, middle-, and far-zone fields <cit.>. Since this is a non-convex inverse problem, the recovery methods to obtain the phase generally exploit a specific optical design or signal structure <cit.>.
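As an illustration of how such a phaseless system can be attacked numerically, the following sketch implements a basic error-reduction (Gerchberg-Saxton-type) iteration for y = |𝐀_W α|; it assumes the sensing matrix has already been assembled (e.g., from sampled Wigner D-functions) and is only one of many possible PR solvers, not the specific method of the cited works.

import numpy as np

def error_reduction_pr(A, y, n_iter=500, seed=0):
    # Alternating-projection phase retrieval: recover alpha from y = |A @ alpha|.
    # A: (m, n) complex sensing matrix, y: (m,) nonnegative magnitudes.
    rng = np.random.default_rng(seed)
    m, n = A.shape
    A_pinv = np.linalg.pinv(A)
    alpha = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # random initialization
    for _ in range(n_iter):
        h = A @ alpha
        h = y * np.exp(1j * np.angle(h))   # impose the measured magnitudes
        alpha = A_pinv @ h                 # project back onto the range of A
    return alpha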
§ SUMMARY AND FUTURE OUTLOOK NF signal processing, traditionally regarded as a specialized niche, has a rich and extensive history that has significantly contributed to the advancement of various scientific fields, including electromagnetics, acoustics, and medical imaging. Classical NF techniques laid the groundwork for understanding and manipulating electromagnetic fields at close ranges, enabling breakthroughs that have shaped modern technology. In recent years, however, there has been a remarkable resurgence of interest in NF signal processing, driven by advancements in emerging technologies such as ELAA and ISAC. Further, enhanced capabilities for conducting measurements across various optical regimes have reinvigorated NF optics, leading to a proliferation of specialized applications. Recent quantum-inspired techniques like those utilizing Rydberg atoms <cit.> have underscored the need to revisit and expand NF signal processing approaches. In summary, the ongoing exploration of NF theory and its applications holds great promise for addressing new and complex problems in signal processing.
http://arxiv.org/abs/2408.12168v1
20240822073100
FIRST: Teach A Reliable Large Language Model Through Efficient Trustworthy Distillation
[ "KaShun Shum", "Minrui Xu", "Jianshu Zhang", "Zixin Chen", "Shizhe Diao", "Hanze Dong", "Jipeng Zhang", "Muhammad Omer Raza" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT Large language models (LLMs) have become increasingly prevalent in our daily lives, leading to an expectation for LLMs to be trustworthy, i.e., both accurate and well-calibrated (the prediction confidence should align with its ground truth correctness likelihood). Nowadays, fine-tuning has become the most popular method for adapting a model to practical usage by significantly increasing accuracy on downstream tasks. Despite the great accuracy it achieves, we found fine-tuning is still far away from satisfactory trustworthiness due to "tuning-induced mis-calibration". In this paper, we delve deeply into why and how mis-calibration exists in fine-tuned models, and how distillation can alleviate the issue. Then we further propose a brand new method named EFfIcient TRustworthy DiSTillation (FIRST), which utilizes a small portion of the teacher's knowledge to obtain a reliable language model in a cost-efficient way. Specifically, we identify the "concentrated knowledge" phenomenon during distillation, which can significantly reduce the computational burden. Then we apply a "trustworthy maximization" process to optimize the utilization of this small portion of concentrated knowledge before transferring it to the student. Experimental results demonstrate the effectiveness of our method, where better accuracy (+2.3%) and less mis-calibration (-10%) are achieved on average across both in-domain and out-of-domain scenarios, indicating better trustworthiness. § INTRODUCTION With the rapid development of large language models (LLMs), many powerful models have been deployed into our daily lives for practical usage to help us make decisions <cit.>. This makes it urgent for us to know to what extent we can trust the outputs of the models. Calibration is one of the most important indicators beyond accuracy, which provides a confidence measure to the model’s predictions <cit.>. In LLMs, confidence is exactly the probability for each generated token. Therefore, a well-calibrated model should align its prediction confidence with its ground-truth correctness likelihood. As an example, recent hallucination detection methods rely on model prediction confidence as a significant indicator of potential hallucination <cit.>. If the model is incapable of giving accurate confidence levels, people may fail to detect hallucinations due to the model's over-confidence, or people may falsely identify hallucinations due to the model's under-confidence. Mis-calibration brings significant challenges for the deployment of LLMs in real-world applications. Currently, there are two methods to obtain a language model for practical usage. First, fine-tuning, which fine-tunes pre-trained LLMs on specific datasets by matching each token entry with a target ground truth token. Although fine-tuning can consistently improve performance on downstream tasks <cit.>, we identify that the model obtained in this way exhibits a nature of "tuning-induced mis-calibration".
Second, distillation-based methods transfer knowledge (e.g., soft labels) from larger LLMs to smaller models <cit.>. Although distillation shows better calibration than fine-tuning as it matches each token entry with a probability distribution instead of a hard label, we find it is still biased because of the mis-calibration nature of teacher models. In addition, distillation faces the challenge of determining the optimal amount of knowledge to transfer. Transferring all the teacher's knowledge leads to high computational costs while transferring too little knowledge results in poor accuracy. Therefore, it is crucial to balance between trustworthiness (accuracy and well-calibration) and efficiency for distillation-based methods. To address the challenge of obtaining a trustworthy model, we propose eFfIcient tRustworthy disTillation (FIRST), aiming to efficiently utilize a relatively small amount of the teacher's knowledge. Specifically, we first identify the "concentrated knowledge" phenomenon, which shows that in the context of LLMs, the probability distribution of generated tokens is not uniform but rather concentrated on a few high-probability tokens. Based on this finding, we propose to use the top-5 tokens as the knowledge to balance the trade-off between storage space and the amount of knowledge transferred, achieving efficient distillation. Afterward, to eliminate the "tuning-induced mis-calibration" of the teacher model, we applied a "trustworthy maximization" to this portion of knowledge, ensuring that it maximizes the enhancement of the student model's accuracy while also guaranteeing its well-calibration. We first validate our method in in-domain scenarios, discovering that the models obtained by FIRST achieve excellent accuracy, even with the use of a relatively small amount of top-5 knowledge and the "trustworthy maximization" process can significantly enhance these models' calibration ability. Furthermore, we test our approach in out-of-domain settings, demonstrating that models obtained by FIRST still exhibit the best trustworthiness and hold generalization ability. This indicates that FIRST enables smaller models to genuinely learn the capability of being trustworthy, rather than being confined to in-domain scenarios. In summary, our key contributions include: * We discover that LLMs exhibit "concentrated knowledge" and "tuning-induced mis-calibration" phenomena, providing insights into obtaining trustworthy models. * We propose FIRST, which maximizes the effectiveness and trustworthiness of a relatively small portion of knowledge transferred from the teacher by "trustworthy maximization" to obtain a trustworthy student model. * Extensive experiments demonstrate that models obtained using FIRST consistently achieve the highest level of trustworthiness across different settings. § RELATED WORK §.§ Trustworthy Models The current evaluation of LLMs predominantly focuses on accuracy, overlooking whether the models truly know the answer or are merely guessing (i.e. trustworthy). Recent works <cit.> have demonstrated that accurate LLMs may not necessarily be "trustworthy" due to a significant calibration gap, so-called mis-calibration. This gap prevents us from trusting the output of the models, and it can further cause LLMs to generate harmful content, especially when subjected to adversarial attacks or jailbreak prompts <cit.>. 
Our work further reveals how mis-calibration exists in different tuning methods and proposes a new trustworthy evaluation metric that covers both accuracy and calibration. To achieve a well-calibrated LLM, recent work shows soft-label distillation shows better calibration ability <cit.>. However, it still suffers from biased labels due to the mis-calibration nature of the fine-tuned teacher model. Our work is an improvement on this line of work by applying "concentrated knowledge" and "trustworthy maximization", leading to better accuracy, efficiency, and trustworthy. §.§ Knowledge Distillation Knowledge Distillation is a form of transfer learning that facilitates the transfer of knowledge from a larger teacher model to a smaller student model. The goal is to reduce the model size while maintaining or even improving performance. Based on whether we can access prediction probability, the existing distillation methods can be categorized into two types: Black-box Distillation and White-box Distillation. Black-box Distillation refers to distillation from models that we are unable to access the weight and prediction logits such as PaLM <cit.>. Recent studies have attempted to distill reasoning ability from GPT <cit.> or some emergent ability such as chain-of-thought <cit.>. However, these methods may still be categorized as the genre of data-augmentation-and-then-fine-tuning approaches. White-box Distillation means the teacher models are either fully open-sourced such as Llama <cit.> or they can return partial probability distribution of the generated tokens, such as code-davinci-002. Instead of the hard token fine-tuning, white-box distillation typically uses more fine-grained signals by matching a distribution between teachers and students <cit.>. Further, in the field of white-box distillation, there are two different ways: online distillation and offline distillation. Onlin distillation <cit.> needs to keep both the teacher model and the student model on the GPU simultaneously during training. On the other hand, offline distillation typically involves obtaining knowledge from the teacher model beforehand. Our work is an extension of white-box offline distillation and focuses on how white-box offline distillation can be improved in terms of trustworthiness by re-calibrating the teacher distribution. § PRELIMINARIES §.§ Concentrated Knowledge In the process of searching for a suitable trade-off between the amount of knowledge to transfer from the teacher model and efficiency, we begin by visualizing the probability distribution for each token entry. As illustrated in Figure <ref>, the blue line with range describes how averaged accumulated probabilities increase when we select more tokens (ranked from highest probability to lowest probability in one entry). The trend clearly shows a few top-position tokens take most of the probability information of a token entry. To be specific, the accumulated probabilities of Top-5 tokens can occupy over 95% probabilities while the remaining 49995 (i.e. a model with vocab. size of 50k) tokens have nearly 0 probability. We named this phenomenon "Concentrated Knowledge" as almost full knowledge of a token entry is stored in its top-k tokens where the remaining tokens have negligible information. §.§ Tuning-Induced Mis-calibration In the context of LLMs, mis-calibration can be divided into two types: over-confidence and under-confidence. 
Over-confidence occurs when the predicted probability of a token is higher than its actual accuracy, while under-confidence takes place when the predicted probability is lower than the actual accuracy. During the fine-tuning process of LLMs, cross-entropy loss is commonly employed, which encourages the models to assign a probability of 1 to one token and 0 to all other tokens based on the ground-truth token. This training nature results in 1) an over-estimation of the ground truth token's probability and 2) an under-estimation of all other tokens' probabilities. As shown in Figure <ref> (a) and (b), it is observed that both fine-tuned LLMs exhibit over-confidence in their top-1 token predictions, while demonstrating under-confidence in the subsequent tokens. This phenomenon, which we call "tuning-induced mis-calibration", highlights the untrustworthy nature of fine-tuned models. Since fine-tuned teacher models suffer from this tuning-induced mis-calibration, if the knowledge from the mis-calibrated teacher models is directly used in traditional distillation-based methods, the student models are very likely to inherit the same mis-calibration nature as depicted in Figure <ref> (c). Motivated by the tuning-induced mis-calibration, our proposed method incorporates a "trustworthy maximization" procedure to re-calibrate the knowledge derived from the teacher models. This enables us to obtain a genuinely trustworthy student model. §.§ Expected Calibration Error To measure calibration in the context of LLMs, we adapt the expected calibration error (ECE) to the free-text generation task by treating the generation of a single token as a classification task. In this adaptation, we restrict the model to generate only one token from a set of candidate choices (e.g., A/B/C/D). For each token, we obtain the highest probability choice using max_i∈ C P(i), where C represents the set of candidates. The probability of the chosen token is taken as the predicted confidence, and we calculate the accuracy by comparing the predicted choice to the ground truth. Then we use a total of M probability intervals as bins and categorize each chosen token into the m-th bin according to the predicted confidence. The ECE can be computed as follows: ECE = ∑_m=1^M |B_m|/n| acc(B_m) - conf(B_m) | Here, M is the number of bins. B_m represents the set of predictions in bin m, |B_m| is the number of prediction instances in bin m, and n is the total number of predictions. acc(B_m) is the average accuracy of predictions in bin m, and conf(B_m) is the average confidence of predictions in bin m. A lower ECE value indicates that the model’s predicted probabilities are more consistent with actual outcomes, meaning the model is better calibrated.
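A minimal implementation of this binned ECE computation (shown here in NumPy; the bin count and variable names are illustrative):

import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    # confidences: (n,) predicted probability of each chosen token.
    # correct:     (n,) 1 if the chosen token matches the ground truth, else 0.
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    n = len(confidences)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for m in range(n_bins):
        in_bin = (confidences > edges[m]) & (confidences <= edges[m + 1])
        if in_bin.any():
            acc_m = correct[in_bin].mean()        # acc(B_m)
            conf_m = confidences[in_bin].mean()   # conf(B_m)
            ece += in_bin.sum() / n * abs(acc_m - conf_m)
    return ece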
§.§ Trustworthy Score In evaluating the trustworthiness of a model, it is essential to consider both high accuracy and effective calibration. Existing benchmarks primarily focus on accuracy, assuming that higher accuracy implies greater trustworthiness. However, our discovery of the widespread issue of "tuning-induced mis-calibration" has highlighted the inadequacy of relying solely on accuracy for a comprehensive evaluation of model reliability. To address this limitation, we propose the Trust Score metric, which quantifies the trustworthiness of a model by considering two key aspects: its ability to provide accurate answers (measured by Acc) and its capacity to align predicted confidences with actual accuracies (measured by ECE). The Trust Score is defined as follows: Trust = Acc - ECE By incorporating the Trust Score, we achieve a more balanced evaluation of trustworthiness, taking into account both accuracy and calibration. § EFFICIENT TRUSTWORTHY DISTILLATION In this section, we introduce eFfIcient tRustworthy disTillation (FIRST), which can be divided into three parts. Firstly, we select Top-5 tokens as knowledge for transfer (Efficient Knowledge Selection) in Sec. <ref>. Then, we adjust the knowledge for trustworthiness to ensure that the subsequent smaller models can maximize its utility (Knowledge Trustworthy Maximization) in Sec. <ref>. Finally, we describe the learning process of the student model (Knowledge Matching) in Sec. <ref>. §.§ Efficient Knowledge Selection Transferring knowledge directly from teachers to students can be computationally costly and storage-intensive. For example, if we consider a vocabulary size of 50,000 tokens, retrieving the complete probability distribution from a dataset of 100,000 samples, with an average length of 2,048, would require a staggering 120 TB of storage, which is impractical. Based on the discovery of "concentrated knowledge" in teacher LLMs, we observe that the majority of knowledge is concentrated within a small portion of top-position tokens, as elaborated in Section <ref>. Therefore, considering that both computation and disk space increase linearly with the number of selected token entries, we argue that it is not necessary to use the complete probability distribution. Instead, by selecting a small number of top-position tokens that contain the majority of the knowledge, we can strike the optimal balance between computational overhead and effectiveness. As depicted in Figure <ref>, the accumulated probability of the Top-5 token entries exceeds 95% while reducing storage from 120 TB to 1.2 GB. §.§ Trustworthy Maximization Once the top-5 tokens and their corresponding probabilities are collected from the teacher model, it is crucial to subject this knowledge to further processing to ensure proper calibration, as teacher models can also suffer from "tuning-induced mis-calibration" due to fine-tuning (as we elaborate in Sec. <ref>). This additional calibration step ensures that the student model improves in both accuracy and trustworthiness. Label Smoothing. We first attempted to address "tuning-induced mis-calibration" by applying a smoothing coefficient, denoted as δ, to mitigate the teacher model's over-confidence in its top-1 token predictions while alleviating under-confidence in other predicted tokens as follows: P_T(i):=P_T(i)-δ if i=1 P_T(i):=P_T(i)+δ/4 if 2≤ i≤ 5 Here, T denotes the teacher model and P_T(i) represents the probability of the i-th top token. While label smoothing can effectively mitigate over-confidence in top-1 token predictions, we have identified significant drawbacks associated with this approach. Firstly, directly applying label smoothing may compromise the preservation of token rankings, particularly between the top-1 and top-2 tokens. This can lead to a decline in model performance in certain cases. Secondly, label smoothing uses a constant probability, disregarding the varying levels of over-confidence or under-confidence in different token entries. Consequently, this can result in a transition from under-confidence to over-confidence among the top 2-5 tokens, making it challenging to achieve a balanced calibration across all of them.
Temperature Scaling. Subsequently, we explore another approach using a temperature scaling technique to re-calibrate the probabilities: P_T(i) = exp(P_T(i)/c)/∑_j exp(P_T(j)/c) This method offers several advantages. First, it allows for a more fine-grained adjustment of the probability distribution by controlling the temperature scaling parameter c, which can be optimized to achieve the lowest ECE values. Second, unlike label smoothing, temperature scaling can effectively balance the confidence levels of both top-1 and subsequent tokens, reducing both over-confidence and under-confidence issues. This results in a more consistent and reliable calibration across all tokens, thereby enhancing the overall trustworthiness of the knowledge. Additionally, we find that selecting the optimal c parameter on the validation set to maximize the knowledge significantly enhances the effectiveness of transferring trustworthy knowledge. The knowledge processed by using this c yields the best results for the student model (detailed in Sec. <ref>). Due to the low cost of selecting c on the validation set, we can tailor different c values for different tasks. This demonstrates temperature scaling's excellent scalability and flexibility. §.§ Knowledge Matching After obtaining the re-calibrated probability data P_T that contains P_T(1),P_T(2),…,P_T(5), we use the same training data to train the student model. Instead of utilizing a language modeling loss on hard labels, the student model's probabilities for the 5 tokens that correspond to the teacher's top-5 are retrieved as P_S, which contains P_S(1), P_S(2),..., P_S(5). Kullback–Leibler divergence is then used to measure the loss between the teacher model and the student model: Loss(y_1:N) = ∑_t=1^N D_KL(P_T || P_S) § EXPERIMENT §.§ Experimental Settings Our experiments focus on both In-Domain and Out-of-Domain settings to ensure generalization abilities. In the In-Domain setting, we utilize CommonsenseQA (CSQA) <cit.> and BoolQ <cit.> for both training and testing. In the Out-of-Domain setting, we fine-tune and distill smaller models on a commonly used instruction-following dataset, Alpaca <cit.>, while testing the models' performance on the unseen tasks CommonsenseQA (CSQA) and OpenBook QA (OBQA) <cit.>. This approach allows us to assess the generalization abilities of the smaller models on unseen tasks, simulating real-world scenarios where these models need to perform on unfamiliar tasks. To ensure the practicality of our approach, we select three widely used model families for our experiments: Llama-1 <cit.>, Llama-2 <cit.>, and OpenLlama <cit.>. In our experiments, we test four types of smaller models obtained through different methods: 1) Fine-tune 7B: Obtained by using fine-tuning with hard labels. 2) Distill 7B: Obtained by distillation methods without "knowledge trustworthy maximization". For a fair comparison with our approach, we also use the top-5 tokens as knowledge in the latter comparison. 3) FIRST 7B w/ TS: Obtained by our proposed method, primarily using temperature scaling (TS, see Eq. <ref>) within the trustworthy maximization phase. 4) Distill 7B w/ LS: We also explore the use of label smoothing (LS, see Eq. <ref>) to show why we ultimately adopt TS over LS in "knowledge trustworthy maximization". In the latter experiments, we adopt the popular smoothing coefficient of 0.1, following previous works <cit.>. Additionally, we also provide the performance of Teacher models. For further implementation details, please refer to the Appendix.
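The validation-set search for the temperature parameter c described above can be sketched as a simple grid search that minimizes ECE; recalibrate_top5 is a hypothetical helper applying the softmax-style rescaling to the stored top-5 probabilities, and ece_fn can be the ECE routine sketched earlier (all names and the grid are illustrative).

import numpy as np

def recalibrate_top5(P_top5, c):
    # P_top5: (n, 5) teacher top-5 probabilities; rescale with temperature c
    # and renormalize over the five retained entries.
    Z = np.exp(P_top5 / c)
    return Z / Z.sum(axis=1, keepdims=True)

def select_temperature(P_top5_val, correct_val, ece_fn, grid=np.arange(0.1, 1.01, 0.02)):
    # Pick the c whose recalibrated top-1 confidence yields the lowest ECE on
    # the validation set; correct_val marks whether the top-1 token is right.
    best_c, best_ece = None, np.inf
    for c in grid:
        conf = recalibrate_top5(P_top5_val, c)[:, 0]
        ece = ece_fn(conf, correct_val)
        if ece < best_ece:
            best_c, best_ece = c, ece
    return best_c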
§.§ Experiment Results Based on the results shown in Table <ref>, we draw the following conclusions: ∙ Fine-tuning leads to catastrophic mis-calibration: We observed that although fine-tuned smaller models achieve relatively high accuracy in both in-domain and out-of-domain settings, their ECE values are notably high, resulting in overall low trust scores and lower reliability. This mis-calibration phenomenon is particularly pronounced in out-of-domain scenarios. For instance, we observe that the ECE of the fine-tuned OpenLlama 7B model in the out-of-domain CSQA task reaches 21.6%, while its accuracy is only 28.3%, indicating that smaller models obtained through fine-tuning tend to be unreliable on tasks they have not been trained on. In real-world scenarios, when smaller models are privately deployed, they will inevitably encounter tasks they have not been trained for. In such cases, there would be a mismatch between their confidence and the true likelihood. They might confidently provide incorrect answers and even continuously emphasize their incorrect responses, thereby misleading users. This clearly does not meet the criteria of a trustworthy model. ∙ Distillation brings bad calibration as well: Furthermore, distilled models without "Knowledge Trustworthy Maximization" show relatively bad calibration ability. For in-domain tasks, the distilled Llama-1 7B and Llama-2 7B have ECE values of 9.4% and 10.9% on CSQA, a mis-calibration level similar to fine-tuned models. And the distilled OpenLlama model shows even worse calibration than the fine-tuned model on BoolQ. As for accuracy, distillation generally improves over standard fine-tuning, but in some settings, such as Llama-1 on CSQA, it also shows worse performance than fine-tuning. This suggests that direct distillation without further processing of the knowledge does not consistently lead to better calibration and performance. ∙ Temperature Scaling outperforms Label Smoothing: Here, we compare the results of different methods used in the "Knowledge Trustworthy Maximization" phase. It is evident that FIRST7B w/ TS performs significantly better than Distill7B w/ LS. In the In-domain setting of BoolQ, the ECE value of Distill7B w/ LS astonishingly reached 19.0%, significantly worse than Distill7B, which does not apply any additional processing to the knowledge. This highlights that LS cannot deliver stable performance across all scenarios. In contrast, FIRST7B w/ TS consistently achieves lower ECE in both in-domain and out-of-domain scenarios. Additionally, it attains better accuracy in most cases, resulting in the highest Trust scores. §.§ Reliability Analysis Reliability Diagrams. To enhance our analysis and facilitate better comparisons, we employ reliability diagrams in addition to metric-based evaluations. As depicted in Figure <ref>, the reliability diagrams are divided into 10 bins based on the model’s confidence. The bars represent the expected accuracy within each bin, and the colors indicate whether the model is under-confident (red) or over-confident (green) within each bin. A perfectly calibrated model would have a straight diagonal line from the bottom left to the top right of such a diagram, indicating that the confidence level is exactly consistent with the expected accuracy. The Fine-tune7B model exhibits catastrophic mis-calibration, primarily characterized by over-confidence in its predictions. This means that the model tends to assign higher confidence levels to its predictions than what is justified by their actual accuracy.
Although the Teacher33B model also suffers from over-confidence, its overall high accuracy results in a much higher trust score. Additionally, the Distill7B model demonstrates slightly improved calibration compared to the Fine-tune7B model. Remarkably, our FIRST7B model outperforms the other models, including the teacher model. It exhibits noticeably less under-confidence and over-confidence, as indicated by the smaller areas of the red and green bars, respectively, and its proximity to the perfect calibration line. §.§ Analysis of Top-5 Selection. Figure <ref> illustrates the disk space usage and cumulative probability coverage for knowledge selection ranging from the top-1 to the top-100 tokens. The blue line represents the average accumulated probabilities, while the shaded area indicates the range of probabilities. The green line shows the corresponding disk space required. The reasons we finally adopted top-5 are as follows: * Efficient Probability Coverage: The figure demonstrates that selecting the top-5 tokens covers over 95% of the total probability. This high coverage ensures that the majority of relevant knowledge is captured, making the distillation process effective. * Minimal Disk Space Usage: The green line indicates the disk space required for storing the selected tokens. By selecting only the top-5 tokens, we significantly reduce the storage requirements compared to selecting more tokens. This efficiency is crucial for offline distillation, where disk space can be a limiting factor. * Balancing Trade-offs: The Top-5 selection strikes a balance between maximizing probability coverage and minimizing disk space usage. This balance ensures that the distilled knowledge is both comprehensive and storage-efficient, enabling practical implementation in various scenarios. * Scalability: Our method exhibits strong scalability. It is naturally extendable to distillation from models such as the GPT-3 series (text-davinci-003), which can only return top-5 token probabilities. This increases the range of LLMs that can be used as teacher models, allowing student models to be effectively trained even in semi-black box scenarios. §.§ Temperature Scaling Parameter Analysis As described in the section on Knowledge Trustworthy Maximization (Sec. <ref>), we employ a temperature scaling parameter to optimize the ECE (Expected Calibration Error) value on the validation set, as illustrated in the left part of Figure <ref>. By employing grid search, we initially partition the range from 0 to 1 into increments of 0.1 and identify the temperature associated with the lowest ECE value, for instance, 0.3. A larger temperature results in all Top-5 tokens converging to the same probabilities, specifically 0.2 when the number of candidate choices is 5. When the temperature is set to 1, the probability of the top-1 token is dramatically compressed, while the probabilities of the other tokens are enlarged accordingly. Conversely, a temperature of 0.1 can even amplify the probabilities of over-confident tokens, leading to even worse calibration. To further refine the search for the optimal temperature, we narrow down the interval and use a smaller step size of 0.02. This allows us to pinpoint the best temperature more precisely. Additionally, we compare the performance of FIRST using the selected optimal temperature with other different temperatures as shown in the right part of Figure <ref>. 
FIRST with the optimal temperature does outperform those with other temperature levels by a large margin, indicating the effectiveness of selecting such an optimal temperature. § CONCLUSION In conclusion, our proposed method, eFfIcient tRustworthy diSTillation (FIRST), effectively enhances both accuracy and calibration in large language models. By applying "trustworthy maximization", FIRST efficiently transfers the minimal yet most effective knowledge from teacher to student models. Experimental results show that FIRST consistently improves trustworthiness across various scenarios, demonstrating its potential to create reliable language models for practical applications. § IMPACT STATEMENT This paper presents work whose goal is to advance the field of Machine Learning. We address the critical issue of catastrophic mis-calibration in current training pipelines (supervised fine-tuning and knowledge distillation) and propose a pipeline to efficiently obtain a more trustworthy model. There are many potential societal consequences of our work, none of which we feel must be specifically highlighted here. § LIMITATIONS It is shown that our efficient trustworthy distillation (FIRST) demonstrates superior calibration ability and performance over direct distillation and standard fine-tuning methods. However, despite these exciting results, there are still some limitations to our current work, as well as potential opportunities for future research. Extend to Large Teacher Model: Due to the resource limitation, our largest teacher model is Llama 33B, which is not very large but already achieves exciting results when distilled to a 7B student model. We expect that employing a larger teacher model such as a 70B one can lead to better calibration ability and performance since a large model learns a better distribution. However, we are unable to explore how very large teachers perform due to resource limitations. Top-K Choice in Offline Distillation: Another limitation of this work is that it does not provide a rigorous study on how many token probabilities per entry are optimal for knowledge distillation in large language models. Currently, we consistently choose the top-5 token probabilities to retrieve because of the reasons stated in <ref>. However, how many token probabilities to use could be an important area for further exploration and development. § DETAILED EXPERIMENTAL SETTING §.§ Implementation Details We train our models on 8 GPUs (RTX A6000 48G) using the Adam optimizer with beta set to [0.9, 0.999], epsilon fixed to 1e-6, and a cosine annealing scheduler with a warm-up ratio of 0.03. For fine-tuning, we utilize the LMFlow <cit.> package to obtain a well fine-tuned model with a standard 3-epoch training, and set the batch size to 32 on each GPU and the learning rate for teacher models to 2e-5. Finally, for distillation, the batch size is set to 32 on each GPU and we train our model for 3 epochs; the last checkpoint is used for evaluation since it has the best performance. In addition, when implementing distillation without re-calibration, we use the following normalization function to normalize the top-5 distribution and prevent the probabilities from being 0: P_T(i)=(P_T(i)+δ)/∑_j(P_T(j)+δ) In our setting, i, j = 1, . . . , 5, representing the top-5 token probabilities, and δ is a small shift amount that prevents the probabilities from being 0 after normalization. The δ is set to 1e-6 to minimize its influence.
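A compact sketch of the offline distillation objective described above, combining the top-5 normalization with the KL matching loss of Sec. <ref> (NumPy, per token entry; names, shapes, and defaults are illustrative assumptions):

import numpy as np

def normalize_top5(P_teacher, delta=1e-6):
    # Shift-and-renormalize the stored top-5 teacher probabilities so that
    # none of them is exactly zero (used when distilling without re-calibration).
    P = P_teacher + delta
    return P / P.sum(axis=-1, keepdims=True)

def top5_kl_loss(P_teacher, P_student, eps=1e-12):
    # Sum over token entries of KL(P_T || P_S), both restricted to the
    # teacher's top-5 tokens; arrays have shape (n_entries, 5).
    P_t = np.clip(P_teacher, eps, 1.0)
    P_s = np.clip(P_student, eps, 1.0)
    return np.sum(P_t * (np.log(P_t) - np.log(P_s)))

# Usage: loss = top5_kl_loss(normalize_top5(P_top5_teacher), P_top5_student)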
§.§ Prompt and Data Format For question-answering tasks, we follow <cit.>'s format and fine-tune the model in a zero-shot setting. For out-of-domain tasks, we directly follow Alpaca's <cit.> setting to obtain the fine-tuned model. In both settings, we make use of the next-token strategy for inference and answer generation, as shown in <ref>. § ADDITIONAL ANALYSIS §.§ Case Study We further conduct three case studies to show that FIRST indeed helps mitigate mis-calibration in real-world question answering. As shown in Table <ref>, we ask the models of three different tuning methods on Alpaca to answer the question: The correct answer is and the wrong answer is . From the output confidence, we can see that standard fine-tuned models and direct distillation give high confidence in the wrong answer, which is far from satisfactory in terms of trustworthiness in real-world settings, especially when additional post-processing procedures are expected to be applied to filter wrong answers by identifying unconfident responses. In comparison, FIRST greatly mitigates this mis-calibration by producing a confidence of around 50%, which indicates that the model is not sure about the generated answer, allowing systems to filter those undesirable answers by a hard confidence threshold. In the third case, we follow the FalseQA setting <cit.>. In this case, all of the answer choices are expected to be wrong and models should output a confidence of 25% in the top-1 token to achieve the minimal ECE value. That's why our FIRST shows the best calibration in this case.
http://arxiv.org/abs/2408.12567v1
20240822173350
FRB 20121102A monitoring: updated periodicity at L-band
[ "C. A. Braga", "M. Cruces", "T. Cassanelli", "M. C. Espinoza-Dupouy", "L. Rodriguez", "L. G. Spitler", "J. Vera-Casanova", "P. Limaye" ]
astro-ph.HE
[ "astro-ph.HE" ]
Department of Astronomy, Universidad de Chile, Camino El Observatorio 1515, Las Condes, Santiago, Chile Centre of Astro-Engineering, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago, Chile Joint ALMA Observatory, Alonso de Córdova 3107, Vitacura, Santiago, Chile European Southern Observatory, Alonso de Córdova 3107, Vitacura, Casilla 19001, Santiago de Chile, Chile Department of Electrical Engineering, Pontificia Universidad Católica de Chile, Av. Vicuña Mackenna 4860, Santiago, Chile Max-Planck-Institut für Radioastronomie, Auf dem Hügel 69, D-53121 Bonn, Germany Department of Electrical Engineering, Universidad de Chile, Av. Tupper 2007, Santiago 8370451, Chile Instituto de Astrofísica, Facultad de Física, Pontificia Universidad Católica de Chile, Casilla 306, Santiago 22, Chile Argelander Institute for Astronomy, 53121 Bonn FRB 20121102A was the first fast radio burst to be observed to repeat. Since then, thousands of bursts have been detected by multiple radio telescopes around the world. Previous work has shown an indication of a cyclic activity level with a periodicity around 160 days. Knowing when the source repeats is essential for planning multi-wavelength monitoring to constrain its emission extent and progenitor source. We report the monitoring of FRB 20121102A using the 100-m Effelsberg radio telescope at L-band and update the periodicity of the cyclic activity level. We use the Lomb-Scargle periodogram on a sample of 272 observing epochs where 41 correspond to detections and 59 to non-detections. Our dataset is composed of the 7 epochs of our monitoring plus publicly available data. We investigate two methods: i) a binary model, describing the observing epochs with 1 if there are detections and with 0 for non-detections, and ii) a normalised rates model, which considers the inferred detection rates. We report no detections in 12.5-hour observations down to a fluence of 0.29 Jy ms. The best period found for the cyclic activity window is 159.3 +- 0.8 days for the binary model and 159.3 +- 0.3 days for the normalised rates model. The activity phase is shown to be 53%. The normalised rates show a clear Gaussian-like behaviour for the activity level, where the number of detections peaks at the centre of the activity window. The periodicity found through both methods is consistent for the L and S-band datasets, implying it is intrinsic to the source. The activity phase in S-band, however, shows an indication of ending before the L-band activity phase, supporting the idea of a chromatic dependence of the activity window. The sample at C-band, however, is not large enough to further confirm this result. FRB 20121102A monitoring: updated periodicity at L-band. C. A. Braga Email: cristobal.braga@ug.uchile.cl 1,2 M. Cruces3,4,5,6,2 T. Cassanelli7 M.C. Espinoza-Dupouy1 L. Rodriguez 8 L. G. Spitler 6 J. Vera-Casanova 2,5 P. Limaye 6,9 Received September XX, XXXX; accepted March XX, XXXX § INTRODUCTION Fast radio bursts (FRBs) are highly energetic (10^36 to 10^41 erg) radio pulses with durations ranging from microseconds to milliseconds that come from extragalactic sources <cit.>.
They are characterised by a frequency-dependent time delay in their arrival times, which is quantified by the dispersion measure (in units of pc cm^-3; DM) and corresponds to the column density of free electrons between the observer and the source. Based on their detection we can categorise them into two types: one-offs, which have been detected only once, and repeaters. Morphological studies of FRBs show that one-offs and repeaters have distinct spectral and temporal structure <cit.>, suggesting that they come from different populations and potentially have different origins. Among the repeaters, only two sources exhibit periodic activity windows: FRB 20180916B <cit.> with a periodicity of 16.35 days and FRB 20121102A with a periodicity of 157 days found in <cit.> and 161.3 days found later in <cit.>. FRB 20180916B has an active window (the phase where the bursts are detected) of ∼31%, which means that for 5 days out of the 16 <cit.> the source is active. For FRB 20121102A the activity window is ∼60%, which means that out of the 161 days the source is active for roughly 97 days <cit.>. The first repeater ever detected, FRB 20121102A <cit.>, was localised by Very Long Baseline Interferometric observations to its host, a low-metallicity, star-forming dwarf galaxy at redshift z = 0.19273(8) <cit.>. One of the explanations for the periodic behavior of FRBs is a formation scenario involving a binary system, where the periodicity arises from the orbital period. Another possibility, pointed out for FRB 20180916B, is that the periodicity is due to the precession of a magnetar <cit.>. Even though we still do not know the origin of these FRBs, their periodic behaviour makes follow-up observations and multi-wavelength campaigns easier, allowing us to characterise their emission extent and therefore constrain their progenitor source. In this paper, we report the follow-up observations of FRB 20121102A at L-band using the 100-m Effelsberg radio telescope and the updated periodicity results. In <ref>, we describe the setup of our observations. In <ref>, we present the techniques used to combine our dataset with all publicly available observations of the source. In <ref>, we present the results of the follow-up and the updated periodicity and activity window. In <ref>, we discuss the implications of periodicity for FRB 20121102A, and in <ref>, we present our final remarks and conclusions. § FOLLOW-UP The dataset consisted of 12.5 h of observations of FRB 20121102A taken in September 2022, February 2023, June 2023, August 2023, and November 2023. These observations were not scheduled based on the activity of the source but instead carried out as blind observations when the telescope was available. The data were taken with the Effelsberg 100-m telescope located in Germany. The observation details are presented in <ref>. The telescope has a system equivalent flux density (SEFD) of 17 Jy and a minimum fluence threshold of 0.15 Jy ms, considering bursts with a 1 ms duration, a 300 MHz bandwidth and a minimum signal-to-noise ratio (S/N) of 7 <cit.>. The telescope was pointed at RA: 05:31:58.600 and Dec: +33:08:49.600 to look for FRB 20121102A events. At the start of each scheduling block we started with a short 23-minute observation on the bright pulsar PSR B0355+54 (pointing to RA: 03:58:53.7000 and Dec: 54:13:13.8000). This observation was conducted to verify that the system setup and conditions for observing FRB 20121102A were optimal.
The observations were carried out with the central beam of the 7-beam receiver (a description of this can be found in ) in combination with Effelsberg's direct digitalization system (EDD) with a 400 MHz bandwidth at a central frequency of 1400 MHz with several time and frequency resolutions, going from a 51.2 μs time resolution and 512 frequency channels to a 25.6 μs time resolution and 1024 frequency channels in the largest dataset. § DATA PROCESSING The data were in <cit.> format with 4 Stokes parameters and were converted to intensity using the routine from <cit.>. Once converted, the files were processed using a custom -based <cit.> pipeline implemented in Python. The pipeline was tested using data collected in each observing epoch from the test pulsar, PSR B0355+54. For each epoch, single pulses from this source were found down to an S/N of 7, showing that the pipeline was working correctly. The pipeline executes the radio frequency interference (RFI) mitigation routine to generate a mask file to later be used in the next stages of processing. The data of FRB 20121102A were incoherently dedispersed <cit.> using a DM range from 550 to 570 with a step of 1, and a downsampling factor of 8 was applied to the timeseries. We searched for candidates down to a minimum S/N of 7. The timeseries for the different DM trials were used to finally run the single pulse search. After obtaining the pulse candidates we removed duplicates by clustering events whose arrival time is within a window defined by the dispersive delay of the DM trials and kept the ones with the highest S/N. The final candidates were plotted with a custom-made waterfall plot and visually inspected to determine whether they were real events, RFI or other artefacts. § RESULTS §.§ Non-detections No bursts were detected in 12.5 h of data of FRB 20121102A at L-band frequencies taken with Effelsberg down to an S/N of 7. This means no detections down to a fluence[In this work we compute fluence based on the band-averaged peak flux density.] of 0.29 Jy ms, considering the average of the pulse width[We refer to the band-averaged pulse width as pulse width.] from the bursts reported in <cit.>. This fluence corresponds to an isotropic energy of 1.31 × 10^38 erg. §.§ Periodicity The observations detailed in <ref> were added to a dataset containing several observation dates gathered from the literature. As there is evidence of chromaticity in the start and duration of the active windows of FRB 20180916B and FRB 20121102A <cit.>, we separated the datasets of observations into L, S, and C-bands, where L-band covers frequencies from 1 GHz to just below 2 GHz, S-band spans from 2 GHz to just below 4 GHz, and C-band ranges from 4 GHz to just below 8 GHz. §.§.§ L-band dataset The dataset is made from the observations reported in <cit.> and the observations reported in this work. We made sure to only include each observation once, as some were reported more than once by different authors. In cases where only the modified Julian dates (MJDs) of detected bursts were reported, we used the MJDs of the bursts and the wait times between them to determine approximate start-of-observation times. We included in the dataset a single burst reported in <cit.> because there is a lack of detections between 2021 and 2023. The full dataset consists of 272 epochs in a total time span of 4031 days with 160 non-detections and 112 detections. We use the Lomb-Scargle periodogram <cit.> technique to find the periodicity of the source and find a periodicity of 159.3 +- 0.8 days, which is in very good agreement with the 161.3 +- 5 days found in <cit.>.
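The Lomb-Scargle search used here can be prototyped in a few lines with astropy's implementation; in the binary model described next, each epoch enters as 0 or 1 (the epoch MJDs and the frequency grid below are illustrative placeholders, not the actual dataset).

import numpy as np
from astropy.timeseries import LombScargle

# mjd: start times of the observing epochs; detected: 1 for epochs with at
# least one burst, 0 otherwise (the "binary model").
mjd = np.array([57364.2, 57492.8, 58386.1, 58402.5])      # illustrative values
detected = np.array([0, 1, 1, 0])

frequency = np.linspace(1.0 / 500.0, 1.0 / 2.0, 100000)   # trial frequencies in 1/day
power = LombScargle(mjd, detected).power(frequency)
best_period = 1.0 / frequency[np.argmax(power)]           # candidate activity period in days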
We try two different methods to model the detections in order to calculate the periodicity. We use a “binary model” and a “normalised rate model”. In the first one, detections are labelled with 1 and non-detections with 0, similarly to <cit.>. We obtained a periodicity of 159.3+-0.8 days for the binary model. We estimate the false alarm probability of the peak by using a bootstrap resampling with 10000 trials. We get a peak significance above 5σ and a smaller uncertainty than <cit.>. <Ref> shows the result of the periodogram. The 1σ_LS uncertainty for the period is calculated using the full width at half maximum (FWHM) of the Gaussian fit to the peak as formulated by <cit.>: σ_LS = FWHM/2√(2/N ×(S/N)^2) where N is the number of points in the dataset. Along with the binary detection model, we used a more sophisticated model in which we instead use the rates of detections for each observation in our dataset. To address the difference in sensitivity across the telescopes, we translate the rates to what a 100-m telescope would infer. We refer to this rate as the normalised rate, and we calculate it as: R_norm = R_ref( F_min/F_ref) ^γ, where R_ref is a reference rate obtained from an observation, F_min is the minimum detectable fluence of the telescope we use to normalise, F_ref is the minimum detectable fluence of the telescope from which we take the reference rate at a given observing frequency, and γ is the power-law value that describes the cumulative energy distribution of the bursts for a given FRB. We consider a gamma value of -1.1 as reported in <cit.>. The fluence of a pulse is given by (radiometer equation): F= S/N×SEFD/√(n_p×Δν)×√(Δ t) where the S/N is taken to be 7, n_p is the number of polarisations taken to be 2, Δ t is the pulse width, considered to be 1 ms, and Δν is the observation bandwidth. The periodogram for this dataset yields a periodicity of 159.3+-0.3 days, which is consistent with the one obtained from the binary model. As shown in <ref>, the significance of this peak is greater than 5σ but lower than the one from the binary model. This is expected because, in comparison with the binary model, the same number of observations are now split into more bins, given the continuous value an inferred rate can have. Other significant peaks to the left of the 159.3 days period correspond to harmonics, and the two most prominent peaks to the right of 159.3 days are a result of the time difference between the two most active reported windows for FRB 20121102A, which are visible in <ref>. §.§.§ S-band dataset The same process described previously was done for a dataset of observations made in S-band. The observations were taken from <cit.>. This dataset is composed of 80 epochs with 64 non-detections and 16 detections for a total time span of 1549 days. The periodicity for this dataset is 159.8+-1.4 days for the binary model and 159.8+-0.7 days for the normalised rates model, in agreement with the previous two obtained in the L-band dataset. The binary model periodogram for this dataset is shown in <ref> and the normalised rate model periodogram in <ref>. §.§.§ C-band dataset The dataset of observations from C-band was taken from <cit.>. This dataset comprises 49 epochs with 43 non-detections and 6 detections for a total time span of 673 days. We find no periodicity for this dataset. This is likely due to the low number of detections present in the data.
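The fluence limit and rate normalisation above translate directly into code; a small sketch is given below (the defaults follow the values quoted in the text, everything else is an illustrative assumption).

import numpy as np

def fluence_limit(sefd_jy, bandwidth_hz, snr=7.0, n_pol=2, width_s=1e-3):
    # Radiometer-equation fluence threshold for a pulse of the given width.
    fluence_jy_s = snr * sefd_jy / np.sqrt(n_pol * bandwidth_hz) * np.sqrt(width_s)
    return fluence_jy_s * 1e3  # Jy s -> Jy ms

def normalised_rate(rate_ref, f_min, f_ref, gamma=-1.1):
    # Translate a rate measured above fluence f_ref into the rate the reference
    # telescope with completeness limit f_min would infer: R_norm = R_ref (F_min/F_ref)^gamma.
    return rate_ref * (f_min / f_ref) ** gamma

# e.g. fluence_limit(17.0, 300e6) is roughly 0.15 Jy ms, matching the threshold quoted above.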
§ DISCUSSION The results obtained for the periodicity of FRB 20121102A are consistent and further improved than the previously reported in <cit.> because our dataset contains more epochs and an extended time span. We tried the method of normalising the rates to test whether it improves the estimation of the periodicity of FRB 20121102A and find a period of 159.3 +- 0.3 days. In comparison with the 159.3 +- 0.8 days obtained from the binary model, both peaks have a significance greater than 5σ but the normalised rate model has lower significance of the binary model. We attribute the difference to the fact that in the binary model the power of the signal is split between detections (1) and non-detections (0), while in the normalised rate model the power is distributed in a wider bin range. This introduces as well more noise to the timeseries. A more accurate model to normalise the rates would consider different γ values in <ref> for each telescope in the dataset. To test the importance of the parameter in our results, aside from the γ=-1.1 obtained in <cit.> we try using γ=-1.37 reported in <cit.> and obtain a consistent periodicity. Therefore because we used Effelsberg as our reference in the normalisation we consider that using γ=-1.1 for all telescopes is optimal. Because of the higher significance of the peak in the binary model, we take this as the new best periodicity for FRB 20121102A. To centre the activity window around a phase of ϕ=0.5, we use the MJD 58356.5 as a reference epoch for phase ϕ = 0 and we get an activity window of 53. This means, that roughly out of the 159 days duration of one cycle, the source is active within a 84 days window at L-band. This is broader than the 31% for FRB 20180916B <cit.>. As it is seen in <ref> the highest peak in the periodogram of the normalised rates is 290 days, followed in second place by the peak at 159 days. A third peak around 380 days is also significant. As marked in the periodogram with the purple arrows, the peak at 290 days superposes with the 2 harmonic of the 159 days peak. These peaks are also seen in the periodogram of the binary model but with significantly lower power. By plotting all the observing epochs against their normalised rate as shown in <ref> we observe that the two most active windows correspond to 2018 with the observations of Arecibo from the “November rain” <cit.> and to 2019 with the FAST telescope observations <cit.>. This introduces an artifact in the calculation of the periodicity. The peaks at 290 and 380 days can be explained by the spacing between these high-activity episodes, with intervals ranging from 270 days for the closest observations to 370 days for the furthest apart. If we remove those epochs, we obtain the periodogram shown by the orange curve in <ref>, where the peaks are significantly lower and the 159.3 days periodicity appears as the strongest signal. To verify that such peaks correspond to artefacts, we fold the epochs using the above mentioned reference epoch. The result from folding at 290 and 380 days is the distribution of epochs with detections all over the phase space, with no clear trend. We only find a dispersion optimised distribution of the detection epochs for the 159 days period. This is shown at the bottom of the periodogram in Fig. <ref>. Observations triggered by known activity of the source may introduce bias in the calculation of the periodicity. In our case, since we have 59 non-detections and 41 detections this bias is minimised. 
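The folding test referred to here amounts to mapping each epoch to a phase relative to the reference epoch MJD 58356.5. A schematic version, with hypothetical detection MJDs rather than the actual burst list, is sketched below.

```python
# Schematic phase-folding check with hypothetical detection epochs.
import numpy as np

def fold(mjd, period_days, mjd_ref=58356.5):
    """Activity phase in [0, 1) relative to the reference epoch."""
    return ((np.asarray(mjd, dtype=float) - mjd_ref) / period_days) % 1.0

detection_mjds = [58230.0, 58385.5, 58545.1, 58705.9]      # hypothetical values
for trial_period in (159.3, 290.0, 380.0):
    print(trial_period, np.round(fold(detection_mjds, trial_period), 2))
# Only a genuine period should gather the detection phases into a narrow
# window; artefact periods scatter them across the full phase range.
```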
From the folding it is clearly seen that the active phase has a Gaussian-like profile, where the peak of the detections are located in the middle of the window. This behaviour has as well been seen for FRB 20180916B <cit.>. Overall, we conclude that although the normalised rate model has greater physical meaning, more data is needed to achieve the same or higher level of significance than the binary model. Regarding the dependence of the activity window with the observing frequency, we obtain for the L and S-band the same period. This means that the phenomena leading to the active phase of the source is intrinsic. However, as is seen in the folding of the S-band dataset shown in Fig. <ref>, the full phase range is not mapped. In particular we lack observations in a considerable part of the phase, towards the start of the activity. We can see that the S-band normalised rates shows as well some indication of a Gaussian-like profile for the activity window but is not possible to infer its duration as it is the case for the L-band dataset. Folding the S-band dataset with the periodicity obtained at L-band we find that the activity window for S-band ends before the L-band activity window, having a phase difference of 0.12. This means that the S-band activity phase finishes 19 days before the L-band active phase. Since we do not have the 00.4 part of the phase completely mapped in S-band we cannot conclude for certain what happens in this part. Regarding the C-band dataset, we do not have enough observations and in particular detections for the Lomb-Scargle periodogram to output a consistent periodicity. More observations will help better understand the shift in phase between frequency bands and the behaviour of the activity phase (Espinoza et al. in prep.). Using our 159.3 days periodicity, 53% of active phase and our reference epoch of 58356.5 we calculate that as of August 2024 the source is at 0.6 phase and the next active phase should begin on 2024-11-12 UTC and end in 2025-02-03 UTC. We emphasise that this does not guarantee detections of the source, but rather indicates when the source is most likely to emit. Our result for the periodicity is consistent with the recent work of <cit.>. Unlike their approach, which considers the MJDs of the bursts independently of the frequency band, we separated the observing epochs based on the frequency at which the observations were conducted. They report a candidate period of 157.1^+5.2_-4.8 and another of 4.605^+0.003_-0.010 days using a phase-folding algorithm. The second candidate period was not detected in their Lomb-Scargle periodogram, nor does it appear in ours. Furthermore, we folded the dataset at the trial period and no trend was seen. Given the chromatic behaviour observed in the active windows of FRB 20180916B <cit.> and hint for dependence in FRB 20121102A presented in our work, we believe a frequency separation to be necessary. Naturally, more observations across the frequency range will help pinpoint the magnitude of the shift and the duration of the activity window. The widely accepted scenarios to explain periodic actively repeating FRBs involve either a precessing magnetar or a neutron star (NS) in a binary system. In the literature, several precession models of young, hot, non-superconducting, and highly active long-period isolated magnetars can explain the periodicity of FRB 20180916B <cit.>. However, most of these models do not consider vortex superfluidity, which can dampen long-period precession <cit.>. 
Therefore making this explanation for the periodicity of FRB 20121102A unlikely. Although there have been attempts to reconcile long-period precession with quantum vortices, this seems to occur only under very specific conditions such as a weakly coupled toroidal magnetic field <cit.>. Alternatively, models of neutron stars in binary systems have been proposed to explain the longer timescale periodicity of FRB 20121102A <cit.>. A neutron star with a black hole companion could induce spin precession on timescales similar to those observed in actively repeating FRBs. However, for this effect to occur, the neutron star would need to be in a close orbit around the black hole, where the typical orbital lifespan of the system is approximately 10 years. As the neutron star moves closer to the black hole, relativistic effects, such as precession and spin-up of the neutron star, should become increasingly pronounced <cit.>. Despite this, FRB 20121102A has been observed for over a decade and its activity windows has not shown the rate of change predicted by these models. Another possible companion for a neutron star is a massive star that continuously emits stellar winds. As the neutron star interacts with the stellar wind of its companion, it can create activity windows that vary with frequency. To explain the observed DM of FRB 20121102A using these models, the FRB would need to be emitted when the neutron star is near the periastron of its orbit. This scenario would result in a variation of four orders of magnitude in the DM within approximately 10% of the orbital phase <cit.>. However, observations indicate that the activity window of FRB 20121102A spans about half of its cyclic period, and no DM variation of that magnitude has been observed. § CONCLUSIONS We reported 7 observing epochs of FRB 20121102A monitoring and find no detections. We combined our observations with publicly available data of the source and divided it per frequency band and found a new best periodicity for the cyclic activity window to be 159.3 +- 0.8 days at L-band. This updated periodicity is more precise and with higher significance than the previously reported in <cit.>, mainly because of the larger and extended dataset. We also report the same periodicity using a normalised rates model that is more precise but has less significance than the one obtained by the binary model. We found a consistent periodicity at S-band and report that this active phase seems to end 19 days before the L-band active phase. No conclusions can be drawn from the C-band observations given the lack the observing epochs and in particular of detections. We strongly encourage the community to report both detections and non-detections of repeating FRBs and the full observation details. Sky and instrumentation statistics such as exposure time, observation duration, and telescope properties are fundamental to further constrain the nature of FRBs. This publication is based on observations with the 100-m telescope of the Max-Planck-Institut für Radioastronomie at Effelsberg. C.B would like to thank the Max Planck Partner group at PUC led by M. C. for funding the internship that led to this work. C. B. would like to thank B. Briceño for his support and feedback during this project. T. C. acknowledges support by the ANID BASAL FB210003 and fondo de astronomía: ANID / Fondo 2023 QUIMAL/ QUIMAL230001. aa
http://arxiv.org/abs/2408.11320v1
20240821040028
Novel stabilization mechanisms for concentrated emulsions with tunable morphology via amphiphilic polymer-grafted nanoparticles
[ "Kojiro Suzuki", "Yusei Kobayashi", "Takashi Yamazaki", "Toshikazu Tsuji", "Noriyoshi Arai" ]
cond-mat.soft
[ "cond-mat.soft" ]
§ ABSTRACT This study explores the stabilization mechanisms of concentrated emulsions with tunable morphology using amphiphilic polymer-grafted nanoparticles (PGNPs). We employ coarse-grained molecular simulations to investigate concentrated oil-in-water emulsions stabilized by partially hydrolyzed poly(vinyl alcohol)-grafted polymethylmethacrylate (PMMA) particles. Two grafting architectures were examined: hydrophilic-hydrophobic (AB-type) diblock PGNPs and reverse BA-type diblock PGNPs. Our findings reveal that AB-type diblock PGNPs tend to aggregate, leading to droplet-droplet coalescence. In contrast, BA-type diblock PGNPs disperse effectively in the water phase, stabilizing robust emulsion through a space-filling mechanism. The study further demonstrates that the stability and morphology of the emulsions can be tuned by varying the number of PGNPs. Our results suggest that BA-type diblock PGNPs are more effective in stabilizing concentrated emulsions, offering insights for the design of novel emulsifiers in industrial applications. § INTRODUCTION Concentrated emulsions with a dispersed phase of >30% are referred to as medium- or high-internal phase emulsions. Owing to their high volume fractions, they exhibit unique rheological and thermal properties that are not observed in conventional emulsions. This renders them as good candidates for potential applications in food, cosmetics, coatings, paints, pharmaceuticals, and porous material templates<cit.>. Large amounts of surfactants or amphiphilic polymers are frequently employed to prevent the coalescence of concentrated emulsions. However, careful selection is required because several of them exhibit cytotoxicity. Concentrated emulsions stabilized by solid particles have recently attracted attention as a promising alternative because of their reduced toxicity and high attachment energy compared to conventional surfactants or polymers. In particular, the adsorption of solid particles at the oil/water interface acts as a physical barrier that impedes Ostwald ripening. These are also known as Pickering emulsions<cit.>. The stability of Pickering emulsions depends on the wettability of the solid particles<cit.> and the interfacial assembly of their colloidal particles<cit.>. Surface modification of particles is a versatile strategy for controlling the wettability and self-assembled structure at the liquid-liquid interface. Recent techniques have facilitated the synthesis of various polymers, resulting in various polymer-grafted nanoparticles (PGNPs) via chemical bonding or physical adsorption of polymers onto NP surfaces<cit.>. A recent study showed that the partially-hydrolyzed poly (vinyl alcohol) (PVA) with an appropriate adjustment of the degree of saponification can be used as an effective emulsifier by combining experiments and molecular simulations<cit.>. Another study successfully prepared PVA-modified polymethylmethacrylate (PMMA) particles and evaluated the formulation of concentrated oil-in-water (O/W) Pickering emulsions stabilized by PMMA particles with 88 mol% saponified PVA<cit.>. 
Because PVA is a water-soluble polymer with excellent biocompatibility and chemical properties and PMMA is a biocompatible non-stimuli-responsive polymer, concentrated O/W Pickering emulsions stabilized by PMMA–PVA have great potential applications in cosmetics, healthcare, emulsion-templated scaffolds, and biological applications such as tissue engineering.<cit.> However, the effects of the grafting architecture of PVA molecules on the interfacial assembly and stability of concentrated emulsions at the molecular level remain relatively unknown. This study investigated the effects of the grafting architecture of partially hydrolyzed PVA-grafted PMMA NPs on the stability of concentrated O/W emulsions via coarse-grained molecular simulations. Two types of grafting architectures: (i) hydrophilic-hydrophobic (AB-type) diblock PGNPs (hydroxy groups as inner blocks and acetyl groups as outer blocks) and (ii) BA-type diblock PGNPs (opposite to the AB-type PGNPs) were considered. We discovered two different stabilization mechanisms that depend on the grafting architecture. In contrast to the Pickering stabilization by AB-type PGNPs, we observed that stable emulsion droplets were maintained although the BA-type PGNPs were not adsorbed on the droplets but were dispersed in the water phase. Furthermore, we found that BA-type PGNPs could significantly stabilize concentrated O/W emulsions over a wide range of volume fractions compared to AB-type PGNPs. Consequently, we demonstrated a novel pathway against droplet coalescence and coarsening and concluded that the formation of space-filling BA-type PGNPs between the emulsion droplets is the predominant stabilization mechanism of the concentrated emulsions. § METHODS AND MODEL §.§ DPD method We employed the dissipative particle dynamics method<cit.> which can simulate millisecond timescales and micrometer length scales by tracking the motion of coarse-grained particles (composed of a group of atoms or molecules). The DPD is based on Newton's equation of motion. Each coarse-grained particle (bead) i in the system is subjected to three types of intermolecular forces: conservative, dissipative, and random. The equation of motion is as follows: m_i d v_i/dt = f_i = ∑_j ≠ iF_ij^C + ∑_j ≠ iF_ij^D + ∑_j ≠ iF_ij^R , where m is the mass, v is the velocity, F^C is the conservative force, F^D is the dissipative force, and F^R is the pairwise random force. The sum of the forces acting on all beads between particles i and j is calculated. The conservative force is softly repulsive and is given by F_ij^C = -a_ij( 1-| r_ij|r_c) n_ij, | r_ij| ≤ r_c 0, | r_ij| > r_c , where r_ij = r_j - r_i and n_ij = r_ij / | r_ij|. a_ij describes the magnitude of the repulsive force between beads i and j, and r_c is the cutoff distance. The dissipative force F_ij^D and random force F_ij^R can be expressed as F_ij^D = - γω^D( | r_ij| ) (n_ij·v_ij) n_ij, | r_ij| ≤ r_c 0, | r_ij| > r_c and F_ij^R = σω^R( | r_ij| ) ζ_ijΔ t^-1/2n_ij, | r_ij| ≤ r_c 0, | r_ij| > r_c where v_ij = v_j - v_i, σ is the noise parameter, γ is the friction parameter, and ζ_ij is a random number based on a Gaussian distribution. Here, ω^R and ω^D are r-dependent weight functions expressed as ω^D( r ) = [ ω^R( r ) ]^2 = [1 - | r_ij|r_ c]^2, r_ij≤ r_c 0, r_ij > r_c . Temperature T is controlled by the balance between F_ij^D and F_ij^R. The values of σ and γ are related to each other by the fluctuation-dissipation theorem in the following equation: σ ^2 = 2 γ k_B T, where k_B is the Boltzmann constant. 
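For concreteness, the pairwise DPD forces defined above can be evaluated as in the short stand-alone sketch below. It is illustrative only, not the HOOMD-blue kernels used for the production runs, and the example coordinates and velocities are arbitrary; the default γ, σ and dt values follow the simulation conditions quoted later.

```python
# Illustrative evaluation of the DPD pair forces acting on bead i due to bead j.
# Stand-alone sketch only; the production simulations used HOOMD-blue.
import numpy as np

def dpd_pair_forces(r_i, r_j, v_i, v_j, a_ij,
                    gamma=4.5, sigma=3.0, dt=0.01, r_c=1.0,
                    rng=np.random.default_rng()):
    r_ij = r_j - r_i
    dist = np.linalg.norm(r_ij)
    if dist > r_c or dist == 0.0:
        return np.zeros(3), np.zeros(3), np.zeros(3)
    n_ij = r_ij / dist
    w_r = 1.0 - dist / r_c            # omega^R(r)
    w_d = w_r ** 2                    # omega^D(r) = [omega^R(r)]^2
    v_ij = v_j - v_i
    f_c = -a_ij * w_r * n_ij                                        # conservative
    f_d = -gamma * w_d * np.dot(n_ij, v_ij) * n_ij                  # dissipative
    f_r = sigma * w_r * rng.standard_normal() / np.sqrt(dt) * n_ij  # random
    return f_c, f_d, f_r

f_c, f_d, f_r = dpd_pair_forces(np.zeros(3), np.array([0.6, 0.0, 0.0]),
                                np.zeros(3), np.array([0.1, 0.0, 0.0]),
                                a_ij=18.75)
print(f_c, f_d, f_r)
```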
Reduced units are often used in DPD simulations. Herein, the unit of length is the cutoff distance, r_c, the unit of mass is the bead mass, m, and the unit of energy is k_BT. Generally, temperature is expressed in the energy dimension (k_BT). §.§ Models and conditions The coarse-grained PVA-grafted PMMA NP models were basically constructed based on previous studies<cit.>. Each core of PGNP comprises 162 DPD beads and its radius, R_ NP, is set to 2.0 r_ c. The vertex beads of the PMMA NP (denoted by P in Fig. <ref>(a)) are placed on the vertices of a regular icosahedron and connected to their nearest neighbors and the diametrically opposite bead through a harmonic potential U_ b: U_ b(r_ij) = k/2(r_ij-r_0)^2, where k is the spring constant, r_ij is the distance between beads i and j, and r_0 is the desired distance between two bonded vertex beads. We set k=5000 k_ BT/r_ c^2<cit.> to maintain a (nearly) rigid NP shape with Boltzmann constant k_ B and temperature T. The molecular structures and coarse-grained models used in this study are shown in Figs.  <ref>(a-d). The PVA model comprised three types of beads: the main chain (vinyl group), hydrophilic side chain (hydroxy group), and hydrophobic side chain (acetyl group) (denoted as M [yellow region], I [blue region], and O [red region], respectively, in Fig. <ref>(b)). A previous study showed that stable, concentrated emulsions were obtained with a smaller amount of PVA when the degree of saponification was tuned to a suitable value (80%). Thus, the degree of saponification f=a/(a+b) was fixed at f=0.8, where a and b are the number of hydrophilic and hydrophobic groups, respectively. As shown in Fig. <ref>(c) and (d), decamethylcyclopentasiloxane (DMCPS) and solvent (water) molecules are represented by a single bead and denoted by O and W, respectively. The nearest-neighbor particles in the PVA and DMCPS molecules are also connected through U_ b with k=100 k_ BT/r_ c^2 and equilibrium bond length r_0=0.5 r_ c. In addition, a bending potential was included between two adjacent bonds in DMCPS with k_ angle=12 k_ B/ rad^2 and θ_0=108^∘. The effect of the grafting architectures was investigated by considering two types of PGNP: (i) hydrophilic-hydrophobic (AB-type) diblock PGNPs (hydrophilic monomers as inner blocks and hydrophobic monomers as outer blocks) and (ii) BA-type diblock PGNPs (opposite to AB-type PGNPs). To obtain visually interpretable representations, schematics of the two different PGNPs are shown in Fig. <ref>(e). The graft density was set as σ_ g= 3.2; therefore, the number of grafted chains was set as 162. The parameters describing the interaction between beads of the same type, a_ii, are related to the density of the beads in the system, ρ, and the degree of coarse graining based on liquid compressibility<cit.>. In this study, a_ii was set to 18.75 k_ BT using Eq. <ref>: a_ii = 75 k_ BT/ρ. The parameters describing the interaction between the different types of beads a_ij are defined as follows: a_ij = a_ii + 3.268V_ seg/RT(δ_i - δ_j)^2 where V_ seg is the average molar volume of beads i and j, R is the gas constant, and δ is the bead solubility parameter. The interaction parameters between any two DPD beads were inspired by recent studies<cit.> and were estimated using J-OCTA simulation software<cit.> and the Fragment Molecular Orbital (FMO)-based χ-parameter Evaluation Workflow System (FCEWS) package<cit.>. The obtained values are presented in Table <ref>. The complete details of this procedure are provided in Refs. <cit.>. 
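As an illustration of how the a_ij relation is applied, the sketch below evaluates it for one bead pair. The segment volume and solubility parameters are placeholders (the actual values were obtained with J-OCTA and FCEWS and are listed in the table), and we assume δ in (J/cm³)^1/2, V_seg in cm³/mol and T = 298 K so that the correction term is dimensionless.

```python
# Hedged sketch of the a_ij mapping; delta and V_seg values are placeholders,
# not the J-OCTA/FCEWS-derived parameters listed in the table.
GAS_CONSTANT = 8.314   # J / (mol K)

def repulsion_parameter(a_ii, v_seg, delta_i, delta_j, temperature=298.0):
    """a_ij = a_ii + 3.268 * V_seg / (R*T) * (delta_i - delta_j)^2.

    Units assumed: V_seg in cm^3/mol, delta in (J/cm^3)^0.5, so the
    correction term is dimensionless (i.e. in k_B*T units like a_ii).
    """
    return a_ii + 3.268 * v_seg / (GAS_CONSTANT * temperature) * (delta_i - delta_j) ** 2

# Example with made-up inputs:
print(repulsion_parameter(a_ii=18.75, v_seg=90.0, delta_i=25.0, delta_j=18.0))
```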
Based on a previous study<cit.>, four oil droplets, each with a face-centered cubic structure, were immersed in the water phase in the initial configuration. Several PGNPs were randomly placed at the oil-water interface. We varied the number of PGNPs per oil droplet as N_ NP=30-60, which resulted in 120-240 PGNPs in the system. The ratio of the number of beads in the oil phase to that in the water phase was set to 6:4 (wt.%); thus, the numbers of beads in the oil and water phases were 64,800 and 43,200, respectively. The length of the edge of the cubic simulation box was adjusted to achieve ρ=4 r_ c^-3 and the radius of the oil droplets was ∼ 10 r_ c. Periodic boundary conditions were applied to all the three dimensions. The temperature was set as 1.0 k_ BT. The noise parameter σ was set to 3.0, the friction parameter γ was set to 4.5, and the time step dt was 0.01. All simulations were performed using the HOOMD-Blue software package (version 2.9.7)<cit.>. § RESULTS AND DISCUSSION §.§ Mechanisms of stabilization We began by comparing the emulsified states of the O/W concentrated emulsions for two different grafting architectures. Figures <ref>(a-d) show representative snapshots of the AB- and BA-type PGNPs with N_ NP=50. In Figs. <ref>(b) and (d), only the oil beads are shown, and each bead in the snapshots is colored based on the group of four oil droplets in the initial configurations. Significant differences were observed between the emulsified states of the two types of PGNPs. The AB-type PGNPs were adsorbed onto the droplets because of the interaction between the outer hydrophobic block of the grafted PVA and the oil droplets (Fig. <ref>(a)). Nevertheless, partial droplet-droplet coalescence occurred, and the DMCPS particles mixed with those of the other oil droplets (Fig. <ref>(b)). This result is somewhat surprising because the adsorption of PGNPs at the oil-water interface is crucial for avoiding the coalescence of emulsion droplets. However, in the case of the O/W emulsion, the outer hydrophobic blocks of the PGNPs caused not only adsorption onto the oil-water interface but also aggregation of the PGNPs (magnified view of Fig. <ref>(a)). For quantitative analysis, we provide the radial distribution function g(r) between the centers of mass of the NPs in Fig. <ref>(e). We observed a slight increase in the proportion of the first, second, and third peaks in the g(r) curve between the AB-type PGNPs before that in the g(r) curve between the BA-type PGNPs. Thus, a tighter aggregation was observed between AB-type PGNPs and BA-type PGNPs. This also explains why the shape of the emulsion droplets covered by the AB-type PGNPs was deformed from a sphere, compared to the case of the BA-type PGNPs (cf. Fig. <ref>(a) and (c)). Thus, for the O/W emulsion, the outer hydrophobic block of grafted PVA molecules became the “sticky” points for the adhesion of both emulsion droplets and each PGNP. This resulted in droplet-droplet coalescence even for effective emulsifiers in reducing interfacial tension<cit.>. Interestingly, in contrast to the AB-type PGNPs, stable emulsion droplets were maintained although the BA-type PGNPs were not adsorbed on the droplets and were rather dispersed in the water phase, as shown in Fig. <ref>(c) and (d). For a more quantitative explanation, we analyzed the radial density distributions from the center of mass of the oil droplets (Fig. <ref>(f)). We confirmed (nearly) zero density (ρ≈ 0 r_ c^-3) at r≳ 20 r_ c, indicating that stable emulsion droplets were maintained. 
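The radial density analysis described here can be reproduced schematically as follows; the bead coordinates in the snippet are random placeholders and periodic-image handling is omitted for brevity.

```python
# Schematic radial density profile around a droplet's center of mass.
# Placeholder coordinates; minimum-image/periodic corrections omitted.
import numpy as np

def radial_density(positions, center, r_max=20.0, n_bins=40):
    """Bead number density in spherical shells of width r_max/n_bins."""
    r = np.linalg.norm(positions - center, axis=1)
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts, _ = np.histogram(r, bins=edges)
    shell_volumes = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, counts / shell_volumes

rng = np.random.default_rng(1)
oil_beads = rng.normal(scale=5.0, size=(5000, 3))           # toy "droplet"
r_mid, rho = radial_density(oil_beads, oil_beads.mean(axis=0))
print(rho[0], rho[-1])    # density should fall towards zero at large r
```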
For the molecular distributions around the oil droplets, the density peak of water appeared before the peaks of other species. This indicated the existence of a mechanism governing effective emulsion stabilization that differed from the adsorption of solid particles at the droplet surface (Pickering stabilization). To understand the mechanisms underlying these results, we considered molecular diffusion, which is an important factor in determining emulsion stability. It might be possible to observe differences in the diffusion of the molecules between two different emulsion stabilizations, that is, Pickering stabilization by AB-type PGNPs and space-filling stabilization by BA-type PGNPs (cf. Fig. <ref>(a) and (c)). We computed the mean squared displacement of the water and NP centers of mass, Δ R^2(t) = [𝐑_i(t_0+t) - 𝐑_i(t_0)]^2, as a function of time; the data are shown in Fig. <ref>(a) and (b). Regarding the diffusion behavior of water, the slope for the AB-type PGNPs was smaller than that for the BA-type PGNPs (Fig. <ref>(a)). This is primarily because this difference can be attributed to the ability of water particles in the system with the BA-type PGNPs to effectively disperse between the oil droplets. The BA-type PGNPs did not aggregate with each other; rather, their outer hydrophilic blocks preferred to be in contact with water, resulting in a uniform distribution of water between the oil droplets. In contrast, the AB-type PGNPs formed self-assembled aggregates owing to the hydrophobic effect of the outer blocks. In addition, the aggregates exposed to water did not prefer to be in contact with water, thus preventing the uniform distribution of water between the oil droplets. Compared to the well-dispersed distribution, the diffusivity of the localized water decreases because of the formation of aggregates of PGNPs. We examined the diffusion behavior of PGNPs, as shown in Fig. <ref>(b). The opposite trend was observed for water; thus, the slope in the case of the AB-type PGNPs was greater than that for the BA-type PGNPs. This result indicates that the completely space-filling PGNPs in the interdroplet spaces between the emulsion droplets effectively act as a barrier to droplet-droplet coalescence. Thus, we demonstrated a novel pathway against droplet coalescence and coarsening and concluded that the formation of space-filling BA-type PGNPs between emulsion droplets is the predominant stabilization mechanism of concentrated emulsions. §.§ Tuning morphology and emulsion stability We systematically varied the number of PGNPs, N_ NP, to further investigate the morphology and stability of the O/W concentrated emulsions using the two different diblock PGNPs. Figures <ref>(a-h) show the N_ NP dependence of the emulsified state for different types of PGNPs. We observed three characteristic states: droplet coalescence (circles), penetration of PGNPs into oil droplets (diamonds), and stable droplets by space-filling PGNPs stabilizing (triangles). For N_ NP≤ 30, no stable concentrated O/W emulsion was observed, and droplet-to-droplet coalescence occurred for both types of PGNPs (Fig. <ref>(a) and (e)), reflecting insufficient surface coverage. When N_ NP was increased further, a difference in the morphology and stability of the emulsions was observed between the AB and BA-type PGNPs. For 40 ≤ N_ NP≤ 50, droplet-to-droplet coalescence still occurred in the system containing the AB-type PGNPs (Fig. 
<ref>(b) and (c)), whereas stable concentrated O/W emulsions were formed only in the system containing the BA-type PGNPs (Fig. <ref>(f) and (g)). These results indicate that the BA-type PGNPs, which enable space-filling stabilization, were more effective emulsifiers than the AB-type PGNPs. At the highest investigated number of PGNPs (N_ NP=60), the emulsified state was maintained for BA-type PGNPs, as shown in Fig. <ref>(h). The fact that BA-type PGNPs can significantly stabilize concentrated O/W emulsions over a wide range of N_ NP values supports the finding that BA-type PGNPs are more effective than AB-type PGNPs. In contrast, even at high N_ NP for the AB-type PGNPs, no stable concentrated O/W emulsion was observed; rather, the penetration of PGNPs into the oil droplets was observed. The PGNPs were no longer adsorbed onto the droplets; therefore, Pickering stabilization can only be realized over a narrow range of N_ NP. § CONCLUSIONS This study performed coarse-grained molecular simulations to investigate the effects of the grafting architecture of partially hydrolyzed PVA-grafted PMMA NPs on the emulsion stability of concentrated oil-in-water (O/W) emulsions. Two different stabilization mechanisms with tunable morphology were identified depending on the grafting architectures. When the hydroxy group was the inner block and acetyl group was the outer block, the PGNPs were adsorbed onto the droplet. This facilitated Pickering stabilization owing to the interaction between the outer hydrophobic block of the grafted PVA and the oil droplets. However, the outer hydrophobic block of grafted PVA molecules became the “sticky” points for the adhesion of both emulsion droplets and each PGNP, resulting in droplet-droplet coalescence. When the acetyl group was the inner block and hydroxy group was the outer block, the PGNPs were not adsorbed on the droplets; rather, they were dispersed in the water phase. Nevertheless, stable emulsion droplets were maintained over a wide range of volume fractions. Further, we demonstrated a novel pathway against droplet coalescence and coarsening, which differed from the adsorption of solid particles on the droplet surface. Thus, the formation of space-filling PGNPs between the emulsion droplets was the predominant stabilization mechanism for the concentrated emulsions. Our findings indicate that careful tuning of the grafting architecture is necessary to control the stability of concentrated O/W emulsions using amphiphilic diblock PVA-grafted PMMA NPs. Our simulations offer a theoretical guide for controlling the morphologies and stabilities of concentrated O/W emulsions via different grafting architectures, paving the way for novel emulsification strategies that may find applications in food, cosmetics, coatings, paints, pharmaceuticals, and porous material templates. § NOTES The authors declare no competing financial interest Y.K. acknowledges JSPS KAKENHI Grant No. JP24K17216 and the support of KIT Grants-in-Aid for Early-Career Scientists. < g r a p h i c s >
http://arxiv.org/abs/2408.11694v1
20240821151422
Common fixed point theorems for a commutative family of nonexpansive mappings in complete random normed modules
[ "Xiaohuan Mu", "Qiang Tu", "Tiexin Guo", "Hong-Kun Xu" ]
math.FA
[ "math.FA" ]
Common fixed point theorems] Common fixed point theorems for a commutative family of nonexpansive mappings in complete random normed modules School of Mathematics and Statistics, Central South University, Changsha 410083, China xiaohuanmu@163.com School of Mathematics and Statistics, Central South University, Changsha 410083, China qiangtu126@126.com School of Mathematics and Statistics, Central South University, Changsha 410083, China tiexinguo@csu.edu.cn *Corresponding author School of Science, Hangzhou Dianzi University, Hangzhou 310018, China College of Mathematics and Information Science, Henan Normal University, Xinxiang, 453007, China xuhk@hdu.edu.cn Primary 46B20, 46H25, 47H09, 47H10, 47H40 § ABSTRACT In this paper, we first introduce and study the notion of random Chebyshev centers. Further, based on the recently developed theory of stable sets, we introduce the notion of random complete normal structure so that we can prove the two deeper theorems: one of which states that random complete normal structure is equivalent to random normal structure for an L^0-convexly compact set in a complete random normed module; the other of which states that if G is an L^0-convexly compact subset with random normal structure of a complete random normed module, then every commutative family of nonexpansive mappings from G to G has a common fixed point. We also consider the fixed point problems for isometric mappings in complete random normed modules. Finally, as applications of the fixed point theorems established in random normed modules, when the measurable selection theorems fail to work, we can still prove that a family of strong random nonexpansive operators from (Ω,ℱ,P)× C to C has a common random fixed point, where (Ω,ℱ,P) is a probability space and C is a weakly compact convex subset with normal structure of a Banach space. [ Hong-Kun Xu August 26, 2024 =================== § INTRODUCTION The fixed point theory of nonexpansive mappings in Banach spaces is one of the most attractive parts of metric fixed point theory since it not only has a strong interaction with geometry of Banach spaces but also is closely connected with the evolution equation governed by accretive operators, see the famous survey papers <cit.>. Let us first recall some known basic fixed point theorems concerning nonexpansive mappings, which is closely related to our work in this paper. The well known Kirk fixed point theorem <cit.> states that if K is a nonempty weakly compact convex subset with normal structure of a Banach space, then every nonexpansive mapping from K to itself has a fixed point. In 1966, Belluce and Kirk <cit.> extended the Kirk fixed point theorem for any finite commutative family of nonexpansive self-mappings. Following <cit.>, in 1967, Belluce and Kirk <cit.> further introduced a strengthening notion of normal structure, called complete normal structure, and then proved that every commutative family of nonexpansive self-mappings acting on a nonempty weakly compact convex subset with complete normal structure of a Banach space has a common fixed point. In 1974, Lim <cit.> proved that complete normal structure is equivalent to normal structure for a weakly compact convex subset in locally convex spaces so that Lim <cit.> can prove that the Kirk fixed point theorem holds true for any commutative family of nonexpansive self-mappings. 
In fact, in 1948, Brodskii and Milman <cit.> already proved that any family of surjective isometric self-mappings acting on such a set K as in Kirk fixed point theorem has a common fixed point in the Chebyshev center of K. For a single isometric (not necessarily surjective) self-mapping, Lim et al. <cit.> proved an interesting result, namely, every isometric self-mapping acting on such a set K has a fixed point in the Chebyshev center of K. The central purpose of this paper is to extend these classical results in <cit.> from a Banach space to a complete random normed module. The crux of our work is to solve the problem of how to introduce a fruitful notion of random complete normal structure for an almost surely bounded closed L^0-convex subset of a complete random normed module. The success of this paper considerably benefits from the recent advance in fixed point theory in random functional analysis <cit.>. Random functional analysis is based on the idea of randomizing the traditional space theory. Random normed modules, as a central framework of random functional analysis, are a random generalization of normed spaces. Over the past 30 years, random normed modules have been deeply developed and have played an essential role in the development of the theory of conditional (dynamic) risk measures <cit.> and nonsmooth differential geometry on metric measure spaces <cit.> (see <cit.> for Gigli's independent contribution). As is stated in <cit.>, one of the recent central tasks of random functional analysis is to extend the classical fixed point theory in functional analysis to random functional analysis, which is not only of interest in its own right but also required to meet the needs of dynamic financial mathematics (for example, for the study of dynamic Nash equilibrium and dynamic optimal portfolio), and the challenge in achieving this goal lies in overcoming noncompactness since the closed L^0-convex subsets frequently occurring in the theoretic developments and their financial applications are not compact in general. The classical Kirk fixed point theorem involves the two key notions — normal structure and weakly compact convex set. In the case of a complete random normed module, it is not very difficult to introduce the notion of random normal structure for a closed L^0-convex subset as in <cit.>, but it is a delicate problem to speak of weak compactness for a closed L^0-convex subset. Since a random normed module is often endowed with the (ε,λ)-topology and is a metrizable but not locally convex topological module, it makes no sense to speak of weak compactness for a closed L^0-convex subset. Owing to Žitković's contribution <cit.>, he introduced the notion of convex compactness for a closed convex subset of a locally nonconvex space and developed some powerful tools for the study of several important problems in finance and economics. It is motivated by the work in <cit.> that Guo <cit.> introduced the notion of L^0-convex compactness for a closed L^0-convex subset in a complete random normed module, and further established the characterization for a closed L^0-convex subset to be L^0-convexly compact by means of the theory of random conjugate spaces. Therefore, the notion of an L^0-convexly compact set is a proper substitution for the notion of a weakly compact convex subset of a Banach space, which had made Guo et al. smoothly generalize the Kirk fixed point theorem from a Banach space to a complete random normed module in <cit.>. 
Recently, based on the notion of σ-stable sets introduced in <cit.>, Guo et al. <cit.> developed the theory of random sequentially compact sets and established a noncompact Schauder fixed point theorem, which directly leads to a more abstract development of stable sets and stably compact sets in <cit.>. It is the development of the abstract stable sets that motivates us to introduce the desired notion of random complete normal structure. The remainder of this paper is organized as follows: Section <ref> provides some prerequisites and further presents the main results. Section <ref> is devoted to the proof of Theorem <ref>, namely, proving the equivalence between random complete normal structure and random normal structure for an L^0-convexly compact set in a complete random normed module. Section <ref> is devoted to the proof of Theorem <ref>, which establishes a common fixed point theorem for a commutative family of nonexpansive mappings in a complete random normed module. Section <ref> is devoted to the proofs of two fixed point Theorems <ref> and <ref> for isometric mappings in a complete random normed module . Section <ref> is devoted to the applications of the main results to random fixed point theorems for random nonexpansive operators. Finally, Section <ref> concludes with some remarks pointing out the future possible works. § PREREQUISITES AND MAIN RESULTS This section is divided into the following five subsections in order to clearly state some prerequisites and the main results of this paper. §.§ B_ℱ-stable sets and consistent families with B_ℱ-stable index Throughout this paper, 𝕂 denotes the scalar field ℝ of real numbers or ℂ of complex numbers, (Ω,ℱ,P) a given probability space, ℕ the set of positive integers, ℝ_+ the set of nonnegative real numbers, L^0(ℱ,𝕂) the algebra of equivalence classes of 𝕂-valued ℱ-measurable random variables on (Ω,ℱ,P) (here, two random variables are said to be equivalent if they are equal almost surely), L^0(ℱ):=L^0(ℱ,ℝ) and L̅^0(ℱ) the set of equivalence classes of extended real-valued ℱ-measurable random variables on (Ω,ℱ,P). Besides, I_A denotes the characteristic function of A for any A ∈ℱ and Ĩ_A denotes the equivalence class of I_A. In the remainder of this paper, we always denote the set of countable partitions of Ω to ℱ by Π_ℱ, where a countable partition of Ω to ℱ is a disjoint sequence {A_n,n∈ℕ} in ℱ such that ∪_n=1^∞ A_n=Ω. It is well known from <cit.> that L̅^0(ℱ) is a complete lattice under the partial order ξ≤η iff ξ^0(ω) ≤η^0(ω) for almost all ω∈Ω (briefly, ξ^0(ω) ≤η^0(ω) a.s.), where ξ^0 and η^0 are arbitrarily chosen representatives of ξ and η in L̅^0(ℱ), respectively. Specially, the sublattice (L^0(ℱ),≤) is a Dedekind complete lattice. For a nonempty subset H of L̅^0(ℱ), we usually use ⋁ H and ⋀ H for the supremum and infimum of H, respectively. For any A∈ℱ, we always use the corresponding lowercase letter a for the equivalence class [A] of A (two elements A and D in ℱ are said to be equivalent if P(AΔ D)=0, where AΔ D=(A∖ D)∪(D∖ A) stands for the symmetric difference of A and D). For any A,B∈ℱ, define a∧ b=[A∩ B], a∨ b=[A∪ B] and a^c=[A^c], where A^c denotes the complement of A. Then, it is well known from <cit.> that B_ℱ={a=[A]:A∈ℱ} is a complete Boolean algebra with 1:=[Ω] and 0:=[∅], called the measure algebra associated with (Ω,ℱ,P). A nonempty subset {a_i,i∈ I} of B_ℱ is called a partition of unity if ∨_i∈ I a_i = 1 and a_i ∧ a_j = 0 where i≠ j. 
It should be also noticed that {i∈ I : a_i > 0} must be at most countable for any partition {a_i,i∈ I} of unity in B_ℱ. It is easy to check that, {a_n,n∈ℕ} is a partition of unity in B_ℱ iff there exists {A_n,n∈ℕ}∈Π_ℱ such that a_n=[A_n] for each n∈ℕ. For a random normed module (E,·) (see Subsection <ref> below for its definition and its (ε,λ)-topology), the L^0-norm can induce the two kinds of topologies — the (ε,λ)-topology and the locally L^0-convex topology, the latter is much stronger than the former, it is in order to establish the inherent connection between the two kinds of topologies that Guo <cit.> introduced the following notion of a σ-stable set. First, let us recall the notion of a regular L^0-module. A left module over the algebra L^0(ℱ,𝕂) (briefly, an L^0(ℱ,𝕂)-module) is said to be regular if E has the following property: for any given two elements x and y in E, if there exists some {A_n,n∈ℕ}∈Π_ℱ such that Ĩ_A_nx=Ĩ_A_ny for each n∈ℕ, then x=y. In the remainder of this paper, we always assume that all the L^0(ℱ,𝕂)-modules occurring in this paper are regular, the assumption is not too restrictive since all random normed modules are regular. Let E be an L^0(ℱ,𝕂)-module and G be a nonempty subset of E. G is said to be finitely stable if Ĩ_Ax+Ĩ_A^cy∈ G for any x,y∈ G and any A∈ℱ. G is said to be σ-stable (or to have the countable concatenation property) if for each sequence {x_n, n∈ℕ} in G and each {A_n,n∈ℕ}∈Π_ℱ, there exists some x∈ G such that Ĩ_A_nx=Ĩ_A_nx_n for each n∈ℕ (x is unique since E is assumed to be regular, usually denoted by ∑_n=1^∞Ĩ_A_nx_n, called the countable concatenation of {x_n,n∈ℕ} along {A_n,n∈ℕ}). By the way, if G is σ-stable and H is a nonempty subset of G, then σ(H):={∑_n=1^∞Ĩ_A_nh_n: {h_n,n∈ℕ} is a sequence in H and {A_n,n∈ℕ}∈Π_ℱ} is called the σ-stable hull of H. The notion of a σ-stable set depends on the structure of an L^0(ℱ,𝕂)-module, but the following notion of a B_ℱ-stable set, as an abstract generalization of the notion of a σ-stable set, was introduced in <cit.> for the development of the theory of stably compact sets, which will be needed in order to introduce the notion of random complete normal structure in Subsection <ref> of this paper. Let X be a nonempty set. An equivalence relation ∼ on X× B_ℱ (denote the equivalence class of (x,a) by x|a for each (x,a)∈ X× B_ℱ) is said to be regular if the following conditions are satisfied: (1) x|a=y|b implies a=b; (2) x|a=y|a implies x|b=y|b for any b≤ a; (3) {a∈ B_ℱ: x|a=y|a} has a greatest element for any x and y in X; (4) x|1=y|1 implies x=y. In addition, given a regular equivalence relation on X× B_ℱ, a nonempty subset G of X is said to be B_ℱ-stable with respect to ∼ if, for each sequence {x_n,n∈ℕ} in G and each partition {a_n,n∈ℕ} of unity in B_ℱ, there exists x∈ G such that x|a_n=x_n|a_n for each n∈ℕ (by (3) and (4) it is easy to see that such an x is unique, denoted by ∑_n=1^∞x_n|a_n). By the way, if G is B_ℱ-stable and H is a nonempty subset of G, then B_σ(H):={∑_n=1^∞h_n|a_n: {h_n,n∈ℕ} is a sequence in H and {a_n,n∈ℕ} is a partition of unity in B_ℱ} is called the B_ℱ-stable hull of H with respect to ∼. Let E be an L^0(ℱ,𝕂)-module. Define an equivalence relation ∼ on E× B_ℱ by (x,a)∼ (y,b) iff a=b and Ĩ_Ax=Ĩ_Ay, then ∼ is regular. It is easy to check that a nonempty subset G is σ-stable iff G is B_ℱ-stable, and in this case, for any sequence {x_n,n∈ℕ} in G and any partition {a_n,n∈ℕ} of unity in B_ℱ, ∑_n=1^∞x_n|a_n=∑_n=1^∞Ĩ_A_nx_n. 
Consequently, the notion of a σ-stable set is a special case of that of a B_ℱ-stable set. Companying Definition <ref>, Definition <ref> below was also introduced in <cit.>. Let E be an L^0(ℱ,𝕂)-module and G be a σ-stable subset of E. For any sequence of nonempty subsets {G_n, n ∈ℕ} of G and any {A_n, n ∈ℕ}∈Π_ℱ, ∑^∞_n=1Ĩ_A_n G_n: ={∑^∞_n=1Ĩ_A_n x_n: x_n∈ G_n, ∀  n∈ℕ} is called the countable concatenation of {G_n,n∈ℕ} along {A_n,n∈ℕ}. For a nonempty family ℰ of σ-stable subsets of G, σ(ℰ):={∑_n=1^∞Ĩ_A_nG_n: {G_n, n∈ℕ} is a sequence in  ℰ and {A_n, n∈ℕ}∈Π_ℱ} is called the σ-stable hull of ℰ; if σ(ℰ)=ℰ, then ℰ is said to be σ-stable. Let E, G and ℰ be the same as in Definition <ref>. Define the equivalence relation ∼ on ℰ× B_ℱ by (G_1,a)∼ (G_2,b) iff a=b and Ĩ_AG_1=Ĩ_AG_2, then it is easy to check that ∼ is regular. Further, it is also easy to verify that ℰ is σ-stable iff ℰ is B_ℱ-stable with respect to ∼, and in this case, ∑_n=1^∞G_n|a_n=∑_n=1^∞Ĩ_A_nG_n for any sequence {G_n,n∈ℕ} in ℰ and any partition {a_n,n∈ℕ} of unity in B_ℱ (please bear in mind a_n=[A_n] according to our convention). Let us conclude Subsection <ref> with the following notion of a consistent family with B_ℱ-stable index. Let E be an L^0(ℱ,𝕂)-module, G a σ-stable subset of E and Λ a B_ℱ-stable set. A nonempty family ℰ:= {G_α,α∈Λ} of σ-stable subsets of G is called a consistent family with B_ℱ-stable index if G_∑_n=1^∞α_n|a_n=∑_n=1^∞Ĩ_A_nG_α_n for any sequence {G_α_n,n∈ℕ} in ℰ and any partition {a_n,n∈ℕ} of unity in B_ℱ. In addition, if Λ is a B_ℱ-stable directed set, then the consistent family {G_α,α∈Λ} is called a consistent net. Finally, a consistent net {G_α,α∈Λ} is said to be decreasing if G_α⊂ G_β when α≥β. By Remark <ref>, a consistent family with B_ℱ-stable index must be σ-stable in the sense of Definition <ref> since ∑_n=1^∞Ĩ_A_nG_α_n=∑_n=1^∞G_α_n|a_n is just G_∑_n=1^∞α_n|a_n∈ℰ. §.§ Random complete normal structure and random normal structure The following notion of a random normed module was introduced by Guo in <cit.>, see also <cit.>. The notion of an L^0-normed L^0-module independently introduced by Gigli <cit.> in the context of nonsmooth differential geometry on metric measure spaces is equivalent to the notion of a random normed module. Throughout this paper, L^0_+(ℱ)={ξ∈ L^0(ℱ): ξ≥ 0}. An ordered pair (E,·) is called a random normed module (briefly, an RN module) over 𝕂 with base (Ω,ℱ,P) if E is an L^0(ℱ,𝕂)–module and · is a mapping from E to L^0_+(ℱ) such that the following conditions are satisfied: * ξ x = |ξ| x for any ξ∈ L^0(ℱ, 𝕂) and any x ∈ E; * x+y≤x + y for any x and y in E; * x = 0 implies x = θ (the null element in E). As usual, · is called the L^0-norm on E. If · : E → L^0_+(ℱ) only satisfies (1) and (2), then it is called an L^0-seminorm on E. As mentioned in Subsection <ref>, for an RN module (E,·), the L^0-norm · can induce the (ε,λ)-topology as follows. Let (E,·) be an RN module over 𝕂 with base (Ω,ℱ,P). For any real numbers ε and λ with ε>0 and 0< λ <1, let N_θ(ε, λ)={x∈ E: P{ω∈Ω: x(ω)<ε}>1-λ}, then 𝒰_θ:={N_θ(ε, λ): ε>0, 0<λ <1} forms a local base of some metrizable Hausdorff linear topology for E, called the (ε, λ)-topology, denoted by 𝒯_ε, λ. The (ε,λ)-topology is an abstract generalization of the topology of convergence in probability, in fact, a sequence {x_n, n ∈ℕ} in an RN module converges in the (ε,λ)-topology to x if and only if {x_n -x,n ∈ℕ} converges in probability to 0. 
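For example, in the L^0(ℱ,𝕂)-module E=L^0(ℱ,𝕂) one may take the B_ℱ-stable index set Λ=L^0_+(ℱ) and G_α={x∈ E: |x|≤α} for each α∈Λ: each G_α is σ-stable, and since |x|≤∑_n=1^∞Ĩ_A_nα_n iff Ĩ_A_n|x|≤Ĩ_A_nα_n for each n∈ℕ, one has G_∑_n=1^∞α_n|a_n=∑_n=1^∞Ĩ_A_nG_α_n, so that {G_α,α∈Λ} is a consistent family with B_ℱ-stable index.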
When (Ω,ℱ,P) is trivial, namely ℱ={Ω,∅}, an RN module (E,·) over 𝕂 with base (Ω,ℱ,P) reduces to an ordinary normed space over 𝕂, and the (ε,λ)-topology to the usual norm topology. The simplest nontrivial RN module is (L^0(ℱ,𝕂),|·|), where |·| is the absolute value mapping. To extend the results of <cit.> from Banach spaces to complete RN modules, the main challenge is to introduce the notion of random complete normal structure in RN modules. Before doing so, let us first recall the notions of normal structure <cit.> and complete normal structure <cit.> in Banach spaces. Let (B,·) be a Banach sapce, F a nonempty subset of B and K a nonempty bounded subset of B. Define r(K,·):B→ℝ_+ by r(K,x)=sup{x-y:y∈ H}, ∀ x∈ B. Then r(K,F)=inf{r(K,x): x∈ F} and c(K,F)={x∈ F: r(K,x)=r(K,F)} are called the Chebyshev radius and Chebyshev center of K with respect to F, respectively, which were introduced in <cit.>. In particular, r(K,K) and c(K,K) are called the Chebyshev radius and Chebyshev center of K, respectively. Further, a nonempty bounded closed convex set K in a Banach space (B,·) is said to have normal structure if for every nonempty closed convex subset F of K, with F containing more than one point, has a point x∈ F such that r(F,x)<δ(F), where δ(F)=sup{z-y:y,z∈ F} is the diameter of F. The following notion of complete normal structure was introduced in <cit.> and once played an important role in the study of common fixed point theorems. Let (B,·) be a Banach space. A bounded closed convex subset K of B is said to have complete normal structure if every closed convex subset W of K, with W containing more than one point, satisfies the following condition: (∗) For every decreasing net {W_α, α∈Λ} of nonempty subsets of W, if r(W_α,W)=r(W,W)  ∀ α∈Λ, then the closure of ⋃_α∈Λ c(W_α,W) is a nonempty proper subset of W. Now, we will introduce the notion of random Chebyshev centers in RN modules as follows. Let us first recall that a subset G of an RN module (E,·) with base (Ω,ℱ,P) is said to almost surely (briefly, a.s.) bounded if ⋁{g:g∈ G}∈ L^0_+(ℱ). Let (E,·) be an RN module over 𝕂 with base (Ω,ℱ,P), G a nonempty subset of E and H an a.s. bounded nonempty subset of E. Define R(H,·):E→ L^0_+(ℱ) by R(H,x)=⋁{x-y:y∈ H}, ∀ x∈ E. Then R(H,G)=⋀{R(H,x): x∈ G} and C(H,G)={x∈ G: R(H,x)=R(H,G)} are called the random Chebyshev radius and random Chebyshev center of H with respect to G, respectively. In particular, we call R(H,H) and C(H,H) the random Chebyshev radius and random Chebyshev center of H, respectively. Let E be an L^0(ℱ,𝕂)-module and G be a nonempty subsets of E. G is said to be L^0-convex if ξ x+(1-ξ)y ∈ G for any x,y∈ G and any ξ∈ L^0_+(ℱ) with 0≤ξ≤ 1. Similarly, one can have the notion of the L^0-convex hull of G, denoted by Conv_L^0(G). Let ξ and η be in L̅^0(ℱ). As usual, ξ > η means ξ≥η but ξ≠η. Besides, for any A∈ℱ with P(A)>0, ξ > η on A means ξ^0(ω)>η^0(ω) for almost all ω∈ A, where ξ^0 and η^0 are arbitrarily chosen representatives of ξ and η, respectively. We always use (ξ> η) for the set {ω∈Ω: ξ^0(ω)> η^0(ω)} for any two representatives ξ^0 and η^0 of ξ and η, respectively. Although (ξ> η) depends on the particular choice of ξ^0 and η^0, a careful reader can check that the subsequent notions and propositions involving (ξ> η) are independent of the particular choice of ξ^0 and η^0. The following notion of random normal structure introduced by Guo in <cit.> was used to establish the random Kirk fixed point theorem. 
Let (E,·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P) and G be a nonempty 𝒯_ε,λ-closed L^0-convex subset of E. G is said to have random normal structure if for each a.s. bounded 𝒯_ε,λ-closed L^0-convex subset H of G such that D(H):= ⋁{x-y: x,y∈ H}> 0, there exists a nondiametral point z∈ H, namely R(H,z)< D(H) on (D(H)> 0), where D(H) is called the random diameter of H. One can observe that it is easy to generalize the notion of normal structure to that of random normal structure, but it is completely another matter to generalize the notion of complete normal structure to the following notion of random complete normal structure. Throughout this paper, for a subset G of an RN module, G^-_ε,λ denotes the closure of G under the (ε,λ)-topology. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P). An a.s. bounded 𝒯_ε,λ-closed L^0-convex subset G of E is said to have random complete normal structure if every 𝒯_ε,λ-closed L^0-convex subset W of G, with W containing more than one point, satisfies the following condition: (∗) For every decreasing consistent net {W_α, α∈Λ} of σ-stable subsets of W, if R(W_α,W)=R(W,W)  ∀ α∈Λ, then [⋃_α∈Λ C(W_α,W) ]^-_ε,λ≠∅ and for any B⊂(D(W)>0) with P(B)>0, there exists y_B∈ W, such that Ĩ_By_B∉Ĩ_B [⋃_α∈Λ C(W_α,W) ]^-_ε,λ. When ℱ={∅,Ω}, Definition <ref>, of course, reduces to Definition <ref>. The significant difference between Definition <ref> and Definition <ref> lies in that Definition <ref> requires that each W_α be σ-stable and {W_α, α∈Λ} a decreasing consistent net. Besides, in Definition <ref>, we are forced to consider the case for each B⊂(D(W)>0) with P(B)>0. Such a difference makes the proofs of Theorems <ref> and <ref> below more involved than those of their prototypes <cit.>, and in particular we are required to frequently construct a decreasing consistent net for the proof of Theorem <ref>. Since a complete RN module is generally a locally nonconvex space, so it does not make sense to speak of weak compactness for a closed L^0-convex subset. As a proper substitution for ordinary weak compactness, the notion of L^0-convex compactness was introduced and studied by Guo et al. in <cit.>. The initial aim of <cit.> is to generalize the notion of convex compactness for a closed convex set in a linear topological space to the notion of L^0-convex compactness for a closed L^0-convex subset in a topological module over the topological algebra L^0(ℱ,𝕂), it turns out that L^0-convex compactness and convex compactness are equivalent for a closed L^0-convex subset in a complete RN module or a more general complete random locally convex module, see <cit.> for details. But playing a truly crucial role in random functional analysis are closed L^0-convex subsets rather than generic closed convex subsets, and the theory of random conjugate spaces helps establish the Jame's type theorem characterizing L^0-convex compactness for closed L^0-convex subsets in <cit.>. Therefore, we would like to retain the terminology of L^0-convex compactness for closed L^0-convex subsets. Let (E,·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P). A nonempty 𝒯_ε, λ-closed L^0-convex subset G of E is said to be L^0-convexly compact (or, to have L^0-convex compactness) if every family of 𝒯_ε, λ-closed L^0-convex subsets of G has a nonempty intersection whenever the family has the finite intersection property. The first main result of this paper is Theorem <ref> below, which can be regarded as a proper generalization of <cit.> in the case of a Banach space. 
Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P) and G be an L^0-convexly compact subset of E. Then G has random complete normal structure iff G has random normal structure. §.§ A common fixed point theorem for a commutative family of nonexpansive mappings in a complete random normed module Let (E,·) be an RN module, G and F be two nonempty subsets of E. The mapping T: G→ F is said to be nonexpansive if Tx-Ty≤x-y for any x,y∈ G. The second main result of this paper is Theorem <ref> below, which generalizes <cit.> and <cit.> from Banach spaces to 𝒯_ε,λ-complete RN modules. It also extends the random Kirk fixed point theorem <cit.> to the case of a commutative family of nonexpansive mappings. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset with random normal structure of E and 𝒯 a commutative family of nonexpansive mappings from G to G. Then 𝒯 has a common fixed point. §.§ A common fixed point theorem for a family of surjective isometric mappings in a complete random normed module As in the spirit of <cit.> and <cit.>, namely, when the nonexpansive mappings are isometric, we can locate fixed points in the Chebyshev center. Theorem <ref> below is a random generalization of <cit.>. In fact, it is also a strengthening form of <cit.> in the special case when the mapping T in <cit.> is isometric. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P) and G be an L^0-convexly compact subset with random normal structure of E. Then every isometric mapping T: G→ G (namely, Tx-Ty= x-y for any x,y∈ G) has a fixed point in C(G,G). Theorem <ref> below is a random generalization of Brodskii and Milman's result in <cit.>. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset with random normal structure of E and 𝒯 a family of surjective isometric mappings from G to G. Then 𝒯 has a common fixed point in C(G,G). §.§ Some applications to random fixed point theorems for random nonexpansive operators Random fixed point theory of random operators is aimed at giving the random versions of classical fixed point theorems, which is required for the study of various classes of random equations, see <cit.> for a historical survey of random operator theory. It already obtained an important advance in 1970s when Bharucha-Reid et al. established the random version of the Schauder fixed point theorem <cit.>. Subsequently, random fixed point theory further ushered in an explosive development <cit.> and their references therein, where measurable selection theorems of multivalued measurable functions were playing a key role in the study of random fixed points. While Bharucha-Reid also posed random fixed point problems for random nonexpansive operators in <cit.>, the problems were deeply studied for a single random operator, see, for example, <cit.>. But until the present paper, we have not see even a common random fixed point theorem for a commutative family of nonexpansive random operators since the usual method by measurable selection theorems fails to work for this case. Let us explain the failure by taking the case of Theorem <ref> below for example. For each T∈𝒯, let F_T:Ω→ 2^V defined by F_T(ω)={v∈ V:T(ω,v)=v}, which is a nonempty closed set by the Kirk fixed point theorem, so is ⋂_T∈𝒯F_T(ω) for each ω∈Ω by the Lim's common fixed point theorem <cit.>. 
When V is separable, it is easy to see that the graph Gr(F_T) of F_T, namely Gr(F_T):={(ω,x)∈Ω× V:‖T(ω,x)-x‖=0}, is ℱ⊗ℬ(V)-measurable (ℬ(V) is the Borel σ-algebra of V), but Gr(⋂_T∈𝒯 F_T):=⋂_T∈𝒯Gr(F_T) is not necessarily ℱ⊗ℬ(V)-measurable when 𝒯 is not countable. So the measurable selection theorems currently available in <cit.> cannot be used to obtain a measurable selection of ⋂_T∈𝒯F_T. Motivated by Guo et al. <cit.>, this paper provides a new method for solving the common random fixed point problem. Precisely speaking, we will lift each random nonexpansive operator T:Ω× V→ V in 𝒯 to a nonexpansive operator T̂ from the closed L^0-convex subset L^0(ℱ,V) to itself, and then Theorem <ref> will be an immediate corollary of Theorem <ref>; see the proof of Theorem <ref> for details. Similarly, Theorems <ref> and <ref> are immediate corollaries of Theorems <ref> and <ref>, respectively. The new method exhibits the power of the idea of randomizing space theory. What follows is a brief introduction to the well-known terminology of random operators. Let (Ω,ℱ,P) be a probability space and (M,d) be a metric space. A mapping X:Ω→ M is said to be a random element <cit.> (or, ℱ-random element) if X^-1(G):={ω∈Ω: X(ω)∈ G}∈ℱ for each open set G of M; furthermore, a random element X is said to be simple if it only takes finitely many values, and X is said to be a strong random element if X is the pointwise limit of a sequence of simple random elements. It is well known that X:Ω→ M is a strong random element if and only if X is a random element and its range X(Ω) is a separable subset of M. Let (Ω,ℱ,P) be a probability space, (M,d) and (M_1,d_1) two metric spaces and T:Ω× M→ M_1 a mapping. T is called a random operator if T(·,x):Ω→ M_1 is a random element for each x∈ M. Further, T is called a strong random operator if T(·,x) is a strong random element for each x∈ M. It is clear that each random operator T:Ω× M→ M_1 becomes a strong random operator when M_1 is separable. T is called a random nonexpansive operator (resp., random isometric operator) if for each ω∈Ω, d_1(T(ω,x),T(ω,y))≤ d(x,y) (resp., d_1(T(ω,x),T(ω,y))= d(x,y)) for all x,y∈ M. Two random operators T_1:Ω× M→ M and T_2:Ω× M→ M are said to be commutative if for each ω∈Ω, T_1(ω,T_2(ω,x))=T_2(ω,T_1(ω,x)) for any x∈ M. As an application of Theorem <ref>, we can immediately obtain Theorem <ref> below. Let ( B , ‖·‖) be a Banach space over 𝕂, V be a weakly compact convex subset with normal structure of B and 𝒯 a commutative family of strong random nonexpansive operators from Ω× V to V. Then there is a strong random element x^0(·):Ω→ V such that T(ω,x^0(ω) )=x^0(ω) for almost all ω∈Ω and all T∈𝒯. Similarly, as applications of Theorems <ref> and <ref>, respectively, we immediately have Theorems <ref> and <ref> below. Let ( B , ‖·‖) be a Banach space over 𝕂, V a weakly compact convex subset with normal structure of B and T a strong random isometric operator from Ω× V to V. Then there is a strong random element x^0(·):Ω→ c(V,V) such that T(ω,x^0(ω) )=x^0(ω) for almost all ω∈Ω. Let ( B , ‖·‖) be a Banach space over 𝕂, V a weakly compact convex subset with normal structure of B and 𝒯 a family of strong random surjective isometric operators from Ω× V to V. Then there is a strong random element x^0(·):Ω→ c(V,V) such that T(ω,x^0(ω) )=x^0(ω) for almost all ω∈Ω and all T∈𝒯. 
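To make the lifting idea concrete, we mention a simple illustration (the specific choices of B, V and T below are ours and are made only for the purpose of illustration). Take B=ℝ and V=[0,1], let a:Ω→ [0,1] be an arbitrary strong random element and define T:Ω× V→ V by T(ω,x)=(a(ω)+x)/2. Then |T(ω,x)-T(ω,y)|=|x-y|/2≤ |x-y| for each ω∈Ω, so T is a strong random nonexpansive operator. Its lift T̂:L^0(ℱ,V)→ L^0(ℱ,V) sends x to (a+x)/2, where a also denotes the equivalence class of a(·), and T̂ is clearly nonexpansive. The unique fixed point of T̂ is x=a, and every representative x^0(·) of a satisfies T(ω,x^0(ω))=x^0(ω) for almost all ω∈Ω, which is exactly the kind of conclusion asserted by the theorems above. The example also shows that the random fixed point genuinely depends on ω in general, so that one cannot expect a deterministic common fixed point.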
§ PROOF OF THEOREM <REF> For the proof of Theorem <ref>, let us first give Propositions <ref> and <ref> below, which are concerned with the basic properties of random Chebyshev radius and random Chebyshev center. Let E, G and H be the same as in Definition <ref>. Then we have the following statements: * R(H,·) is 𝒯_ε,λ-continuous (namely, R(H,·):(E, 𝒯_ε,λ )→ (L^0_+(ℱ), 𝒯_ε,λ) is continuous) and L^0-convex (namely, R(H,λ x+(1-λ)y)≤λ R(H,x)+(1-λ) R(H,y) for any x,y∈ E and any λ∈ L^0_+(ℱ) with 0≤λ≤ 1). * If E is σ-stable, then R(H,x)=R(σ(H),x ) for any x∈ E. * R(H,x)=R([Conv_L^0(H)]^-_ε,λ,x) for any x∈ E. * Ĩ_A R(H,x)=R(Ĩ_AH,Ĩ_Ax) for any x∈ E and any A∈ℱ. Further, if E is σ-stable, then R( ∑_n=1^∞Ĩ_A_nH_n,∑_n=1^∞Ĩ_A_n x_n)=∑_n=1^∞Ĩ_A_n R(H_n,x_n) for any sequence {x_n,n∈ℕ} in E, any sequence {H_n,n∈ℕ} of a.s. bounded nonempty subsets of E and any {A_n,n∈ℕ}∈Π_ℱ. * If E is σ-stable, then R(H,G)=R(σ(H),σ(G) ). * Ĩ_A R(H,G)=R(Ĩ_A H,Ĩ_A G) for any A∈ℱ. Further, if E is σ-stable, then R(∑_n=1^∞Ĩ_A_nH_n, ∑_n=1^∞Ĩ_A_nG_n)=∑_n=1^∞Ĩ_A_n R(H_n,G_n) for any sequence {H_n,n∈ℕ} of a.s. bounded nonempty subsets of E, any sequence {G_n,n∈ℕ} of nonempty subsets of E and any {A_n,n∈ℕ}∈Π_ℱ. * If G is finitely stable and C(H,G)≠∅, then Ĩ_A C(H,G)= C(Ĩ_AH,Ĩ_AG) for any A∈ℱ. Further, if E is σ-stable, then C(∑_n=1^∞Ĩ_A_nH_n, ∑_n=1^∞Ĩ_A_nG_n)=∑_n=1^∞Ĩ_A_n C( H_n,G_n) for any sequence {H_n,n∈ℕ} of a.s. bounded nonempty subsets of E, any sequence {G_n,n∈ℕ} of nonempty finitely stable subsets of E and any {A_n,n∈ℕ}∈Π_ℱ. (1). It is clear, so is omitted. (2). It is obvious that R(H,x)≤ R(σ(H),x). Conversely, for any sequence {x_n,n∈ℕ} in H and any {A_n,n∈ℕ}∈Π_ℱ, x-∑_n=1^∞Ĩ_A_n x_n=∑_n=1^∞Ĩ_A_nx- x_n≤∑_n=1^∞Ĩ_A_n R(H,x)=R(H,x). Thus, R(σ(H),x)≤ R(H,x). (3). First, we assert that R(H,x)=R(Conv_L^0(H),x). It is obvious that R(H,x)≤ R(Conv_L^0(H),x). On the other hand, for any h'∈ Conv_L^0(H), there exist {h_i,i=1∼ n} in H and {ξ_i,i=1∼ n} in L^0_+(ℱ) with ∑_i=1^nξ_i=1 such that h'=∑_i=1^nξ_i h_i, then x-h'=x-∑_i=1^nξ_i h_i≤∑_i=1^nξ_ix- h_i≤ R(H,x), which implies that R(Conv_L^0(H),x) ≤ R(H,x). Then R(Conv_L^0(H),x)= R(H,x). Second, we assert that R(Conv_L^0(H),x)= R([Conv_L^0(H)]^-_ε,λ,x). It is obvious that R(Conv_L^0(H),x) ≤ R([Conv_L^0(H)]^-_ε,λ,x). On the other hand, for any h∈ [Conv_L^0(H)]^-_ε,λ, there exists a sequence {h_n,n∈ℕ} in Conv_L^0(H) which converges to h. Then, we have x-h ≤x-h_n+h_n-h ≤ R(Conv_L^0(H),x)+h_n-h for each n∈ℕ, which implies that R([Conv_L^0(H)]^-_ε,λ,x)≤ R(Conv_L^0(H),x). Thus, R([Conv_L^0(H)]^-_ε,λ,x)= R(Conv_L^0(H),x). To sum up, R(H,x)=R([Conv_L^0(H)]^-_ε,λ,x). (4). For any x∈ E and any A∈ℱ, we have Ĩ_A R(H,x) =Ĩ_A⋁{x-h:h∈ H} =⋁{Ĩ_A x-Ĩ_A h:h∈ H} =⋁{Ĩ_A x- h':h'∈Ĩ_AH} =R(Ĩ_AH,Ĩ_Ax). Further, since ∑_n=1^∞Ĩ_A_n x_n ∈ E and ∑_n=1^∞Ĩ_A_nH_n is still an a.s. bounded subset of E, we have Ĩ_A_n R( ∑_n=1^∞Ĩ_A_n H_n,∑_n=1^∞Ĩ_A_nx_n ) = R(Ĩ_A_n H_n,Ĩ_A_n x_n ) =Ĩ_A_n R(H_n,x_n) for each n∈ℕ. Thus, R(∑_n=1^∞Ĩ_A_nH_n , ∑_n=1^∞Ĩ_A_n x_n)=∑_n=1^∞Ĩ_A_n R(H_n,x_n). (5). It is obvious that R(H,G)≥ R(H,σ(G)). On the other hand, for any x∈σ(G), there exist a sequence {x_n,n∈ℕ} in G and {A_n,n∈ℕ}∈Π_ℱ such that x=∑_n=1^∞Ĩ_A_n x_n, then R(H,x) =∑_n=1^∞Ĩ_A_n R(H,x_n) ≥ R(H,G). It follows that R(H,σ(G))≥ R(H,G), thus R(H,G)= R(H,σ(G)), which, combined with (2), implies that R(H,G)=R(H,σ(G))= R(σ(H),σ(G)). (6). For any A∈ℱ, we have Ĩ_A R(H,G) =Ĩ_A⋀{R(H,x): x∈ G} =⋀{ R(Ĩ_AH,Ĩ_Ax): x∈ G} =⋀{ R(Ĩ_AH,x'): x'∈Ĩ_AG} =R( Ĩ_AH,Ĩ_AG). 
Further, if E is σ-stable, similar to the proof of (4), we have R(∑_n=1^∞Ĩ_A_nH_n, ∑_n=1^∞Ĩ_A_nG_n)=∑_n=1^∞Ĩ_A_n R(H_n,G_n). (7). For any A∈ℱ and any x∈ C(H,G), we have R(Ĩ_AH,Ĩ_Ax)=Ĩ_A R(H,x)=Ĩ_A R(H,G)=R( Ĩ_AH,Ĩ_AG), which implies that Ĩ_Ax∈ C( Ĩ_AH,Ĩ_AG), and then Ĩ_A C(H,G)⊂ C(Ĩ_AH,Ĩ_AG). Conversely, for any A∈ℱ and any y∈ C(Ĩ_AH,Ĩ_AG), Ĩ_A R(H,y)= R( Ĩ_AH,Ĩ_Ay)=R( Ĩ_AH,Ĩ_AG)=Ĩ_A R(H,G). Take an arbitrary x_0 ∈ C(H,G) and let z=Ĩ_Ay+Ĩ_A^cx_0, then z∈ G and we have R(H,z)=Ĩ_AR(H,y)+Ĩ_A^cR(H,x_0)=R(H,G), which implies that z∈ C(H,G). Further, y=Ĩ_Ay=Ĩ_Az∈Ĩ_A C(H,G), then C(Ĩ_AH,Ĩ_AG) ⊂Ĩ_A C(H,G). Thus Ĩ_A C(H,G)=C(Ĩ_AH,Ĩ_AG) for any A∈ℱ. By simple calculations as in (4), we can obtain C(∑_n=1^∞Ĩ_A_nH_n, ∑_n=1^∞Ĩ_A_nG_n)=∑_n=1^∞Ĩ_A_n C( H_n,G_n). The following Lemma <ref>, established in <cit.>, provides a useful property of a σ-stable set and is frequently used in this paper. Throughout this paper, L^0_++(ℱ) ={ξ∈ L^0(ℱ): ξ> 0  on Ω}. Let G be a σ-stable subset of L^0(ℱ) such that G has an upper (resp., lower) bound ξ∈ L^0(ℱ), then for each ε∈ L^0_++(ℱ) there exists some g_ε∈ G such that g_ε> ⋁ G-ε on Ω (resp., g_ε< ⋀ G+ε on Ω). As shown by <cit.>, if G is a finitely stable and 𝒯_ε,λ-closed subset of some σ-stable subset of an RN module (E,·), then G is σ-stable. Clearly, if (E,·) is 𝒯_ε,λ-complete, then E is σ-stable. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset of E and H an a.s. bounded nonempty subset of E. Then C(H,G) is a nonempty 𝒯_ε,λ-closed L^0-convex subset of G. For each n∈ℕ, let C_n={x∈ G : R(H,x) ≤ R(H,G)+ 1/n}. First, since G is σ-stable, then {R(H,x): x∈ G} is a σ-stable subset of L^0_+(ℱ) by (4) of Proposition <ref>, and hence C_n is nonempty by Lemma <ref>. Second, since G is 𝒯_ε,λ-closed and R(H,·) is 𝒯_ε,λ-continuous, then C_n is 𝒯_ε,λ-closed. Finally, for any x_1,x_2∈ C_n and any λ∈ L^0_+(ℱ) with 0 ≤λ≤ 1, λ x_1+(1-λ)x_2 ∈ G and we have R(H,λ x_1+(1-λ)x_2) ≤λ R(H,x_1)+(1-λ)R(H,x_2) ≤ R(H,G)+ 1/n, thus C_n is L^0-convex. Therefore, {C_n,n∈ℕ} is a family of nonempty 𝒯_ε,λ-closed L^0-convex subset of G. Further, it is obvious that {C_n,n∈ℕ} has finite intersection property. By the L^0-convex compactness of G, C(H,G)=⋂_n=1^∞ C_n≠∅. It is straightforward to check that complete normal structure implies normal structure for any bounded closed convex subset of a Banach space by taking W_α=W for any α∈Λ in Definition <ref>. However, as shown in Proposition <ref> below, this is not so straightforward in the case of an RN module. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P) and G be an a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of E such that G has random complete normal structure. Then G has random normal structure. Assume that G does not have random normal structure, then there exists a nonempty 𝒯_ε,λ-closed L^0-convex subset W of G , with W containing more than one point, such that for any x∈ W, there exists A_x⊂ (D(W)>0) with P(A_x)>0 such that Ĩ_A_xR(W,x)=Ĩ_A_x D(W). Let Λ be a B_ℱ-stable directed set and W_α=W for any α∈Λ, then {W_α, α∈Λ} is a decreasing consistent net of σ-stable subsets of W and R(W_α,W) =R(W,W) for any α∈Λ. Since G has random complete normal structure, then [C(W,W) ]^-_ε,λ≠∅. For any given x_0∈ C(W,W), by (<ref>), there exists A_x_0⊂ (D(W)>0) with P(A_x_0)>0 such that Ĩ_A_x_0R(W,x_0)=Ĩ_A_x_0D(W). For A_x_0, there exists y_0∈ W such that Ĩ_A_x_0 y_0 ∉Ĩ_A_x_0 [C(W,W)]^-_ε,λ. 
It follows that Ĩ_A_x_0y_0 ∉Ĩ_A_x_0C(W,W)=C(Ĩ_A_x_0W, Ĩ_A_x_0W), which implies that R(Ĩ_A_x_0W,Ĩ_A_x_0y_0)≠ R(Ĩ_A_x_0W, Ĩ_A_x_0W). Thus, there exists C⊂ A_x_0 with P(C)>0 such that Ĩ_C R(Ĩ_A_x_0W,Ĩ_A_x_0y_0) >Ĩ_C R(Ĩ_A_x_0W,Ĩ_A_x_0W) =Ĩ_CR(W,W) =Ĩ_CR(W,x_0) =Ĩ_CD(W) on C, which contradicts with Ĩ_C R(Ĩ_A_x_0W,Ĩ_A_x_0y_0)=Ĩ_C R(W,y_0)≤Ĩ_CD(W). To proceed, we need the notion of random asymptotic centers as follows. Let ( E , ·) be an RN module over 𝕂 with base (Ω,ℱ,P), G a nonempty subset of E and ℰ={B_α, α∈Λ} a decreasing net of a.s. bounded nonempty subsets of E. Define AR(ℰ,·):E→ L^0_+(ℱ) by AR(ℰ,x)=⋀{R(B_α,x): α∈Λ}, ∀ x∈ E. Then AR(ℰ,G)=⋀{AR(ℰ,x): x∈ G} and AC(ℰ,G)={x∈ G: AR(ℰ,x)=AR(ℰ,G) } are called the random asymptotic radius and random asymptotic center of ℰ respect to G, respectively. For an RN module (E,·) over 𝕂 with base (Ω,ℱ,P), d: E× E→ L^0_+(ℱ) defined by d(x,y)=x-y for any x,y∈ E, is clearly a random metric on E and thus (E,d) is a random metric space <cit.>. Let CB(E) be the family of nonempty a.s. bounded and 𝒯_ε,λ-closed subsets of E and CB_σ(E)={G ∈ CB(E): G  is σ-stable}. Define the random Hausdorff metric H: CB_σ(E) × CB_σ(E) → L^0_+(ℱ) by H(G_1,G_2)=max{⋁_x∈ G_1d(x, G_2), ⋁_x_2∈ G_2d(x_2, G_1) } for any G_1 and G_1 in CB_σ(E), where d(x,G)=⋀{d(x,g):g∈ G} denotes the random distance from x∈ E to a nonempty subset G of E. Then, (CB_σ(E), H) is a random metric space and the L^0-topology on CB_σ(E) is denoted by 𝒯^H_c. Here, we do not give the notion of the L^0-topology, see <cit.> for the detailed definition of the L^0-topology, it suffices to say that a net {G_α,α∈Λ} in CB_σ(E) converges in 𝒯^H_c to G iff for any ε∈ L^0_++(ℱ) there exists α_ε∈Λ such that H(G_α,G)<ε on Ω whenever α≥α_ε. We would like to remind readers that an L^0-convexly compact subset G of an RN module must be a.s. bounded (see <cit.> or <cit.> for details). Lemmas <ref> and <ref> below are crucial for the proof of Theorem <ref>. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), W an L^0-convexly compact subset of E and ℰ={W_α, α∈Λ} a decreasing consistent net of σ-stable subsets of W with R(W_α,W)=R(W,W) for each α∈Λ. Then we have the following statements: * AR(ℰ,x)=R(W,W) for any x∈ [⋃_α∈Λ C(W_α,W)]^-_ε,λ. * If AR(ℰ,x)=0, then {[W_α]^-_ε,λ, α∈Λ} converges in 𝒯^H_c to {x}. (1). For any x∈⋃_α∈Λ C(W_α,W), there exists β∈Λ such that x∈ C(W_β,W), then we have AR(ℰ,x)≤ R(W_β,x)=R(W_β,W)=R(W,W). On the other hand, for any x∈⋃_α∈Λ C(W_α,W), we have R(W,W)=R(W_α,W)≤ R(W_α,x)  ∀ α∈Λ, which implies that R(W,W) ≤ AR(ℰ,x). Therefore, AR(ℰ,x)=R(W,W) for any x ∈⋃_α∈Λ C(W_α,W). By the 𝒯_ε,λ-continuity of AR(ℰ,·) (see (1) of Lemma <ref> below), we complete the proof. (2). Since Λ is a B_ℱ-stable directed set, it is easy to check that {R(W_α,x): α∈Λ} is a σ-stable subset of L^0_+(ℱ) by (4) of Proposition <ref>. By Lemma <ref>, for any ε∈ L^0_++(ℱ) there exists α_0∈Λ such that R(W_α,x)<AR(ℰ,x)+ε=ε on Ω for any α∈Λ with α≥α_0. Further, for any α∈Λ, H({x},[W_α]^-_ε,λ) =max{⋁_x∈{x}d(x, [W_α]^-_ε,λ), ⋁_y∈ [W_α]^-_ε,λd(y, {x}) } = R([W_α]^-_ε,λ,x) = R(W_α,x). Then H({x},[W_α]^-_ε,λ)<ε on Ω for any α∈Λ with α≥α_0, which implies that {[W_α]^-_ε,λ, α∈Λ} converges in 𝒯^H_c to {x}. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P) and G be a nonempty 𝒯_ε,λ-closed L^0-convex subset of E. If there exists a sequence {x_n,n∈ℕ} in G such that for some c∈ L^0_++(ℱ), x_n-x_m≤ c, x_n+1-x_n≥ c-1/n^2 for all n≥ 1, m≥ 1, where x_n=1/n∑_i=1^nx_i, then G does not have random normal structure. 
Let H=[ Conv_L^0({x_n,n∈ℕ}) ]^-_ε,λ. For any fixed x̂∈ Conv_L^0({x_n,n∈ℕ}), let x̂=a_1x_1+a_2x_2+⋯+ a_lx_l with a_i∈ L^0_+(ℱ) for any i=1,⋯,l and ∑_i=1^la_i=1, a:=max{a_1, a_2,⋯, a_l} and a_l+1=⋯=a_n=0 for any n>l, we have x_n+1-x̂ = nax_n+1-a ∑_i=1^nx_i -nax_n+1+a ∑_i=1^nx_i + ∑_i=1^na_i x_n+1 - ∑_i=1^na_i x_i = na(x_n+1-1/n∑_i=1^nx_i) - ∑_i=1^n (a-a_i)(x_n+1-x_i) ≥ na x_n+1-x_n - ∑_i=1^n (a-a_i) x_n+1-x_i ≥ na (c-1/n^2)- (na-1)c ≥ c-1/n, which implies that R({x_n,n∈ℕ},x̂ ) ≥ R({x_n+1,n>l},x̂ ) =⋁{x̂-x_n+1:n>l} ≥⋁{c-1/n,n>l} =c. On the other hand, it is easy to check that R( {x_n,n∈ℕ},x̂ )≤ c. Thus, R( {x_n,n∈ℕ},x̂ )=c. Further, (3) of Proposition <ref> shows that R( H,x̂ )=c. By the 𝒯_ε,λ-continuity of R(H,·), we have R(H,x)=c for any x∈ H, and hence D(H)=c. Then, the a.s bounded 𝒯_ε,λ-closed L^0-convex subset H of G does not have a nondiametral point, namely, G does not have random normal structure. For ξ∈ L^0( ℱ ,𝕂 ) with a representative ξ^0, (ξ^0 )^-1:Ω→𝕂 is defined by (ξ^0 )^-1(ω)=1/ξ^0(ω) if ξ^0(ω)≠ 0 and 0 otherwise. Then the equivalence class of (ξ^0)^-1 is called the generalized inverse of ξ, denoted by ξ^-1. It is clear that ξ·ξ^-1=Ĩ_(ξ≠ 0). Now, we are ready to prove Theorem <ref>. 𝐍𝐞𝐜𝐞𝐬𝐬𝐢𝐭𝐲. It is obvious by Proposition <ref>. 𝐒𝐮𝐟𝐟𝐢𝐜𝐢𝐞𝐧𝐜𝐲. Suppose that G does not have random complete normal structure, then there exists a nonempty 𝒯_ε,λ-closed L^0-convex subset W of G, with W containing more than one point, together with a decreasing consistent net ℰ:={W_α, α∈Λ} of σ-stable subsets of W satisfying R(W_α,W)=R(W,W) for each α∈Λ, but there exists some B⊂ (D(W)>0) with P(B)>0 such that Ĩ_By ∈Ĩ_B [⋃_α∈Λ C(W_α,W)] ^-_ε,λ for any y∈ W (note that [⋃_α∈Λ C( W_α,W)]^- _ε,λ≠∅ by Proposition <ref>). Consequently, Ĩ_B [⋃_α∈Λ C(W_α,W)]^-_ε,λ= Ĩ_B W. Without loss of generality, we can assume that B=Ω (otherwise, we consider the RN module (E_B,·_E_B) over 𝕂 with base (B,ℱ_B, P_B), where E_B=Ĩ_BE, ·_E_B be the restriction of · to E_B, ℱ_B={A∩ B:A∈ℱ} and P_B(A∩ B)=P(A∩ B)/P(B) for any A∩ B∈ℱ_B). Then D(W)∈ L^0_++(ℱ) and [⋃_α∈Λ C(W_α,W)]^-_ε,λ=W. First, we prove that R(W,W)>0 on Ω. Otherwise, there exists C∈ℱ with P(C)>0 such that R(W,W)=0 on C. Since D(W)∈ L^0_++(ℱ) and {x-y:x,y∈ W} is a σ-stable subset of L^0_+(ℱ), by Lemma <ref>, for ε=D(W)/2∈ L^0_++(ℱ), there exist x_0, y_0∈ W such that x_0-y_0> D(W)-ε=D(W)/2 on Ω. Further, since [⋃_α∈Λ C(W_α,W)]^-_ε,λ=W, by (1) of Lemma <ref>, we have AR(ℰ,x_0)=AR(ℰ,y_0)=R(W,W)=0 on C. It follows that {Ĩ_C [W_α]^-_ε,λ, α∈Λ} converges in 𝒯^H_c to two distinct sets {Ĩ_Cx_0} and {Ĩ_Cy_0} by (2) of Lemma <ref>, which contradicts with the fact that 𝒯^H_c is a Hausdorff topology. Second, we prove that G does not have random normal structure, which leads to a contradiction with the assumption of sufficiency. By Lemma <ref>, it suffices to show that there exists a sequence {x_n,n∈ℕ} in W such that for some c∈ L^0_++(ℱ), x_n-x_m≤ c, x_n+1-x_n≥ c-1/n^2 for all n≥ 1, m≥ 1, where x_n=1/n∑_i=1^nx_i. We construct {x_n, n ∈ℕ} by using the method of induction. Let x_1 be an arbitrary element of ⋃_α∈Λ C( W_α,W), there exists α_1∈Λ such that x_1∈ C( W_α_1,W), then R(W_α_1,x_1)= R(W_α_1,W)=R(W,W). By Lemma <ref>, there exists x_2∈ W_α_1 such that x_1-x_2≥ R(W_α_1,x_1)-1= R(W,W)-1 and x_1-x_2≤ R(W_α_1,x_1) = R(W,W). Suppose now x_1,x_2,⋯,x_n have been chosen in W such that x_i-x_j≤ R(W,W) and x_i+1-x_i≥ R(W,W)-1/i^2 for all 1≤ i≤ n-1 and 1≤ j≤ n, where x_i=1/i∑_k=1^ix_k. We proceed to choose x_n+1 as follows. 
Since ℰ={W_α, α∈Λ} is a decreasing consistent net and R(W,W)=AR(ℰ,x_i)=AR(ℰ,x_n) for any i∈{1,2,⋯,n} by (1) of Lemma <ref>. By Lemma <ref>, there exists α_n∈Λ such that R(W_α_n,x_i) ≤ AR(ℰ,x_i)+1/n^2 (n+1) =R(W,W)+1/n^2 (n+1) for any i∈{1,2,⋯,n} and R(W_α_n,x_n) ≤ AR(ℰ,x_n)+1/n^2 (n+1) =R(W,W)+1/n^2 (n+1). Since {x_n-y:y∈ W_α_n} is a σ-stable subset of L^0_+(ℱ), by Lemma <ref>, there exists z_0∈ W_α_n such that x_n-z_0 ≥ R(W_α_n,x_n)- 1/n^2 (n+1) ≥ AR(ℰ,x_n)- 1/n^2 (n+1) = R(W,W)- 1/n^2 (n+1). Let z_i=t_i x_i+ (1-t_i)z_i-1, where t_i=max{x_i-z_i-1^-1 (x_i-z_i-1-R(W,W)), 0}, i=1,2,⋯,n. Let x_n+1:=z_n, we assert that x_n+1-x_i≤ R(W,W) for any i=1,2,⋯,n and x_n+1-x_n≥ R(W,W)-1/n^2. Indeed, first, for any i∈{1,⋯,n}, z_i-x_i =t_i x_i+ (1-t_i)z_i-1-x_i =(1-t_i)z_i-1-x_i =(1-max{x_i-z_i-1^-1 (x_i-z_i-1-R(W,W)), 0})z_i-1-x_i =min{R(W,W),z_i-1-x_i} ≤ R(W,W), then we have x_n+1-x_i =z_n-x_i =t_nx_n+(1-t_n)z_n-1-x_i ≤ t_nx_n-x_i+(1-t_n)z_n-1-x_i ⋮ ≤ t_nx_n-x_i+(1-t_n)t_n-1x_n-1-x_i+⋯ +(1-t_n)(1-t_n-1)⋯(1-t_i+2)t_i+1x_i+1-x_i +(1-t_n)(1-t_n-1)⋯(1-t_i+2)(1-t_i+1)z_i-x_i ≤ R(W,W). Second, (<ref>) implies that z_0-x_i≤ R(W,W)+1/n^2 (n+1), then we have z_i-1-x_i =t_i-1x_i-1+(1-t_i-1)z_i-2-x_i ≤ t_i-1x_i-1-x_i+(1-t_i-1)z_i-2-x_i ⋮ ≤ t_i-1x_i-1-x_i+(1-t_i-1)t_i-2x_i-2-x_i+⋯ +(1-t_i-1)(1-t_i-2)⋯(1-t_2)t_1x_1-x_i +(1-t_i-1)(1-t_i-2)⋯(1-t_1)z_0-x_i ≤ R(W,W)+1/n^2 (n+1). It follows that x_n+1-z_0 =z_n-z_0 ≤∑ _i=1^nz_i-z_i-1 = ∑ _i=1^n t_i x_i-z_i-1 =∑ _i=1^nmax{x_i-z_i-1-R(W,W) , 0} ≤n/n^2 (n+1), which, combined with (<ref>), implies that x_n+1-x_n ≥x_n -z_0 - z_0- x_n+1≥ R(W,W)- 1/n^2 . Let (E,·) be an RN module over 𝕂 with base (Ω,ℱ,P) and ξ=⋁{x:x∈ E} (generally, ξ∈L̅^0_+(ℱ):={ξ∈L̅^0(ℱ): ξ≥ 0}). (E,·) is said to have full support if P((ξ>0))=1. In the remainder of this paper, all RN modules mentioned are assumed to have full support. Further, we employ the following notations for a brief introduction to random uniformly convex RN modules: ε_ℱ[0,2]={ε∈ L^0_++(ℱ)| there exists a positive numberλsuch thatλ≤ε≤ 2}. δ_ℱ[0,1]={δ∈ L^0_++(ℱ)| there exists a positive numberηsuch thatη≤δ≤ 1}. A_x=(x>0), A_xy=A_x∩ A_y and B_xy=A_x∩ A_y∩ A_x-y for any x and y in E. An RN module ( E , ·) is said to be random uniformly convex <cit.> if for each ε∈ε_ℱ[0,2], there exists δ∈δ_ℱ[0,1] such that x-y≥ε on D always implies x+y≤ 2(1-δ) on D for any x,y ∈ U(1) and any D∈ℱ such that D⊂ B_xy with P(D)>0, where U(1)={z∈ E: z≤ 1}, called the random closed unit ball of E. Let ( E , ·) be a 𝒯_ε,λ-complete random uniformly convex RN module over 𝕂 with base (Ω,ℱ,P) and G be a nonempty a.s. bounded 𝒯_ε,λ-closed and L^0-convex subset of E. Then G has random complete normal structure. By <cit.>, E is random reflexive, it follows from <cit.> that G is L^0-convexly compact. Furthermore, <cit.> shows that G has random normal structure. Therefore, G has random complete normal structure by Theorem <ref>. Let B be a Banach space over 𝕂, V a closed convex subset of B and L^0(ℱ,B) (resp., L^0(ℱ,V)) the set of equivalence classes of strong random elements from (Ω,ℱ,P) to B (resp., V). It is known from <cit.> that L^0(ℱ,B) (resp., L^0(ℱ,V)) is a 𝒯_ε,λ-complete RN module (resp., 𝒯_ε,λ-closed L^0-convex subset of L^0(ℱ,B)). Let ( B , ·) be a Banach space over 𝕂 and V a nonempty weakly compact convex subset of B such that V has complete normal structure. Then, L^0(ℱ,V) is an L^0-convexly compact subset of L^0(ℱ,B) with random complete normal structure. 
<cit.> shows that V has normal structure, which, combined with <cit.>, implies that L^0(ℱ,V) is an L^0-convexly compact subset with random normal structure. By Theorem <ref>, L^0(ℱ,V) has random complete normal structure. § PROOF OF THEOREM <REF> The aim of this section is to prove Theorem <ref>. To this end, we first present Lemma <ref> below, which is the special case of Theorem <ref> when the family is finite. Lemmas <ref> and <ref> below are respectively a random generalization of <cit.> and <cit.>, their proofs are omitted since the ideas of the proofs are respectively the same as <cit.> and <cit.> except for some changes in the random setting as made in the proof of <cit.>. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of E such that G has random normal structure, M an L^0-convexly compact subset of G and T: G→ G a nonexpansive mapping with the property that [{T^n(x): n=1,2,⋯} ]^-_ε,λ∩ M≠∅ for any x∈ G. Then T has a fixed point. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset with random normal structure of E and 𝒯 a finite commutative family of nonexpansive mappings from G to G. Then 𝒯 has a common fixed point. It is obvious that a singleton is a B_ℱ-stable set in a natural way, whereas a nonempty set containing more than one point may be not necessarily a B_ℱ-stable set, Example <ref> below shows that we can always stabilize such a nonempty set by generating its B_ℱ-stable hull. Let E be a nonempty set containing more than one point, denoting by (e_n,a_n)_n a sequence {(e_n,a_n), n∈ℕ} in E× B_ℱ, and S={(e_n,a_n)_n:{a_n,n∈ℕ} is a partition of unity in B_ℱ, {e_n,n∈ℕ} is a sequence in E}. Let us first define an equivalence relation ∼_1 on S by (e_n,a_n)_n∼_1(f_m,b_m)_m iff ⋁{a_n:e_n=z}=⋁{b_m:f_m=z} for any z∈ E. Denote by [(e_n,a_n)_n] the equivalence class of (e_n,a_n)_n under ∼_1 and assume X={[(e_n,a_n)_n]:(e_n,a_n)_n∈ S}, now let us define another equivalence relation ∼_2 on X× B_ℱ by ([(e_n,a_n)_n],a)∼_2([(f_m,b_m)_m],b) iff a=b and (⋁{a_n:e_n=z})∧ a=(⋁{b_m:f_m=z})∧ a for any z∈ E, then it is easy to check that ∼_2 is regular and X is B_ℱ-stable with respect to ∼_2. For any x,y∈ E, it is clear that [x,1]=[y,1] iff x=y, and hence [x,1] can be identified with x for any x∈ E. If each (e_n,a_n)_n in S can be interpreted as the step function taking the value e_n∈ E on a_n, then X can be interpreted as the set of equivalence classes of step functions under ∼_1. Besides, it is obvious that [(e_n,a_n)_n]=∑_n=1^∞[e_n,1]|a_n, so we can use ∑_n=1^∞e_n|a_n for [(e_n,a_n)_n]. We call X the B_ℱ-stable hull of E, denoted by B_σ(E) in accordance with Definition <ref>. Although a net {G_α,α∈ D} of σ-stable subsets of an L^0(ℱ,𝕂)-module is not necessarily a consistent net, by the method provided by <cit.>, we can generate a consistent net from {G_α,α∈ D}. First, let B_σ(D) be the B_ℱ-stable hull of a directed set D, then the directed relation ≤ on D naturally induces a directed relation on B_σ(D) (also denoted by ≤) by ∑_n=1^∞α_n|a_n≤∑_n=1^∞β_m|b_m iff α_n ≤β_m whenever a_n ∧ b_m>0, it is known from <cit.> that (B_σ(D),≤) is a B_ℱ-stable directed set. Now, for any α̂=∑_n=1^∞α_n|a_n∈ B_σ(D), where {α_n,n∈ℕ} is a sequence in D and {a_n,n∈ℕ} a partition of unity in B_ℱ, define G_α̂=∑_n=1^∞Ĩ_A_nG_α_n, then it is easy to check that {G_α̂, α̂∈ B_σ(D)} is a consistent net, which is called the consistent net generated by {G_α,α∈ D}. 
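As a toy illustration of this construction (the particular choices below are ours and are made only for concreteness), take D=ℕ with its usual order, fix A∈ℱ with 0<P(A)<1 and write a for the equivalence class of A in B_ℱ. The elements α̂=1|a+3|a^c and β̂=2|a+2|a^c of B_σ(ℕ) are not comparable under the induced order, since 1≤ 2 holds on a but 3≤ 2 fails on a^c; nevertheless, γ̂=3|a+3|a^c (namely, the element 3 of ℕ regarded as an element of B_σ(ℕ)) dominates both of them, which reflects the fact that (B_σ(ℕ),≤) is again a directed set. Moreover, for a decreasing net {G_n, n∈ℕ} of σ-stable subsets, the generated consistent net assigns to α̂=1|a+3|a^c the set G_α̂=Ĩ_AG_1+Ĩ_A^cG_3, namely, G_1 is used on A and G_3 is used on A^c.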
Lemmas <ref> and <ref> below are given to simplify the proof of Theorem <ref>. Let ( E, ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), K an L^0-convexly compact subset of E and {M_α, α∈ D} a decreasing net of σ-stable subsets of K with R(M_α,K)=R(K,K) for any α∈ D. Then the consistent net { M_α̂, α̂∈ B_σ(D) } generated by {M_α, α∈ D} is still decreasing and satisfies the following conditions: * R(M_α̂,K)=R(K,K) ∀ α̂∈ B_σ(D). * ⋃_α̂∈ B_σ(D)C(M_α̂,K) is L^0-convex. For any α̂=∑_n=1^∞α_n|a_n and β̂=∑_m=1^∞β_m|b_m in B_σ(D) with α̂≤β̂, where {a_n,n∈ℕ} and {b_n,n∈ℕ} are two partitions of unity in B_ℱ and {α_n, n∈ℕ} and {β_n, n∈ℕ} are two sequences in D, we have M_β̂ =∑_m=1^∞Ĩ_B_mM_β_m =∑_m,n=1^∞Ĩ_A_n∩ B_mM_β_m ⊂∑_m,n=1^∞Ĩ_A_n∩ B_mM_α_n =M_α̂, which implies that { M_α̂, α̂∈ B_σ(D) } is decreasing. (1). For any α̂∈ B_σ(D), there exist a partition {a_n,n∈ℕ} of unity in B_ℱ and a sequence {α_n, n∈ℕ} in D such that α̂=∑_n=1^∞α_n|a_n, by (6) of Proposition <ref>, we have R(M_α̂,K) =R(∑_n=1^∞Ĩ_A_n M_α_n,K) =∑_n=1^∞Ĩ_A_nR(M_α_n,K) =R(K,K). (2). For any x,y ∈⋃_α̂∈ B_σ(D)C(M_α̂,K), there exist α̂_x, α̂_y ∈ B_σ(D) such that x∈ C( M_α̂_x, K)  and  y∈ C( M_α̂_y,K). Let α̂_x=∑_n=1^∞α_n|a_n and α̂_y=∑_m=1^∞β_m|b_m, where {a_n,n∈ℕ} and {b_n,n∈ℕ} are two partitions of unity in B_ℱ and {α_n, n∈ℕ} and {β_n, n∈ℕ} are two sequences in D. By (7) of Proposition <ref>, we have C( M_α̂_x, K)=C( ∑_n=1^∞Ĩ_A_n M_α_n, K)=∑_n=1^∞Ĩ_A_n C( M_α_n, K). and C( M_α̂_y,K)=C(∑_m=1^∞Ĩ_B_m M_β_m, K)=∑_m=1^∞Ĩ_B_m C( M_β_m,K). Since D is a directed set, for each n and m such that a_n∧ b_m>0, there exists γ_n,m∈ D such that γ_n,m≥α_n and γ_n,m≥β_m. Further, for any u∈ C( M_α_n,K) and any v∈ C( M_β_m, K), since R(M_α,K)=R(K,K) for any α∈ D, we have R(M_γ_n,m,u)≤ R(M_α_n,u)=R(M_α_n,K)=R(K,K)=R(M_γ_n,m,K) and R(M_γ_n,m,v)≤ R(M_β_n,v)=R(M_β_n,K)=R(K,K)=R(M_γ_n,m,K), which implies that u∈ C( M_γ_n,m,K) and v∈ C( M_γ_n,m,K), thus, C( M_α_n,K) ⊂ C( M_γ_n,m, K)  and  C( M_β_m,K) ⊂ C( M_γ_n,m, K). Let α̂_xy=∑_n,m=1^∞γ_n,m|a_n ∧ b_m, then α̂_xy∈ B_σ(D) and we have C( M_α̂_x, K) =∑_n,m=1^∞Ĩ_A_n ∩ B_m C( M_α_n,K) ⊂∑_n,m=1^∞Ĩ_A_n ∩ B_m C( M_γ_n,m, K) =C( ∑_n,m=1^∞Ĩ_A_n ∩ B_m M_γ_n,m,K) =C( M_α̂_xy,K) and C( M_α̂_y,K) =∑_n,m=1^∞Ĩ_A_n ∩ B_m C( M_β_n, K) ⊂∑_n,m=1^∞Ĩ_A_n ∩ B_m C( M_γ_n,m, K) =C( ∑_n,m=1^∞Ĩ_A_n ∩ B_m M_γ_n,m,K) =C( M_α̂_xy,K). Since C( M_α̂_xy,K) is L^0-convex by Proposition <ref>, then for any λ∈ L^0_+(ℱ) with 0≤λ≤ 1, we have λ x+(1-λ)y ∈λ C(M_α̂_x, K)+(1-λ) C( M_α̂_y,K) ⊂λ C( M_α̂_xy,K)+(1-λ) C(M_α̂_xy,K) = C( M_α̂_xy,K) ⊂⋃_α̂∈ B_σ(D) C(M_α̂,K), namely, ⋃_α̂∈ B_σ(D) C(M_α̂,K) is L^0-convex. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset of E and {M_α,α∈ D} a net of nonempty σ-stable subsets of G with finite intersection property. Then ⋂_α̂∈ B_σ(D) [Conv_L^0 (M_α̂)]^-_ε,λ≠∅, where {M_α̂, α̂∈ B_σ(D)} is the consistent net generated by {M_α,α∈ D}. Let B_α=[Conv_L^0 (M_α)]^-_ε,λ for each α∈ D and {B_α̂,α̂∈ B_σ(D) } the consistent net generated by { B_α, α∈ D}. It is easy to check that Ĩ_A H^-_ε,λ= (Ĩ_A H) ^-_ε,λ and Ĩ_A Conv_L^0(H)= Conv_L^0(Ĩ_A H) for any A∈ℱ and any H⊂ E, then for any α̂=∑_n=1^∞α_n|a_n in B_σ(D) with {a_n,n∈ℕ} a partition of unity in B_ℱ and {α_n,n∈ℕ} a sequence in D, we have B_α̂ =∑_n=1^∞Ĩ_A_nB_α_n =∑_n=1^∞Ĩ_A_n[Conv_L^0 (M_α_n)]^-_ε,λ =∑_n=1^∞Ĩ_A_n[Ĩ_A_nConv_L^0 (∑_n=1^∞Ĩ_A_nM_α_n)]^-_ε,λ =∑_n=1^∞Ĩ_A_n[Conv_L^0 (∑_n=1^∞Ĩ_A_nM_α_n)]^-_ε,λ =∑_n=1^∞Ĩ_A_n[Conv_L^0 (M_α̂)]^-_ε,λ =[Conv_L^0 (M_α̂)]^-_ε,λ. Then, we only need to prove ⋂_α̂∈ B_σ(D)B_α̂≠∅. 
Since { B_α, α∈ D} is a net of nonempty 𝒯_ε,λ-closed L^0-convex subsets of G with finite intersection property, we have ⋂_α∈ DB_α≠∅ by the L^0-convex compactness of G. It is easy to check that ⋂_α∈ D B_α=⋂_α̂∈ B_σ(D)B_α̂, so the latter is nonempty. For an RN module (E,·) over 𝕂 with base (Ω,ℱ,P), (E,·) can also be endowed with the following locally L^0-convex topology, which will be used in the proof of Theorem <ref>. For any ε∈ L^0_++(ℱ), let V(θ, ε)={x∈ E: x< ε on Ω}, then 𝒱:={x+V(θ, ε): x∈ E  and ε∈ L^0_++(ℱ) } forms a topological base of some Hausdorff topology for E, called the locally L^0-convex topology <cit.>, denoted by 𝒯_c. It is known from <cit.> that (E,𝒯_c) is a topological module over the topological ring (L^0(ℱ,𝕂),𝒯_c). As shown by <cit.>, if G is a σ-stable subset of an RN module, then G^-_ε,λ=G^-_c, where G^-_ε,λ and G^-_c denote the closure of G under 𝒯_ε, λ and 𝒯_c, respectively. Let (E,·) be an RN module with base (Ω,ℱ,P), G and F be two nonempty σ-stable subsets of E. The mapping T: G→ F is said to be σ-stable if T(∑_n=1^∞Ĩ_A_nx_n)=∑_n=1^∞Ĩ_A_nT(x_n) for each sequence {x_n, n∈ℕ} in G and each {A_n, n∈ℕ}∈Π_ℱ. It is known from <cit.> that if E is 𝒯_ε, λ-complete and T is nonexpansive, then T must be σ-stable. Now we are ready to prove Theorem <ref>. Let 𝒜={H: His a nonempty𝒯_ε,λ-closedL^0-convexsubset of G such thatT(H)⊂ Hfor anyT∈𝒯}, then 𝒜 is a partially ordered set under the usual inclusion relation. By the L^0-convex compactness of G and the Zorn's Lemma, there exists a minimal element K in 𝒜. We assert that K consists of a single point, which is the desired common fixed point of 𝒯. Otherwise, K contains more than one point. Let 𝒬 be the family of nonempty finite subsets of 𝒯. For any Q∈𝒬, let M_Q={x∈ K: T(x)=x  ∀  T∈ Q}, then M_Q is nonempty by Lemma <ref> and is σ-stable since each T is σ-stable. Obviously, M_Q_2⊂ M_Q_1 if Q_1⊂ Q_2. Therefore, {M_Q,Q∈𝒬} is a decreasing net of nonempty σ-stable subsets of K, and then the consistent net {M_Q̂, Q̂∈σ(𝒬 )} generated by {M_Q,Q∈𝒬}, is a decreasing consistent net of nonempty σ-stable subsets of K. Since G has random normal structure, by Theorem <ref>, G has random complete normal structure. The remainder of the proof is divided into two steps. Step 1. We prove that R(M_Q̂,K)=R(K,K) for any Q̂∈ B_σ(𝒬). By Lemma <ref>, it suffices to prove R( M_Q,K)=R( K,K) for any Q∈𝒬. For any given Q_0 ∈𝒬, let r_0:=R( M_Q_0,K) and H_Q={x∈ K: R( M_Q,x)≤ r_0} for any  Q∈𝒬 with  Q_0⊂ Q. Then H_Q_0=C(M_Q_0,K)≠∅ since K is L^0-convexly compact, and the collection {H_Q:Q∈𝒬 with  Q_0⊂ Q} forms an increasing net of nonempty L^0-convex sets. Let S_Q_0=⋃{H_Q:Q∈𝒬 and  Q_0⊂ Q}, we claim that [σ(S_Q_0)]^-_ε,λ∈𝒜. First, since each H_Q is L^0-convex and H_Q increases with Q, it is clear that S_Q_0 is L^0-convex, and so is σ(S_Q_0), which implies that [σ(S_Q_0)]^-_ε,λ is a nonempty 𝒯_ε,λ-closed L^0-convex subset of G. Second, for any T∈𝒯 and any x∈ S_Q_0, there exists Q∈𝒬 with Q_0⊂ Q such that x∈ H_Q and T∈ Q, therefore, R(M_Q,T(x)) =⋁{T(x)-z:z∈ M_Q} =⋁{T(x)-T(z):z∈ M_Q} ≤⋁{x-z:z∈ M_Q} =R(M_Q,x) ≤ r_0, which implies that T(x)∈ H_Q⊂ S_Q_0. Thus T( S_Q_0)⊂ S_Q_0 for each T∈𝒯. Further, for each T∈𝒯, since T is σ-stable, then T(σ( S_Q_0))⊂σ( S_Q_0), and hence T([σ( S_Q_0)]^-_ε,λ)⊂ [σ( S_Q_0)]^-_ε,λ by the 𝒯_ε,λ-continuity of T, which means [σ(S_Q_0)]^-_ε,λ=K by the minimality of K. Since [σ( S_Q_0)]^-_c=[σ( S_Q_0)]^-_ε,λ=K, for any ε∈ L^0_++(ℱ) and any x∈ K, there exists h∈σ( S_Q_0) such that x-h≤ε. 
For such a point h, there exist {A_n,n∈ℕ}∈Π_ℱ and {h_n,n∈ N} in S_Q_0 such that h=∑_n=1^∞Ĩ_A_nh_n. For each h_n, there exists Q_n ∈𝒬 with Q_0⊂ Q_n such that h_n∈ H_Q_n, namely, R(M_Q_n,h_n)≤ r_0. Let Q̂_x=∑_n=1^∞Q_n|a_n, then Q̂_x∈ B_σ(𝒬) and Q_0⊂Q̂_x, we have R(M_Q̂_x,x) =⋁{x-y:y∈ M_Q̂_x} ≤⋁{x-h+h-y: y∈ M_Q̂_x} ≤⋁{h-y:y∈ M_Q̂_x} +ε =R(M_Q̂_x,h)+ε =R( ∑_n=1^∞Ĩ_A_nM_Q_n,∑_n=1^∞Ĩ_A_n h_n)+ε =∑_n=1^∞Ĩ_A_n R( M_Q_n,h_n )+ε ≤ r_0 +ε, which implies that M_Q̂_x⊂ B(x,r_0+ε), where B(x,r_0+ε):={y∈ K: x-y≤ r_0+ε} is 𝒯_ε,λ-closed and L^0-convex. Thus, we have ⋂_Q̂∈ B_σ(𝒬), Q_0 ⊂Q̂ [Conv_L^0(M_Q̂)]^-_ε,λ ⊂ ⋂_Q̂_x∈ B_σ(𝒬), Q_0⊂Q̂_x [Conv_L^0(M_Q̂_x)]^-_ε,λ ⊂ ⋂_x∈ K B(x,r_0+ε). Further, Lemma <ref> shows that ⋂_Q̂⊂ B_σ(Q), Q_0⊂Q̂ [Conv_L^0(M_Q̂)]^-_ε,λ≠∅. Then, ⋂_x∈ K B(x,r_0+ε)≠∅. For any y∈⋂_x∈ K B(x,r_0+ε), we have R(K,y)≤ r_0+ε, which implies that R(K,K)≤ R(K,y)≤ r_0+ε. By the arbitrariness of ε, we have R(K,K) ≤ r_0=R( M_Q_0,K). On the other hand, since M_Q_0⊂ K, we have R( M_Q_0,x)≤ R(K,x) for any x∈ K, which implies that R( M_Q_0,K)≤ R(K,K). To sum up, R( M_Q,K)= R(K,K) for each Q∈𝒬. Step 2. We prove that [⋃_Q̂∈ B_σ(𝒬) C( M_Q̂,K)]^-_ε,λ=K, which leads to a contradiction with the fact that G has random complete normal structure. By the minimality of K, it suffices to prove [⋃_Q̂∈ B_σ(𝒬) C( M_Q̂,K)]^-_ε,λ∈𝒜. First, Lemma <ref> implies that ⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K) is nonempty and L^0-convex, so [⋃_Q̂∈ B_σ(𝒬) C( M_Q̂,K)]^-_ε,λ is a nonempty 𝒯_ε,λ-closed L^0-convex subset of K. Second, we prove that T([⋃_Q̂∈ B_σ(𝒬) C( M_Q̂,K)]^-_ε,λ)⊂ [⋃_Q̂∈ B_σ(𝒬) C( M_Q̂,K)]^-_ε,λ for any T∈𝒯. Since each T is continuous, we only need to show that T(⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K))⊂⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K) for any T∈𝒯. For any T∈𝒯 and any x∈⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K), there exists Q̂_x∈ B_σ(𝒬) such that x ∈ C( M_Q̂_x,K). For Q̂_x, there exist {A_n, n∈ℕ}∈Π_ℱ and {Q_n,n∈ℕ} in 𝒬 such that Q̂_x=∑_n=1^∞Q_n|a_n. Let Q'_n=Q_n ∪{T} for each n∈ℕ and Q̂'_x=∑_n=1^∞Q'_n|a_n, then Q̂'_x∈ B_σ(𝒬) and we have C(M_Q̂_x,K) =∑_n=1^∞Ĩ_A_n C(M_Q_n, K) ⊂∑_n=1^∞Ĩ_A_n C( M_Q'_n, K) =C(∑_n=1^∞Ĩ_A_n M_Q'_n, K) =C(M_Q̂'_x, K), which implies that x∈ C( M_Q̂'_x, K). Further, since R( M_Q'_n,T(x)) =⋁{T(x)-z:z∈ M_Q'_n} =⋁{T(x)-T(z):z∈ M_Q'_n} ≤⋁{x-z:z∈ M_Q'_n} = R( M_Q'_n,x) for each n∈ℕ, then we have R( M_Q̂'_x,T(x)) = R( ∑_n=1^∞Ĩ_A_n M_ Q'_n,T(x)) =∑_n=1^∞Ĩ_A_n R( M_ Q'_n,T(x)) ≤∑_n=1^∞Ĩ_A_n R( M_ Q'_n,x) = R( ∑_n=1^∞Ĩ_A_nM_Q'_n, x) =R( M_Q̂'_x,K), that is, T(x) ∈ C( M_Q̂'_x, K) ⊂⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K). Thus, T(⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K)) ⊂⋃_Q̂∈ B_σ(𝒬) C( M_Q̂, K) for each T∈𝒯. Corollary <ref> below is a random generalization of <cit.>. Let ( E , ·) be a 𝒯_ε,λ-complete random uniformly convex RN module over 𝕂 with base (Ω,ℱ,P), G a nonempty a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of E and 𝒯 a commutative family of nonexpansive mappings from G to G. Then 𝒯 has a common fixed point. Corollary <ref> below is a random generalization of <cit.>. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset with random normal structure of E and 𝒯 a commutative family of nonexpansive mappings from G to G with [T(G)]^-_ε,λ=G for each T∈𝒯. Then 𝒯 has a common fixed point in C(G,G). Obviously, G is a.s. bounded. For any x∈ C(G,G) and any T∈𝒯, we have R(G,Tx)=R([T(G)]^-_ε,λ,Tx)=R(T(G),Tx)≤ R(G,x)=R(G,G), which implies that Tx∈ C(G,G), and then T(C(G,G))⊂ C(G,G) for each T∈𝒯. Further, Proposition <ref> shows that C(G,G) is an L^0-convexly compact subset with random normal structure. 
Thus, by Theorem <ref>, there exists x∈ C(G,G) such that Tx=x for each T∈𝒯. § PROOFS OF THEOREMS <REF> AND <REF> This section is devoted to the proofs of Theorems <ref> and <ref>, which are the random generalizations of the classical results in <cit.> and <cit.>, respectively. An RN module ( E , ·) is said to be random strictly convex <cit.> if for any x and y in E\{θ} such that x+y=x+y, there exists ξ∈ L^0_+(ℱ) such that ξ >0 on A_xy and Ĩ_A_xyx=ξ( Ĩ_A_xy y ). The study of affine geometry in regular L^0-module began with <cit.>. We give Propositions <ref> and <ref> below since they are of independent interest. Proposition <ref> is a basic result concerning L^0-affine mappings, which is a random generalization of <cit.>. Let ( E , ·) be a 𝒯_ε,λ-complete random strictly convex RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convex subset of E and T: G→ E an isometric mapping, then T is L^0-affine (namely, T(λ x+(1-λ)y)=λ T(x)+(1-λ) T(y) for any x,y∈ G and any λ∈ L^0_+(ℱ) with 0≤λ≤ 1). For any x,y∈ G and any α∈ L^0_+(ℱ) with 0≤α≤ 1, let z=α x+(1-α)y, it suffices to prove that Tz=α Tx+(1-α)Ty. Since x-z=(1-α)x-y and z-y=αx-y, it follows that Tx-Tz+Tz-Ty =x-z+z-y =x-y. Then x-y>0 on A_(Tx-Tz)(Tz-Ty). By the definition of random strict convexity, there exists ξ∈ L^0_+(ℱ) such that ξ >0 on A_(Tx-Tz)(Tz-Ty) and Ĩ_A_(Tx-Tz)(Tz-Ty)(Tx-Tz)=ξĨ_A_(Tx-Tz)(Tz-Ty) (Tz-Ty), namely, Ĩ_A_(Tx-Tz)(Tz-Ty)Tz =Ĩ_A_(Tx-Tz)(Tz-Ty)(1/1+ξTx+ξ/1+ξTy). Hence, we have Ĩ_A_(Tx-Tz)(Tz-Ty)Tx-Tz-ξĨ_A_(Tx-Tz)(Tz-Ty)Tz-Ty = Ĩ_A_(Tx-Tz)(Tz-Ty)x-z- Ĩ_A_(Tx-Tz)(Tz-Ty)ξz-y = Ĩ_A_(Tx-Tz)(Tz-Ty)(1-α)x-y- Ĩ_A_(Tx-Tz)(Tz-Ty)ξαx-y = Ĩ_A_(Tx-Tz)(Tz-Ty) (1-(ξ+1) α) x-y = 0, which implies that Ĩ_A_(Tx-Tz)(Tz-Ty)1/1+ξ=Ĩ_A_(Tx-Tz)(Tz-Ty)α. Therefore, Ĩ_A_(Tx-Tz)(Tz-Ty)Tz=Ĩ_A_(Tx-Tz)(Tz-Ty)(α Tx+(1-α)Ty). On the other hand, Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty)Tx-Tz =Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty)x-z =Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty) (1-α) x-y =0 implies that Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty) (1-α)=0. Then, we have Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty) Tz =Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty) Tx =Ĩ_A^c_(Tx-Tz)∩ A_(Tz-Ty) (α Tx+(1-α)Ty). Similarly, we can obtain Ĩ_A_(Tx-Tz)∩ A^c_(Tz-Ty) Tz=Ĩ_A_(Tx-Tz)∩ A^c_(Tz-Ty) (α Tx+(1-α)Ty) and Ĩ_A^c_(Tx-Tz)∩ A^c_(Tz-Ty) Tz=Ĩ_A^c_(Tx-Tz)∩ A^c_(Tz-Ty) (α Tx+(1-α)Ty). Since {A_(Tx-Tz)(Tz-Ty),A^c_(Tx-Tz)∩ A_(Tz-Ty),A_(Tx-Tz)∩ A^c_(Tz-Ty), A^c_(Tx-Tz)∩ A^c_(Tz-Ty)}∈Π_ℱ, we have Tz=α Tx+(1-α)Ty. Let ( E , ·) be a 𝒯_ε,λ-complete random uniformly convex RN module over 𝕂 with base (Ω,ℱ,P), G a nonempty a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of E and T: G→ G an isometric mapping, then R(T(G),T(G))=R(G,G) and C(T(G),T(G))=T( C(G,G) ). It is known from <cit.> that a random uniformly convex RN module must be random strictly convex, then T is L^0-affine by Proposition <ref>, and hence T(G) is L^0-convex. It is clear that T(G) is an a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of G, as in the proof of Corollary <ref>, T(G) is L^0-convexly compact, and further C(T(G),T(G))≠∅ by Proposition <ref>. Then, we have R(T(G),T(G)) =⋀{R(T(G),Tx):x∈ G} =⋀{R(G,x):x∈ G} =R(G,G). For any x_0∈ C(G,G), since R(T(G),Tx_0)=R(G,x_0)=R(G,G)=R(T(G),T(G)), then Tx_0∈ C(T(G),T(G)), which implies that T( C(G,G) )⊂ C(T(G),T(G)). Conversely, for any y' ∈ C(T(G),T(G)), there exists x'∈ G such that y'=Tx' and R(G,x')=R(T(G),y')= R(T(G),T(G))=R(G,G), then x'∈ C(G,G), which implies that C(T(G),T(G)) ⊂ T( C(G,G) ). Thus, C(T(G),T(G))=T(C(G,G) ). We first give Lemmas <ref> and <ref> below for the proofs of Lemma <ref> and Theorem <ref>. 
Let E, G and ℰ be the same as in Definition <ref>, we have the following statements: * AR(ℰ,·) is 𝒯_ε,λ-continuous and L^0-convex. * Ĩ_A AR(ℰ,x)=AR(Ĩ_Aℰ,Ĩ_Ax) for any x∈ E and any A∈ℱ. Further, if E is σ-stable, then AR(ℰ,∑_n=1^∞Ĩ_A_n x_n)=∑_n=1^∞Ĩ_A_n AR(ℰ,x_n) for any sequence {x_n,n∈ℕ} in E and any {A_n,n∈ℕ}∈Π_ℱ. * Ĩ_A AR(ℰ,G)=AR(Ĩ_Aℰ,Ĩ_A G) for any A∈ℱ. Further, if E is σ-stable, then AR(ℰ, ∑_n=1^∞Ĩ_A_nG_n)=∑_n=1^∞Ĩ_A_n AR(ℰ,G_n) for any sequence {G_n,n∈ℕ} of nonempty subsets of E and any {A_n,n∈ℕ}∈Π_ℱ. * If G is finitely stable and AC(H,G)≠∅, then Ĩ_A AC(ℰ,G)= AC(Ĩ_Aℰ,Ĩ_AG) for any A∈ℱ. Further, if E is σ-stable, then AC(ℰ, ∑_n=1^∞Ĩ_A_nG_n)=∑_n=1^∞Ĩ_A_n AC(ℰ,G_n) for any sequence {G_n,n∈ℕ} of finitely stable subsets of E and any {A_n,n∈ℕ}∈Π_ℱ. By the same methods used in the proof of Proposition <ref>, it is easy to check that (2), (3) and (4) are true. We focus on proving (1). By simple calculations, we have |AR(ℰ,x)-AR(ℰ,y)|≤x-y for any x,y∈ W, thus AR(ℰ,·) is 𝒯_ε,λ-continuous. For any α_1, α_2 ∈Λ and any λ∈ L^0_+(ℱ) with 0≤λ≤1, there exists α_3 ∈Λ with α_3≥α_1 and α_3≥α_2, such that W_α_3⊂λ W_α_1+(1-λ)W_α_2, we have R(W_α_3,λ x+(1-λ)y) ≤ R(λ W_α_1+(1-λ)W_α_2,λ x+(1-λ)y) =⋁{λ x+(1-λ)y-λ a-(1-λ)b: a∈ W_α_1, b∈ W_α_2} ≤λ R(W_α_1,x)+(1-λ) R(W_α_2,y). From AR( ℰ,λ x+(1-λ)y) ≤ R(W_α_3,λ x+(1-λ)y) ≤λ R(W_α_1,x)+ (1-λ) R(W_α_2,y), we have AR( ℰ,λ x+(1-λ)y)≤λ AR(ℰ,x) + (1-λ)AR(ℰ,y) by the arbitrariness of α_1 and α_2. Similarly to Proposition <ref>, one can have the following. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset of E and ℰ={B_α, α∈Λ} a decreasing net of a.s. nonempty bounded subsets of E. Then AC(ℰ,G) is a nonempty 𝒯_ε,λ-closed L^0-convex subset of G. Lemma <ref> below is crucial for the proof of Theorem <ref>. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset of E and T: G→ G a nonexpansive mapping. Then AC(ℰ,G) is T-invariant (namely, T(AC(ℰ,G))⊂ AC(ℰ,G)), where ℰ={T^m(G), m∈ℕ}. It is clear that ℰ={T^m(G), m∈ℕ} is a decreasing sequence of subsets of G, then AC(ℰ,G) is a nonempty 𝒯_ε,λ-closed L^0-convex subset of G by Lemma <ref>. For any x∈ AC(ℰ,G), we have AR(ℰ,Tx) =⋀_mR(T^m(G),Tx) ≤⋀_mR(T^m-1(G),x) =AR(ℰ,x) =AR(ℰ,G), which implies that Tx∈ AC(ℰ,G). Thus, AC(ℰ,G) is T-invariant. Let ℰ={T^m(G), m∈ℕ}, then AC(ℰ,G) is an L^0-convexly compact subset with random normal structure by Lemma <ref>. Lemma <ref> shows that T maps AC(ℰ,G) into AC(ℰ,G), further, by Theorem <ref>, there exists z∈ AC(ℰ,G) such that Tz=z. It suffices to show that z∈ C(G,G). By the isometry of T, we have R(T^m(G),z)=R(T^m(G),Tz)=R(T^m-1(G),z), ∀ m∈ℕ. It follows that AR(ℰ,G)=AR( ℰ,z )=⋀_m R(T^m(G),z)=R(G,z). For any x∈ G, we have R(G,x)≥⋀_m R(T^m(G),x)=AR( ℰ,x )≥ AR(ℰ,G)=R(G,z), which implies that R(G,G)≥ R(G,z). Thus z∈ C(G,G). As in the proof of Corollary <ref>, G in Corollary <ref> is L^0-convexly compact with random normal structure. Thus, Corollary <ref> follows directly from Theorem <ref>. Let ( E , ·) be a 𝒯_ε,λ-complete random uniformly convex RN module over 𝕂 with base (Ω,ℱ,P) and G be a nonempty a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of E. Then every isometric mapping T: G→ G has a fixed point in C(G,G). To prove Theorem <ref>, we first present Lemma <ref> below. Let ( E , ·) be a 𝒯_ε,λ-complete RN module over 𝕂 with base (Ω,ℱ,P), G an L^0-convexly compact subset of E and T: G→ G a surjective isometric mapping. Then T(C(G,G))=C(G,G). By Proposition <ref>, C(G,G)≠∅. 
For any x∈ C(G,G), since T is a surjective isometric mapping, one has R(G,Tx)=R(T(G),Tx)=R(G,x)=R(G,G), and then Tx∈ C(G,G). So, T(C(G,G))⊂ C(G,G). On the other hand, applying the above argument to T^-1, we can obtain C(G,G) ⊂ T(C(G,G)).   By Proposition <ref>, C(G,G) is a nonempty 𝒯_ε,λ-closed L^0-convex subset of G. Let 𝒜={ H : His a nonempty 𝒯_ε,λ-closed L^0-convex subset ofC(G,G)such thatT(H)⊂ H, ∀  T∈𝒯}. Since T(C(G,G))=C(G,G) for every T∈𝒯 by Lemma <ref>, then 𝒜 is nonempty. The L^0-convex compactness of G and the Zorn' Lemma together show that there exists a minimal element H_0 in 𝒜. We assert that H_0 consists of a single element, which is the common fixed point of 𝒯. Otherwise, let A=(D(H_0)>0), then P(A)>0. Since G has random normal structure, there exists z_0∈ H_0 such that l:=R(H_0,z_0)<D(H_0) on A. Let H_1={x∈ H_0: R(H_0,x)≤ l}, it is clear that H_1 is nonempty, 𝒯_ε,λ-closed and L^0-convex. We assert that H_1∈𝒜. It suffices to show that T(H_1)⊂ H_1 for every T∈𝒯. Observe that T∈𝒯 implies that T^-1∈𝒯 and T^-1(H_0)⊂ H_0. For any T∈𝒯 and any x∈ H_1, we have R(H_0,Tx) =⋁{Tx-h:h∈ H_0} =⋁{Tx-T(T^-1h):h∈ H_0} =⋁{x-T^-1h:h∈ H_0} ≤ R(H_0,x) ≤ l. Then, Tx∈ H_1 and hence T(H_1)⊂ H_1. By the minimality of H_0, we have H_0=H_1, and then D(H_0)≤ l, which contradicts with the fact that l<D(H_0) on A. Let ( E , ·) be a 𝒯_ε,λ-complete random uniformly convex RN module over 𝕂 with base (Ω,ℱ,P), G a nonempty a.s. bounded 𝒯_ε,λ-closed L^0-convex subset of E and 𝒯={T  |  T: G→ Gis a surjective isometric mapping}, then 𝒯 has a common fixed point in C(G,G). § PROOFS OF THEOREMS <REF>, <REF> AND <REF> By Theorems 2.11 and 2.16 of <cit.>, L^0(ℱ,V) is an L^0-convexly compact subset with random normal structure of the 𝒯_ε,λ-complete RN module L^0(ℱ,B). For each T∈𝒯, define T̂: L^0(ℱ,V)→ L^0(ℱ,V) by T̂(x)= the equivalence class of  T(·, x^0(·)),  ∀  x∈ L^0(ℱ,V), where x^0 is an arbitrarily chosen representative of x. It is easy to check that T̂(x) is well defined and 𝒯̂:={T̂: T∈𝒯} is a commutative family of nonexpansive mappings. Applying Theorem <ref> to 𝒯̂ and L^0(ℱ,V), there exists x∈ L^0(ℱ,V) such that T̂(x)=x for each T̂∈𝒯̂. Then an arbitrarily chosen representative x^0 of x must satisfy T(ω,x^0(ω) )=x^0(ω) for almost all ω∈Ω and all T∈𝒯. To prove Theorems <ref> and <ref>, we first give the following Lemma <ref>. Here, we would like to remind the readers that if V is a nonempty weakly compact convex subset of a Banach space, then c(V,V) is a nonempty closed convex subset of V (see <cit.> for details). Conventionally, by identifying each v∈ V with Ĩ_Ωv, we can regard V as a subset of L^0(ℱ,V). Let (B,·) be a Banach space over 𝕂 and V be a nonempty weakly compact convex subset of B. Then C(L^0(ℱ,V),L^0(ℱ,V))=L^0(ℱ,c(V,V)). For any x∈ L^0(ℱ,V), it is clear that r(V,x^0(·)):Ω→ℝ_+ is a representative of R(L^0(ℱ,V),x), where x^0 is an arbitrarily chosen representative of x. Let S(ℱ,V) be the set of equivalence classes of simple random elements of Ω to V, since S(ℱ,V) is 𝒯_ε,λ-dense in L^0(ℱ,V), R(L^0(ℱ,V),·):E→ L^0_+(ℱ) is 𝒯_ε,λ-continuous and ·:E→ L^0_+(ℱ) is also 𝒯_ε,λ-continuous, we have R(L^0(ℱ,V),x) =R(L^0(ℱ,V),L^0(ℱ,V)) =⋀_y∈ L^0(ℱ,V)R(L^0(ℱ,V),y) =⋀_∑_i=1^nĨ_A_iv_i∈ S(ℱ,V)R(L^0(ℱ,V),∑_i=1^nĨ_A_iv_i) =⋀_∑_i=1^nĨ_A_iv_i∈ S(ℱ,V)∑_i=1^nĨ_A_iR(L^0(ℱ,V),v_i) =⋀_v∈ VR(L^0(ℱ,V),v) =⋀_v∈ V⋁_u∈ Vu-v =R(V,V), which, by noting that r(V,V) is a representative of R(V,V), implies that r(V,x^0(·))=r(V,V)  a.s.  iff  R(L^0(ℱ,V),x)=R(L^0(ℱ,V),L^0(ℱ,V)). Hence, the proof is complete. 
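An elementary special case may help visualize the above lemma (the concrete choices below are ours and serve only as an illustration). Take B=ℝ and V=[0,1]; then r(V,x)=max{x,1-x} for each x∈ V, so r(V,V)=1/2 and c(V,V)={1/2}. On the other hand, for x∈ L^0(ℱ,V) one checks directly that R(L^0(ℱ,V),x)=x∨(1-x) (the supremum is attained at y=Ĩ_(x≤ 1/2)·1+Ĩ_(x>1/2)·0), whence R(L^0(ℱ,V),L^0(ℱ,V))=1/2 and C(L^0(ℱ,V),L^0(ℱ,V))={1/2}=L^0(ℱ,c(V,V)), in accordance with the lemma.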
Define T̂: L^0(ℱ,V)→ L^0(ℱ,V) by T̂(x)= the equivalence class of  T(·, x^0(·)),  ∀  x∈ L^0(ℱ,V), where x^0 is an arbitrarily chosen representative of x. It is easy to check that T̂ is isometric and L^0(ℱ,V) is L^0-convexly compact with random normal structure. Applying Theorem <ref> to T̂ and L^0(ℱ,V), there exists x∈ C(L^0(ℱ,V), L^0(ℱ,V)) such that T̂(x)=x. By Lemma <ref>, x∈ L^0(ℱ,c(V,V)). Let x^0 be an arbitrarily chosen representative of x, then x^0(ω)∈ c(V,V) and T(ω,x^0(ω) )=x^0(ω) for almost all ω∈Ω. Similar to the proof of Theorem <ref>, by Theorem <ref>, 𝒯̂={T̂:T∈𝒯} has a common fixed point x in C(L^0(ℱ,V),L^0(ℱ,V)). Then, for an arbitrarily chosen representative x^0 of x, x^0(ω)∈ c(V,V) and T(ω,x^0(ω) )=x^0(ω) for almost all ω∈Ω and all T∈𝒯. § CONCLUDING REMARKS As in <cit.>, we may suppose that the base space of an RN module (E,·) is an arbitrary σ-finite measure space (Ω,ℱ,μ). A probability measure P_μ associated with (Ω,ℱ,μ) is defined as follows. When (Ω,ℱ,μ) is a finite measure space, P_μ(A)=μ(A)/μ(Ω) for each A∈ℱ; when (Ω,ℱ,μ) is a σ-finite measure space, for example, let {A_n,n∈ℕ} be a countable partition of Ω to ℱ such that 0<μ(A_n)<+∞ for each n in ℕ, then P_μ(A)=∑_n=1^∞μ(A∩ A_n)/2^nμ(A_n) for each A∈ℱ. It is clear that P_μ and μ are equivalent. When we regarded the RN module (E,·) as the one with base (Ω,ℱ,P_μ), in this case a sequence {x_n, n ∈ℕ} in (E,·) converges in the (ε,λ)-topology to x iff {x_n -x,n ∈ℕ} converges in probability measure P_μ to 0, that is, iff {x_n -x,n ∈ℕ} converges locally in measure μ to 0. From the equivalence of P_μ to μ, one can easily see that all the results of this paper are still true in the case when the base space is σ-finite. Further, define ·:E→ [0,+∞) by x =∑_n=1^∞1/2^nμ(A_n)∫_A_nx∧ 1 dμ (or x =∑_n=1^∞1/2^nμ(A_n)∫_A_nx/x+1 dμ) for any x∈ E, then it is known that (E,·) is a Fréchet space when (E,·) is 𝒯_ε,λ-complete, and the (ε,λ)-topology is exactly the linear topology induced by the quasinorm ·. In nonsmooth differential geometry on metric measure spaces, the metric measure spaces are often assumed to be σ-finite, the L^0-normed L^0-modules (namely, RN modules) are just endowed with the linear topology induced by the quasinorm · <cit.>. We believe that the fixed point theorems developed in <cit.> and this paper will be useful in the future study of nonsmooth differential geometry on metric measure spaces. Geometry of RN modules began with <cit.>, where random uniform and strict convexities were introduced and their relations with classical convexities of the L^p-spaces generated by the RN modules were established. Recently, these works have been used by Pasqualetto et al. <cit.> in the study of Banach bundles. Random normal structure and random complete normal structure were studied in <cit.> and this paper. But, up to now, the notions of random uniform smoothness and random uniform normal structure for complete RN modules have not even been defined, we hope to see a good theory of them in the near future since metric fixed point theory in random functional analysis will unavoidably touch on their researches. Lau et al. <cit.> have made much remarkable contribution to the common fixed point theorems for semigroups of nonexpansive mappings in Banach spaces. 
In the future, an important research topic is naturally the study of the common fixed point theorems for semigroups of nonexpansive mappings in complete RN modules, but this will be a difficult topic since RN modules are not locally convex in general and this topic will unavoidably involve the stable random weak or weak* topology under the theory of random conjugate spaces, which together with stable compactness is currently the most difficult subject being developed, see <cit.> for details. § ACKNOWLEDGMENT The first three authors were supported by the National Natural Science Foundation of China (Grant No.12371141) and the Natural Science Foundation of Hunan Province of China (Grant No.2023JJ30642). The fourth author was supported by the National Natural Science Foundation of China (Grant No. U1811461). 1 BK1966 Belluce, L.P., Kirk, W.A.: Fixed point theorems for families of contraction mappings. Pacific J. Math. 18, 213–217 (1966) BK1967 Belluce, L.P., Kirk, W.A.: Nonexpansive mappings and fixed points in Banach spaces. Illinois J. Math. 11, 474–479 (1967) Bha1976 Bharucha-Reid, A.T.: Fixed point theorems in probabilistic analysis. Bull. Amer. Math. Soc. 82(5), 641–657 (1976) BM1948 Brodskii, M.S., Milman, D.P.: On the center of a convex set (in Russian). Dokl. Akad. Nauk SSSR 59, 837–840 (1948) Browder1965 Browder, F.E.: Nonexpansive nonlinear operators in a Banach space. Proc. Natl. Acad. Sci. U.S.A. 54(4), 1041–1044 (1965) CLP2024 Caputo, E., Lučić, M., Pasqualetto, E., Vojnović, I.: On the integration of L^0-Banach L^0-modules and its applications to vector calculus on spaces. Rev. Mat. Complut. (2024) DS58 Dunford, N., Schwartz, J.T.: Linear Operators. Interscience, New York (1958) ET2020 El-Ghabi, A., Taoudi, M.A.: Random fixed point theorems under weak topology features and application to random integral equations with lack of compactness. J. Fixed Point Theory Appl. 22, 85 (2020) En78 Engl, H.W.: A general stochastic fixed point theorem for continuous random operators on stochastic domains. J. Math. Anal. Appl. 66(1), 220–231 (1978) En1978 Engl, H.W.: Random fixed point theorems for multivalued mappings. Pacific J. Math. 76(2), 351–360 (1978) FKV2009 Filipovic, D., Kupper, M., Vogelpoth, N.: Separation and duality in locally L^0-convex modules. J. Funct. Anal. 256, 3996–4029 (2009) Gigli2018 Gigli, N.: Nonsmooth differential geometry-an approach tailored for spaces with Ricci curvature bounded from below. Mem. Amer. Math. Soc. 251, 1196 (2018) GR1984 Goebel, K., Reich, S.: Uniform Convexity, Hyperbolic Geometry, and Nonexpansive Mappings. Monogr. Textbooks Pure Appl. Math., 83. Marcel Dekker, New York (1984) Guo92 Guo, T.X.: Random metric theory and its applications. Ph.D thesis, Xi'an Jiaotong University, China (1992) Guo93 Guo, T.X.: A new approach to probabilistic functional analysis. Proceedings of the first China Postdoctoral Academic Conference, The China National Defense and Industry Press, Beijing, pp. 1150–1154 (1993) Guo99 Guo, T.X.: Some basic theories of random normed linear spaces and random inner product spaces. Acta Anal. Funct. Appl. 1, 160–184 (1999) Guo5 Guo, T.X.: Optimization of conditional convex risk measures. An invited lecture presented in the 8th International Congress of Chinese Mathematicians, Tsinghua University, Beijing, June 9–14, (2019) Guo10 Guo, T.X.: Relations between some basic results derived from two kinds of topologies for a random locally convex module. J. Funct. Anal. 
258(9), 3024–3047 (2010) GMT24 Guo, T.X., Mu, X.H., Tu Q.: The relations among the notions of various kinds of stability and their applications. Banach J. Math. Anal. 18, 42 (2024) GZZ14 Guo, T.X., Zhao, S.E., Zeng, X.L.: The relations among the three kinds of conditional risk mearures. Sci. China Math. 57(8), 1753–1764 (2014) GZ10 Guo, T.X., Zeng, X.L.: Random strict convexity and random uniform convexity in random normed modules. Nonlinear Anal. 73(5), 1239–1263 (2010) GZ12 Guo, T.X., Zeng, X.L.: An L^0(ℱ,ℝ)-valued function's intermediate value theorem and its applications to random uniform convexity. Acta Math. Sinica 28(5) 909–924 (2012) GZWY20 Guo, T.X., Zhang, E.X., Wang, Y.C. Yuan, G.: L^0-convex compactness and random normal structure in L^0 (ℱ, B). Acta Math. Sci. 40(2), 457–469 (2020) GZWW20 Guo, T.X., Zhang, E.X., Wang, Y.C., Wu, M.Z.: L^0-convex compactness and its applications to random convex optimization and random variational inequalities. Optimization 70(5-6), 937–971 (2021) GZWG20 Guo, T.X., Zhang, E.X., Wang, Y.C., Guo, Z.C.: Two fixed point theorems in complete random normed modules and their applications to backward stochastic equations. J. Math. Anal. Appl. 483(2), 123644 (2020) GWT23 Guo, T.X., Wang, Y.C., Tang Y.: The Krein-Milman theorem in random locally convex modules and its applications (in Chinese). Sci. Sin. Math. 53(12), 1–18 (2023) GWXYC24 Guo, T.X., Wang, Y.C., Xu, H.K., Yuan, G., Chen, G.: The noncompact Schauder fixed point theorem in random normed modules and its applications. arXiv: 2104.11095v10, (2024) GWYZ20 Guo, T.X., Wang, Y.C., Yang, B.X., Zhang, E.X.: On d-σ-stability in random metric spaces and its applications. J. Nonlinear Convex Anal. 21(6), 1297–1316 (2020) Han57 Hanš, O.: Random fixed point theorems, Transaction of the first prague Conference on information Theory, Statistical Decision Functions, Random Process, pp. 105–125 (1957) Iton1979 Itoh, S.: Random fixed point theorems with an application to random differential equations in Banach spaces. J. Math. Anal. Appl. 67(2), 261–273 (1979) Kirk1965 Kirk, W.A.: A fixed point theorem for mappings which do not increase distances. Amer. Math. Monthly. 72, 1004–1006 (1965) Kirk1981 Kirk, W.A.: Fixed point theory for nonexpansive mappings. Fixed point theory (Sherbrooke, Que., 1980), pp. 484–505. Lecture Notes in Math., 886. Springer, Berlin (1981) KS2001 Kirk, W.A., Sims B.: Handbook of metric fixed point theory. Kluwer Academic Publishers, Dordrecht (2001) LZ2008 Lau, A.T.M., Zhang, Y.: Fixed point properties of semigroups of nonexpansive mappings. J. Funct. Anal. 254(10), 2534–2554 (2008) LZ2012 Lau, A.T.M., Zhang, Y.: Fixed point properties for semigroups of nonlinear mappings and amenability. J. Funct. Anal. 263(10), 2949–2977 (2012) lim1974 Lim, T.C.: Characterizations of normal structure. Proc. Amer. Math. Soc. 43(2), 313–319 (1974) Lim1974 Lim, T.C.: A fixed point theorem for families on nonexpansive mappings. Pacific J. Math. 53(2), 487–493 (1974) Lim2003 Lim, T.C., Lin, P.K., Petalas, C., Vidalis, T.: Fixed points of isometries on weakly compact convex sets. J. Math. Anal. Appl. 282, 1–7 (2003) Lin1988 Lin, T.C.: Random approximations and random fixed point theorems for non-self-maps. Proc. Amer. Math. Soc. 103, 1129–1135 (1988) LP19 Lučić, D., Pasqualetto, E.: The Serre-Swan theorem for normed modules. Rend. Circ. Mat. Palermo 68, 385–404 (2019) LPV24 Lučić, M., Pasqualetto, E., Vojnović, I.: On the reflexivity properties of Banach bundles and Banach modules. Banach J. Math. Anal. 
18, 7 (2024) MKB89 Monk, J.D., Koppelberg, S., Bonnet, R.: Handbook of Boolean Algebras. North-Holland, (1989) Pa74 Padgett, W.J., Tsokos, C.P.: Random Integral Equations with Applications to Life Sciences and Engineering, Mathematics in Science and Engineering, vol. 108. Academic Press [A subsidiary of Harcourt Brace Jovanovich, Publishers], New York (1974) Papa1990 Papageorgiou, N.S.: On the measurable selection approach in random differential inclusions, fixed point theory and optimization. J. Math. Phys. Sci. 24(5), 331–345 (1990) Raj2015 Rajesh, S., Veeramani, P.: Chebyshev centers and fixed point theorems. J. Math. Anal. Appl. 422, 880–885 (2015) Reich1978 Reich, S.: A random fixed point theorem for set-valued mappings. Atti Accad. Naz. Lincei Rend. Cl. Sci. Fis. Mat. Nat. 64, 65–66 (1978) TY1995 Tan, K.K., Yuan, X.Z.: Random fixed point theorems and approximation in cones. J. Math. Anal. Appl. 195(2), 619 (1995) Wag1977 Wagner, D.H.: Survey of measurable selection theorems. SIAM J. Control Optim. 15(5), 859–903 (1977) WGL22 Wu, M.Z., Guo, T.X., Long, L.: The fundamental theorem of affine geometry in regular L^0-modules. J. Math. Anal. Appl. 507, 125827 (2022) WZZ22 Wu, M.Z., Zeng, X.L., Zhao, S.E.: On L^0-convex compactness in random locally convex modules. J. Math. Anal. Appl. 515, 126404 (2022) Xu1990 Xu, H.K.: Some random fixed point theorems for condensing and nonexpansive operators. Proc. Amer. Math. Soc. 110, 395–400 (1990) Xu1993 Xu, H.K.: A random fixed point theorem for multivalued nonexpansive operators in uniformly convex Banach spaces. Proc. Amer. Math. Soc. 117, 1089–1092 (1993) Zit2010 Žitković, G.: Convex compactness and its applications. Math. Financial Econ. 3, 1–12 (2010)
http://arxiv.org/abs/2408.11696v1
20240821151449
M2CS: A Microwave Measurement and Control System for Large-scale Superconducting Quantum Processors
[ "Jiawei Zhang", "Xuandong Sun", "Zechen Guo", "Yuefeng Yuan", "Yubin Zhang", "Ji Chu", "Wenhui Huang", "Yongqi Liang", "Jiawei Qiu", "Daxiong Sun", "Ziyu Tao", "Jiajian Zhang", "Weijie Guo", "Ji Jiang", "Xiayu Linpeng", "Yang Liu", "Wenhui Ren", "Jingjing Niu", "Youpeng Zhong", "Dapeng Yu" ]
quant-ph
[ "quant-ph" ]
These authors contributed equally to this work. zhangjw2022@mail.sustech.edu.cn These authors contributed equally to this work. zhongyp@sustech.edu.cn § ABSTRACT As superconducting quantum computing continues to advance at an unprecedented pace, there is a compelling demand for the innovation of specialized electronic instruments that act as crucial conduits between quantum processors and host computers. Here, we introduce a Microwave Measurement and Control System (M^2CS) dedicated for large-scale superconducting quantum processors. M^2CS features a compact modular design that balances overall performance, scalability and flexibility. Electronic tests of M^2CS show key metrics comparable to commercial instruments. Benchmark tests on transmon superconducting qubits further show qubit coherence and gate fidelities comparable to state-of-the-art results, confirming M^2CS's capability to meet the stringent requirements of quantum experiments run on intermediate-scale quantum processors. The system's compact and scalable design offers significant room for further enhancements that could accommodate the measurement and control requirements of over 1000 qubits, and can also be adopted to other quantum computing platforms such as trapped ions and silicon quantum dots. The M^2CS architecture may also be applied to wider range of scenarios, such as microwave kinetic inductance detectors, as well as phased array radar systems. M^2CS: A Microwave Measurement and Control System for Large-scale Superconducting Quantum Processors Dapeng Yu August 26, 2024 ==================================================================================================== § INTRODUCTION Quantum computing holds the potential of solving some hard problems that are otherwise intractable with classical computers, such as large number factorization Shor1,Grover1. Superconducting qubits, one of the most promising platforms towards large-scale, fault-tolerant quantum computers, have made tremendous progresses in recent years arute2019quantum,wu_strong_2021. Both the quality and quantity of qubits in superconducting quantum processors have been rapidly improved wu_strong_2021,xu_digital_2023,quafu, breaking through the 1000 qubits barrier on a single superconducting quantum chip recently IBM1000. Concurrently, to achieve logical qubits and, ultimately, fault-tolerant quantum computing, some progress has been made in qubit error correction acharyaSuppressingQuantumErrors2023,guptaEncodingMagicState2024,niBeatingBreakevenPoint2023,sivakRealtimeQuantumError2023. Such rapid progress poses new challenges for electronic instruments used to control and measure superconducting quantum processors. Electronic instruments are crucial for precise control of superconducting qubits and high-fidelity measurement of their states. Taking the widely used transmon superconducting qubits as an example koch2007,barends2013coherent, electronic instruments must be able to perform the following functions at a minimum bardin2020,krantz2019: (1) output high-precision intermediate-frequency (IF) signals for qubit Z bias and adjust couplers between qubits; (2) output raido-frequency (RF) microwave signals (typically within the C band frequency range of 4–8 GHz) finely tuned for qubit XY operation and state detection; (3) sample RF microwave signals returned from the quantum processor, demodulate the signals to discriminate qubit states. 
Driven by increasing demand in this field, commercial products have been developed to fulfill these requirements, however, their closed-source hardware and digital logic hinder custom optimization, and their high cost limits affordability. Laboratory-based electronic instruments have also been developed, some of which utilize a combination of customized RF hardware and commercial high-performance Field Programmable Gate Array (FPGA) evaluation boards with their FPGA logic being open-sourced dingExperimentalAdvancesQICK2023,stefanazziQICKQuantumInstrumentation2022,xuQubiCExtensibleOpenSource2023,xuQubiCOpenSourceFPGABased2021. Others employ entirely customized FPGA-based hardware to enhance scalability further guoControlReadoutSoftware2019,linScalableCustomizableArbitrary2019,sunScalableSelfAdaptiveSynchronous2020,yangFPGAbasedElectronicSystem2022,yang2021fpga,wang_hardware_2021-1. Here, we introduce an FPGA-based electronic instrument dedicated for large-scale superconducting quantum processors–which we call the Microwave Measurement and Control System (M^2CS). The M^2CS adopts a modular architecture for scalability and flexibility, where the hardware is housed within a 6U compactPCI chassis. Each chassis comprises one backplane and 14 slots which can accommodate arbitrary waveform generator (AWG) and data acquisition (DAQ) modules. Each AWG module features four digital-to-analog (DAC) channels that can generate IF pulses with a sampling rate of 2 Gsps and a vertical resolution of 14 bits. To generate RF microwave pulses for qubit XY control and state detection, some AWG modules are equipped with two surface-mounted In-phase and Quadrature (IQ) mixers that up-convert the IF pulses to RF pulses in the GHz range. To avoid confusion, we call the AWG modules with or without IQ mixers RF-AWG or IF-AWG modules respectively, and refer to AWG when discussing features they share in common. The DAQ modules are essentially the dual of RF-AWG modules. Each DAQ module offers two RF input channels, which are down-converted to IF signals by the on-board IQ mixers, then digitized by analog-to-digital converters (ADCs) with a sampling rate of 1 Gsps and a resolution of 8 bits. To meet expanded system requirements, multiple chassis can be cascaded in a tree-like structure, forming a scalable system with one master chassis connected to a maximum of 11 first-level slave chassis, and more than a hundred second-level slave chassis. The system's stability and efficiency are bolstered by a unified clock network, a 2-level trigger chain, and 1 Gbps UDP Ethernet communication, all integrated on the system backplane. Electronics tests of M^2CS show key metrics comparable to commercial instruments. The IF-AWG output has a spurious-free dynamic range (SFDR) of <-50 dBc over the entire bandwidth of 500 MHz, and a phase noise floor of -140 dBc/Hz beyond 10 kHz frequency offset. The IQ mixers on the RF-AWG modules can be carefully calibrated to suppress the carrier and image sideband leakage to below -80 dBm. The DAQ module supports fast on-board demodulation of 12 channels with a feedback loop latency as low as 180 ns, enabling fast mid-circuit quantum measurement and error correction. Finally, we benchmark the M^2CS performance with superconducting transmon qubits, showing 128.7 μs qubit lifetime and 99.73% CZ gate fidelity, comparable to state-of-the-art results. This confirms M^2CS's capability to meet the stringent requirements of quantum experiments run on intermediate-scale superconducting quantum processors. 
§ SYSTEM ARCHITECTURE The system architecture of M^2CS, depicted in Fig. <ref>, comprises four principal parts: a unified clock network, a two-level trigger chain, a low-latency feedback loop and a 1Gbps UDP Ethernet communication network. These parts are primarily constituted by FPGA internal logic located on the backplane and the modules, supplemented by auxiliary components such as clock and communication chips. The unified clock network distributes 10 MHz reference clock sourced from a Rubidium atomic clock. All FPGAs internally multiply the 10 MHz clock to 250 MHz, serving as the primary operating frequency. The 2 GHz and 1 GHz sampling clocks required for the AWG and DAQ modules are distributed from the backplane using a clock generation device HMC7044 from Analog Devices Inc. analog_devices_hmc7044, ensuring precise clock alignment across modules. Additionally, the HMC7044 provides a unified 500 MHz synchronous clock to the digital-to-analog (DAC) chips on the AWG modules, facilitating synchronization of all output channels without manual calibration. To flexibly control the synchronization of signal generation and acquisition in M^2CS, we adopt a two-level trigger chain architecture. The level 1 trigger is used to align the zero-time points of all chassis. When a user command is received from the host computer, a software start trigger is sent to the master chassis, which subsequently distributes level 1 trigger signals to all slave chassis. This action initiates simultaneous operation of all backplanes within the system. Subsequently, the backplanes transmit level 2 triggers to different modules based on user-defined timestamps, allowing users to control the start time of each module with flexibility. In the low-latency feedback loop, the FPGA on the DAQ module discriminates the qubit states based on the phase of the demodulated signals. These state discrimination results are transmitted back to the backplane encoded in Low-Voltage Differential Signaling (LVDS) format via physically independent Category 8 Ethernet cables. The FPGA on the backplane, based on these results, sends level 2 branch triggers to the AWG modules at the user-defined timestamps. The FPGA on the AWG module selects the corresponding branch waveform data for output based on the branch triggers and a playlist uploaded in advance. During the communication with host computers, M^2CS is tasked with uploading waveform data to AWG modules and downloading sampled data along with demodulation outcomes from DAQ modules. The total data volume can reach magnitudes of MB or even GB per experiment cycle. Given the impact of communication bandwidth on experiment efficiency, the chassis are connected to the host computer through a commercial ethernet switch operating at 10Gb/s, where data is distributed with a 1Gb/s UDP Ethernet protocol within each chassis supported by the ZYNQ FPGA xilinx_zynq-7000_2018 on the backplane, where UDP datagrams are unpacked and distributed either to the designated module or the backplane itself, based on the datagram's content. Diverging from standard compactPCI chassis backplanes, the M^2CS backplane not only provides standard functionalities such as power supply and wiring but also offers highly integrated features, including dynamic clock networks, communication switching, and custom triggers, as depicted in Fig. <ref>(a). The main control FPGA utilizes Xilinx's ZYNQ-7045 chip xilinx_zynq-7000_2018. 
On its Processing System (PS) side, it accommodates Dual-core ARM Cortex-A9 processors running clock management software. Upon detecting the insertion of AWG or DAQ modules, the ZYNQ controls the HMC7044 clock chip analog_devices_hmc7044 to provide 2 GHz or 1 GHz clock for the module respectively. On the Programmable Logic (PL) side, hardware logic circuits are employed to perform unpacking, packing, and forwarding of 1Gbps UDP datagram. The low-latency nature of hardware logic ensures forwarding delays of only a few tens of nanoseconds. In the level 2 trigger RAM, users can store custom 36-bit trigger instructions. This instruction set consists of lower 4 bits representing trigger types (start, stop, or branch requests for AWG modules to select branch waveforms) and upper 32 bits denoting timestamps. When the system timer matches the timestamp value, the control Finite State Machine (FSM) executes the instruction. For start/stop/branch trigger types, the FSM directly transmits the lower 4 bits of the instruction. For feedback trigger types, the FSM adjusts the output to branch 0 or 1 trigger based on feedback from the DAQ. The AWG modules in our system are equipped with AD9739 DAC chips with a sampling rate of 2 Gsps and a resolution of 14 bits analogdevices_ad9739. Xilinx's Kintex-7 FPGA  xilinx_7_2020 is utilized as the main control FPGA, allowing each channel to store waveform data up to 400K, with a maximum 200 μs waveform for each channel. We employ the LVDS communication format between the AD9739 and FPGA to minimize transmission delays. As illustrated in Fig. <ref>(b), we utilize Xilinx's MicroBlaze soft-core processor within the FPGA, executing control software written in C language. This software manages pre- and post-experiment data processing as well as parameter configurations. Upon receiving waveform data from the backplane, it is temporarily stored in the 2 GB DDR memory via the AXI bus before being efficiently transferred to each channel's RAM using the MicroBlaze-controlled DMA (direct memory access) module. During experiments, main control of the AWG is transferred to the control FSM. Trigger signals received from the backplane prompt the FSM to access the Play List RAM, retrieving the storage address of the next waveform data. Subsequently, waveform data is fetched from the Wave Data RAM based on this address and then promptly dispatched to the AD9739 via the interface module. To support branching functions, the Play List RAM accommodates waveform data storage structures with branches. This readout link has been finely optimized, with an internal FPGA delay of only 32 ns (8 clock cycles) from receiving the trigger to transmitting the waveform. Each DAQ module features two dual-channel, 1 Gsps, 8-bit ADC08D1020 chips texasinstruments_adc08d1020_nodate. It also uses the Xilinx Kintex-7 FPGA as its primary control FPGA. Each dual-channel ADC chip connects to an IQ mixer to down-convert the RF input to IF signals for subsequent sampling and processing. For low-latency demodulation of qubit readout signal, a digital signal processing (DSP) module has been designed within the DAQ FPGA, as depicted in Fig. <ref>(c). Each RF input is down-converted to two IF signals first, which are subsequently digitized to two sequences of data i[n] and q[n] respectively, where n is the index of the data. 
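The 36-bit level 2 trigger instructions just described (lower 4 bits for the trigger type, upper 32 bits for the timestamp) can be illustrated with a small host-side encoder/decoder before turning to the demodulation itself. This is a minimal sketch; the specific type codes and the bit-packing order are assumptions for illustration, not the FPGA register layout given in the paper.

def pack_trigger(timestamp: int, trig_type: int) -> int:
    """Pack a 36-bit instruction: upper 32 bits = timestamp, lower 4 bits = trigger type."""
    assert 0 <= timestamp < 2**32 and 0 <= trig_type < 2**4
    return (timestamp << 4) | trig_type

def unpack_trigger(word: int):
    """Recover (timestamp, trigger type) from a 36-bit instruction word."""
    return word >> 4, word & 0xF

# Hypothetical type codes (not from the paper): 0 = start, 1 = stop, 2 = branch, 3 = feedback.
instr = pack_trigger(timestamp=2500, trig_type=2)   # fire a branch trigger at timer tick 2500
print(unpack_trigger(instr))                        # -> (2500, 2)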
To obtain the qubit-state-imparted phase information, the data array are demodulated into a data point (I,Q) in the phase space by the DSP in the following way: {[ I = ∑_n i[n] × demod_I[n] - q[n] × demod_Q[n],; Q = ∑_n i[n] × demod_Q[n] + q[n] × demod_I[n], ]. where demod_I [n] and demod_Q [n] are user-defined demodulation factors, typically taking the form of {[ demod_I[n] = cos ( - ω _demod× nΔ t+ϕ_demod),; demod_Q[n] = sin ( - ω _demod× nΔ t+ϕ_demod), ]. where ω _demod/2π is the demodulation frequency, Δ t=1 ns is the time step determined by the DAQ sampling rate, and ϕ_demod is the demodulation phase. One can also incorporate an envelope such as Hann window into the demodulation factor to optimize the demodulation performance. Up to 12 channels of user-defined demodulation factors are supported for multiplexed readout of superconducting qubits coupled to the same readout line. The DAQ module also supports multiple-time readouts with up to 8 μs sample length. The module includes 2 GB of DDR memory, providing storage for 60,000 sets of demodulation results and over 10 ms of raw data storage per readout line. To implement fast feedback control protocols, the DSP module compares the IQ demodulation data with the user-defined state discrimination thresholds to discriminate the superconducting qubit's state, which is subsequently transmitted to the backplane through the low-latency feedback loop. § ELECTRONICS PERFORMANCE The Spurious-Free Dynamic Range (SFDR) is a critical specification for evaluating the quality of IF-AWG output spectrum. SFDR quantifies the power distinction between the generated signal and the highest spurious signal, typically the harmonics. We connect an IF-AWG module to a Rohde&Schwarz FSL18 spectrum analyzer to measure its spectrum across the 10–2000 MHz range, see Fig. <ref>(a) for example, where a SFDR of -69.4 dBc is measured for a 100 MHz sinusoidal wave generated by the IF-AWG. The SFDR measurements at different IF-AWG output frequencies are shown as inset to Fig. <ref>(a), which are below -50 dBc over the entire bandwidth of 500 MHz. A Rohde&chwarz FSWP26 phase noise analyzer was used to conduct the phase noise tests. We benchmarked the phase noise performance of sinusoidal waves generated by the IF-AWG at different frequencies from 50 MHz to 500 MHz, as depicted in Fig. <ref>(b). The results indicate a phase noise floor of about -140 dBc/Hz beyond 10 kHz frequency offset. Integrating the phase noise in Fig. <ref>(b) from 1 Hz to 30 MHz yields a root mean square (RMS) jitter of 1 ps for the IF-AWG output. Frequency tunable superconducting qubits require precise direct current (DC) sources to adjust their operating frequencies, as well as fast Z pulses to dynamically tune the qubit frequencies for gate operations. As the number of qubits rapidly increases, it is highly desirable to reduce the wiring channels and use IF-AWGs to provide both the DC bias as well as fast Z pulses. This requires proper wiring inside the dilution fridge, and that the IF-AWG output voltage is sufficiently stable to avoid qubit frequency drift. The IF-AWG outputs are buffered by low noise, large-bandwidth differential amplifiers to convert the DAC differential outputs to single-ended outputs with a DC voltage swing of -1 to 1 V, yielding a least significant bit (LSB) resolution of 122 μV. The differential amplifiers are powered by two stages of low-dropout regulators to reject ripples from the power supply. 
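The on-board demodulation defined by the I/Q sums above can be reproduced offline. Below is a minimal numpy sketch assuming a 1 Gsps record (Δt = 1 ns) and an illustrative 30 MHz intermediate frequency; the optional Hann-window envelope and the FPGA fixed-point arithmetic are omitted.

import numpy as np

dt = 1e-9                      # time step set by the 1 Gsps DAQ sampling rate
n = np.arange(500)             # a 500 ns sampling window (illustrative)
f_demod, phi = 30e6, 0.0       # demodulation frequency and phase

# Demodulation factors as in the text: cos/sin of (-omega*n*dt + phi)
demod_I = np.cos(-2*np.pi*f_demod*n*dt + phi)
demod_Q = np.sin(-2*np.pi*f_demod*n*dt + phi)

# Synthetic digitized IF data: a 30 MHz tone whose phase encodes the qubit state
theta = np.pi                  # e.g. 180 degrees for |1>, 0 degrees for |0>
i_data = np.cos(2*np.pi*f_demod*n*dt + theta)
q_data = np.sin(2*np.pi*f_demod*n*dt + theta)

I = np.sum(i_data*demod_I - q_data*demod_Q)
Q = np.sum(i_data*demod_Q + q_data*demod_I)
print(I, Q, np.angle(I + 1j*Q))   # the recovered phase in the (I, Q) plane tracks theta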
As a benchmark test, we set the IF-AWG output to 0 V and 200 mV respectively, and monitor its long term stability (8 hours) using a KEITHLEY DMM6500 6½ digital multimeter, showing a peak-to-peak fluctuation of 31 μV_p-p and 121 μV_p-p respectively, see Fig. <ref>(c). With a typical wiring attenuation of 30 dB and a mutual inductance coupling of 2 pH to the qubit junction loops, this voltage fluctuation corresponds to a flux shift of the order of 1 μΦ_0 and the qubit frequency shift of the order of 1 MHz, a tolerable value given the time span of 8 hours. While the slow drift of the DC voltage only affects the operating frequencies of the qubits, the low frequency noise at the audio band can affect the phase coherence of frequency tunable qubits. We measure the low frequency noise spectrum of the IF-AWG with a Rohde&Schwarz upv audio analyzer, as shown in Fig. <ref>(d), where the spectrum reaches a noise floor of 20 nV/√(Hz) at 10 kHz. It is worth mentioning that the noise floor of the audio analyzer is 10 nV/√(Hz), as confirmed by a 50 Ω load. To generate RF microwave pulses for qubit XY control and state detection, RF-AWG modules are quipped with two surface-mounted IQ mixers. IQ mixers can up-convert IF pulses that are relatively easy to handle with electronic devices to RF microwave pulses, or inversely down-convert RF signals to IF signals for subsequent processing. They have been commonly used in point-to-point communication, test and measurement applications and so on. An IQ mixer consists of four ports: the LO port which is driven by a local oscillator with a continuous drive of cos(ω_LOt), two IF ports for in-phase (I port) and quadrature (Q port) modulation of the LO drive, and the RF port whose output is ideally given by RF(t) = I(t)cos(ω_LOt) + Q(t)sin(ω_LOt). By modulating the I and Q port at a sideband frequency as I(t) = cos(ω_sbt), Q(t) = -sin(ω_sbt), one can obtain a frequency up-conversion at the RF port: RF(t) = cos(ω_sbt)cos(ω_LOt) - sin(ω_sbt)sin(ω_LOt) = cos[(ω_LO+ω_sb)t]. However, as nonlinear analog devices, nonidealities such as offset and amplitude/phase imbalance in the I and Q ports lead to signal leakage at the LO frequency or the image sideband frequency (i.e., ω_LO-ω_sb), which could detrimentally affect the qubit coherence and quantum gate fidelity. Techniques that eliminate LO and image sideband leakage have been well-established at room temperature utilizing spectrum analyzers or additional instruments for output diagnosis jolin2020calibration,herrmann2022frequency. In Fig. <ref>(a), we show the RF-AWG output spectrum before and after applying such calibration techniques, where the LO frequency is 5 GHz, and the sideband frequency is 200 MHz. The LO leakage at 5 GHz is suppressed from ∼ -30 dBm to <-80 dBm, whereas the image sideband leakage is suppressed from ∼ -50 dBm to <-90 dBm. Figure <ref>(b) shows the LO and sideband leakage calibration at various LO frequencies from 4 to 8 GHz, with the sideband frequency fixed at 200 MHz. Both the LO and image sideband leakage are suppressed <-80 dBm after calibration over the entire C band. One critical question arises as how stable is the calibration. In Fig. <ref>(c), we test the long term stability of the calibration and find the LO and image sideband leakage quite stable over 5 days. 
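The single-sideband up-conversion RF(t) = I(t)cos(ω_LO t) + Q(t)sin(ω_LO t) and the leakage mechanism described above can be illustrated numerically. This is a schematic sketch with exaggerated, assumed DC offsets and amplitude/phase imbalance; actual calibration iterates such correction parameters against a spectrum analyzer or the in situ method cited in the text.

import numpy as np

fs = 20e9                          # simulation sampling rate (illustrative)
t = np.arange(0, 2e-6, 1/fs)
f_lo, f_sb = 5e9, 200e6            # LO and sideband frequencies as in the text

def rf_out(eps_amp=0.0, eps_phase=0.0, dc_i=0.0, dc_q=0.0):
    """RF(t) = I(t) cos(w_lo t) + Q(t) sin(w_lo t) with imperfections on the I/Q ports."""
    I = (1 + eps_amp)*np.cos(2*np.pi*f_sb*t) + dc_i
    Q = -np.sin(2*np.pi*f_sb*t + eps_phase) + dc_q
    return I*np.cos(2*np.pi*f_lo*t) + Q*np.sin(2*np.pi*f_lo*t)

def power_dbc(sig, f):
    """Tone power at frequency f relative to the desired upper sideband, in dB."""
    spec = np.abs(np.fft.rfft(sig*np.hanning(len(sig))))
    freqs = np.fft.rfftfreq(len(sig), 1/fs)
    ref = spec[np.argmin(np.abs(freqs - (f_lo + f_sb)))]
    return 20*np.log10(spec[np.argmin(np.abs(freqs - f))]/ref)

sig = rf_out(eps_amp=0.02, eps_phase=0.02, dc_i=0.01, dc_q=0.01)
print("LO leakage   :", power_dbc(sig, f_lo), "dBc")
print("image leakage:", power_dbc(sig, f_lo - f_sb), "dBc")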
Utilizing high speed DAC chips, one can also synthesize RF microwave pulses directly without the need for mixer calibration, but these cutting-edge devices usually have worse phase noise (typically around -90 dBc at 1 kHz offset) and complex synchronization. The frequency mixing scheme we adopt here is relatively simple in system architecture because of low digital clock frequency and shows better phase noise performance. Figure <ref>(d) shows the phase noise of the RF-AWG output, where the LO frequencies are changed from 4 to 8 GHz at a step of 1 GHz, the sideband frequency is fixed at 200 MHz, with a phase noise <-110 dBc at 1 kHz offset. This is a 20 dB improvement in terms of phase noise compared to direct synthesis schemes. Limited by efficiency and resource, such calibration method is cumbersome for large-scale quantum processors requiring hundreds or even thousands of channels. To address this challenge, an in situ calibration method of mixers with superconducting qubits has been demonstrated Wu2024. To test the functionality of the DAQ module, we drive the LO port of the mixers on the DAQ module at 5 GHz, and directly apply a continuous microwave signal to the RF input port. As adjacent readout resonators are typically separated in frequency by ∼ 30 MHz or more arute2019quantum, we apply a 5.03 GHz microwave signal to the RF input and digitize the down-converted IF signals to two sequences of data i[n] and q[n], as shown in Fig. <ref>(a). Fast Fourier transform (FFT) analysis of the i[n] sequence is shown in inset of Fig. <ref>(a), from which we obtain a signal-to-noise ratio (SNR) of 45.80 dBc, a total harmonic distortion (THD) of -51.93 dBc, and 7.2 effective number of bits (ENOB). The FPGA on the DAQ module supports fast on-board demodulation of up to 12 channels, where the demodulation factors of each channel are user-defined. In Fig. <ref>(b), we show the fast on-board demodulation of the signal in Fig. <ref>(a), where the horizontal axis is the user-defined demodulation frequency ω_demod/2π artificially varied to demonstrate its frequency selectivity. A clear peak is observed in the demodulated signal magnitude when ω_demod/2π=30 MHz, with more than 30 dBc contrast over spurious peaks. We further vary the RF input frequency ω_RF/2π from (5-0.18) GHz to (5+0.18) GHz, and analyze the SNR and ENOB of the DAQ, as shown in Fig. <ref>(c), with >45 dBc SNR and >7 ENOB over the entire frequency range. To verify the frequency multiplexing and phase coherence, we apply input waveforms comprising the superposition of sinusoidal waves at 6 different sideband frequencies from 30 MHz to 180 MHz with decreasing amplitudes. The input signal is digitized and simultaneously demodulated with demodulation frequencies ω_demod/2π matching the sideband frequencies, while the demodulation phase ϕ_demod is varied at a step of 30 degrees, see Fig. <ref>(d). The demodulated data for the same ω_demod/2π are concentric around the origin in the phase space. With the AWG and DAQ fully tested separately, we can connect them together and test the feedback functionality, as shown in Fig. <ref>. In this test, AWG1 is used to generate a 100 ns readout wave to the DAQ module with phases of either 0 or 180 degrees to respectively simulate the detection of qubit |0⟩ or |1⟩ state. The DAQ's sampling window is also set to 100 ns to align with the readout waveform. After the DAQ completes the discrimination of the qubit state, the results are transmitted to the backplane via a 1 m Ethernet cable. 
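The DAQ figures of merit quoted above (SNR, THD, ENOB) follow from a standard FFT analysis of a captured record. The sketch below computes SNR-like SINAD and the usual ENOB = (SINAD - 1.76)/6.02 estimate for a simulated 8-bit capture; the input amplitude, noise level, and record length are assumptions, so the resulting numbers are illustrative rather than the measured ones.

import numpy as np

fs, nrec, k = 1e9, 4096, 123                    # assumed 1 Gsps, 4096-point record, tone on FFT bin k
n = np.arange(nrec)
f_if = k*fs/nrec                                # ~30 MHz, chosen for coherent sampling
rng = np.random.default_rng(0)
analog = 0.49*np.sin(2*np.pi*f_if*n/fs) + rng.normal(0, 1.5e-3, nrec)   # assumed input plus noise
code = np.clip(np.round((analog + 0.5)*256), 0, 255)                    # ideal 8-bit quantizer

spec = np.abs(np.fft.rfft(code - code.mean()))**2
p_sig = spec[k]                                 # power in the signal bin
p_rest = spec.sum() - p_sig                     # everything else = noise + distortion
sinad = 10*np.log10(p_sig/p_rest)
enob = (sinad - 1.76)/6.02                      # standard conversion from SINAD to effective bits
print(f"SINAD = {sinad:.1f} dB, ENOB = {enob:.2f} bits")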
The backplane's FPGA then dispatches branch trigger signals to AWG2 based on the outcomes of the state classification. AWG2 selectively outputs either branch 0 or branch 1 waveform accordingly. AWG1 and AWG2 outputs are connected to the oscilloscope via equal-length cables. As illustrated in Fig. <ref>(b) and (c), the total closed-loop latency of the feedback is 180 ns (excluding the 100 ns sampling length). Specifically, the AWG latency is 72 ns, the backplane latency is 24 ns, the DAQ latency is 48 ns, and the remaining 36 ns comes from communication and wiring. § BENCHMARK WITH SUPERCONDUCTING QUBITS To demonstrate the practical performance of M^2CS, we utilize it to calibrate our superconducting quantum processor with 66 qubits Yang2024. The readout resonators of the superconducting qubits are around 6 GHz. As shown in Fig. <ref>(a) and (b), M^2CS achieves a readout fidelity of 99.2% for the qubit ground state |0⟩ and 97.4% for the qubit excited state |1⟩, without the assistance of parametric amplifiers. We further benchmark the qubit coherence by measuring the lifetime T_1, the Ramsey T_2, and the echo T_2,echo of the qubits, as shown in Fig. <ref>(c) and (d). In the T_1 measurement, an RF-AWG module outputs a so-called π-pulse at the qubit frequency after up-conversion, flipping the qubit from the ground state |0⟩ to the first excited state |1⟩. After some delay time t, the qubit state is detected using the DAQ to discriminate the probability P_1 of the qubit being in the |1⟩ state. Scanning the delay time t, we observe an exponential decay in P_1 with a decay constant of T_1=128.7 μs. In the T_2 Ramsey measurement, a π/2-pulse from an RF-AWG rotates the qubit to an equal superpsition of |0⟩ state and |1⟩ state. After some delay time t, another π/2-pulse is applied, followed by measurement using the DAQ. The probability P_1 of |1⟩ state exhibits an oscillation with evolution time t, where the envelope of this oscillation decays at a dephasing time T_2,ramsey=12.0 μs. In the T_2 spin echo measurement, an additional refocusing π-pulse is added between the two π/2-pulses to suppress low frequency noise, resulting in a longer coherence time of T_2,echo=43.4 μs. We further calibrate single qubit gates and two-qubit control-Z (CZ) gates using randomized benchmarking (RB) to extract the gate errors by running m cycles of random Clifford gates arute2019quantum, obtaining single qubit gate fidelity of 99.96%, and two-qubit CZ gate fidelity of 99.73%, see Fig. <ref>(e) and (f). Both the qubit coherence and gate fidelities are comparable to state-of-the-art results. § CONCLUSION In this study, we have demonstrated a Microwave Measurement and Control System (M^2CS) tailored specifically for large-scale superconducting quantum processors. This customized and modular design can meet the stringent requirements of quantum experiments run on intermediate-scale quantum processors, effectively balancing overall performance, scalability and flexibility. Electronic tests of M^2CS show comparable key metrics to those of commercial instruments. Benchmark tests on transmon superconducting qubits confirm M^2CS's capability to control and measure quantum processors, where both the qubit coherence and gate fidelities are comparable to state-of-the-art results. Moreover, the system's compact and scalable design offers significant room for further enhancements that could accommodate the measurement and control requirements of over 1000 qubits. 
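The coherence times and gate fidelities reported in the benchmark section above are extracted with standard fits: an exponential decay for T_1 and an exponential Clifford-sequence decay for randomized benchmarking. The sketch below shows both fits on synthetic data; the conversion r = (1 - p)(d - 1)/d with d = 2 is the usual single-qubit RB formula and is an assumption about the analysis, not a procedure spelled out in the paper.

import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

# T1 fit: P1(t) = exp(-t/T1), delay t in microseconds
t = np.linspace(0, 400, 41)
p1 = np.exp(-t/128.7) + rng.normal(0, 0.01, t.size)         # synthetic data near the quoted T1
(T1_fit,), _ = curve_fit(lambda t, T1: np.exp(-t/T1), t, p1, p0=[100])

# Single-qubit RB: F(m) = A p^m + B, error per Clifford r = (1 - p)(d - 1)/d with d = 2
m = np.arange(0, 1000, 50)
F = 0.5*0.9992**m + 0.5 + rng.normal(0, 0.005, m.size)      # synthetic sequence fidelities
popt, _ = curve_fit(lambda m, A, p, B: A*p**m + B, m, F, p0=[0.5, 0.99, 0.5])
r = (1 - popt[1])*(2 - 1)/2
print(f"T1 fit ~ {T1_fit:.1f} us, average Clifford fidelity ~ {1 - r:.5f}")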
The M^2CS architecture may be adapted to other quantum computing platforms such as trapped ions zhu2023fpga and silicon quantum dots xue2022quantum. It could also be applied to a wider range of scenarios, such as Microwave Kinetic Inductance Detectors (MKIDs) mchugh2012readout,stefanazziQICKQuantumInstrumentation2022, as well as phased array radar systems. This work was supported by the Science, Technology and Innovation Commission of Shenzhen Municipality (KQTD20210811090049034, RCBS20231211090824040, RCBS20231211090815032), the National Natural Science Foundation of China (12174178, 12204228, 12374474 and 123b2071), the Innovation Program for Quantum Science and Technology (2021ZD0301703), the Shenzhen-Hong Kong Cooperation Zone for Technology and Innovation (HZQB-KCZYB-2020050), and Guangdong Basic and Applied Basic Research Foundation (2024A1515011714, 2022A1515110615).
http://arxiv.org/abs/2408.11905v1
20240821180046
A theory of time based on wavefunction collapse
[ "Sung-Sik Lee" ]
gr-qc
[ "gr-qc", "hep-th" ]
=1 compatibility=false [1] { #1} 1/2 ϵ ε † σ̅ γ → ⇒ [1]#1 [1]#1 [1](#1) κ [1]#1 ∴ [1]Eq. (<ref>) [1]Fig. <ref> [1]𝒪(#1) [1]#1 [1]#1 [2]d#1/d#2 [2]∂#1/∂#2 [1]Θ(#1) α̂ ϕ̂ p̂_α p̂_ϕ p_α p_ϕ 3/2 ( 3/2 )^2 α/2 √(^2+) √(k^2+)/3 √(E e^3 α)/3 K_α Ψ_0'(α,ϕ) Ψ_0(α,ϕ) Ψ_(α,ϕ) Ψ(α,ϕ;) q 𝒯 T ϱ § ABSTRACT We propose that moments of time arise through the failed emergence of the temporal diffeomorphism as gauge symmetry, and that the passage of time is a continual process of an instantaneous state collapsing toward a gauge-invariant state. Unitarity and directedness of the resulting time evolution are demonstrated for a minisuperspace model of cosmology. A theory of time based on wavefunction collapse Sung-Sik Lee Department of Physics & Astronomy, McMaster University,1280 Main St. W., Hamilton ON L8S 4M1, Canada Perimeter Institute for Theoretical Physics, 31 Caroline St. N., Waterloo ON N2L 2Y5, Canada August 26, 2024 ====================================================================================================================================================================================================================== § INTRODUCTION Time is the most fundamental concept in physics, yet the least understood one. In the Newtonian paradigm, time is a parameter that labels moments of history and endows them with a chronological order. Like the conductor of an orchestra who silently leads other musicians, time itself is not observable but provides incessant cues for physical degrees of freedom to march on. Physical laws dictate how dynamical variables evolve as functions of time, but explaining the flow of time is not necessarily a mandate of theories in this framework. Einstein's theory of gravity deems temporal evolution as a gauge transformation that generates redundant descriptions of one spacetime. Specifying a moment without a reference to dynamical variables is impossible because only relations among them are measurable<cit.> While the theory predicts correlation among physical observables, it does not explain why events unfold in a particular order. Therefore, relational theories such as general relativity are often challenged with the apparent gap between our experience of instants that persistently pass by and the four-dimensional block universe present once and for all. Quantizing gravity<cit.> comes with a new set of challenges related to time <cit.>. Here, we focus on one. Suppose Ψ is a gauge-invariant state annihilated by the Hamiltonian constraint Ĥ. Being a steady state of the generator of temporal translations, Ψ encodes dynamical information through the entanglement of physical degrees of freedom<cit.>. A moment is defined through a measurement of a variable chosen as a clock. The entanglement between the clock and other variables determines the dynamics, that is, the latter's dependence on the former. However, there are many ways of defining moments, even for one clock variable, because the basis of the clock can be rotated to define a different set of moments in history, which may even change the notion of locality in space<cit.>. To illustrate this point, let us consider a simple constrained system made of two dynamical variables t̂ and x̂ whose Hamiltonian constraint reads Ĥ = p̂_t + p̂_x, where p̂_t and p̂_x are conjugate momenta of t̂ and x̂, respectively. General gauge invariant wavefunctions take the form of Ψ(t,x) = f(x-t). An instant is defined by measuring a clock variable, which we choose to be t̂. 
Upon the measurement of `time' t, the conditional probability for the outcome of x̂ measurement becomes P(x|t)=|Ψ(t,x)|^2/∫ dx' |Ψ(t,x')|^2. Now, let us consider a new basis, | ⟩_± = ∫ dt  1/γ Ai(±t-/γ) | t ⟩, where Ai(t) is the Airy function and γ is a positive constant. 1/γ Ai(t-/γ)[ 1/γ Ai(-t-/γ) ] is peaked around t= with its amplitude exponentially suppressed for t > [t <] with width γ but only power-law suppressed for t < [t >]. The new basis satisfies _+⟨' | ⟩_+ =   _-⟨' | ⟩_- = δ(-'), and the Hamiltonian is invariant under the unitary transformation that generates the basis change. Therefore, one may well define a moment of time from the projective measurement of the clock in the | ⟩_+ or | ⟩_- basis. Upon the measurement of the clock in | ⟩_±, the conditional probability of x is controlled by an instantaneous wavefunction, Ψ_±(,x) = ∫ dt  1/γ Ai(±t-/γ) Ψ(t,x). At a moment of time defined in this new basis, the system is in a linear superposition of states with vastly different t. For example, Ψ(t,x) = δ(t-x) describes a particle localized at position x=t at time t defined in the |t ⟩ basis. For the same state, the instantaneous wavefunction defined at time in the | ⟩_± basis is Ψ_±(,x)=1/γ Ai(±x-/γ), which includes the contributions from the far past and future in the original basis, respectively. In the absence of a preferred basis of clock, we may wonder why we are confined to this state of instant at this moment in time. The fundamental difficulty of defining time in relational quantum theories is that the notion of instant is not gauge invariant. No matter what clock we choose, the state of an instant that arises from a projective measurement of time is not gauge invariant. Therefore, restoring time in quantum gravity may involve reconsidering the role of the temporal diffeomorphism as gauge symmetry<cit.>. For other ideas on the origin of time, see Refs. <cit.> In this paper, we posit that the temporal diffeomorphism is not fundamental but is enforced only approximately. This amounts to replacing δ(Ĥ) with a soft constraint. Relaxing the strict gauge constraint is not as bad as it may sound because gauge symmetry can emerge at long-distance scales in models where the constraint is imposed softly at the microscopic scale<cit.>. Furthermore, strictly gauge-invariant Hilbert spaces can not be written as a product of local Hilbert spaces. A soft projection of gauge non-invariant state Ψ_0 can be implemented through a random walk along the gauge orbit, where one step is taken to be either e^i Ĥϵ or e^-i Ĥϵ with ϵ being an infinitesimal step size and the randomly chosen sign. The state obtained from averaging over all paths of N steps becomes Ψ_N ∼∑_{ϵ_j} e^i ∑_j=1^N ϵ_j ĤΨ_0, where ϵ_j= ϵ or -ϵ. In the large N limit with fixed = √(N)ϵ, the net gauge parameter τ≡∑_j=1^N ϵ_j acquires the Gaussian distribution with width , and eq:averagedstate becomes Ψ() = N() ∫_-∞^∞ d τ  e^-1/2^2τ^2 e^i ĤτΨ_0, where N() is a normalization. Integrating over τ, one obtains Ψ() = √(2π) N() e^-^2/2Ĥ^2Ψ_0. With increasing , Ψ_0 is gradually projected to a gauge-invariant state. The exact gauge constraint is restored at = ∞. How close the state at a finite is to a gauge-invariant state crucially depends on whether the gauge group is compact or non-compact. 
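The soft projection Ψ(T) ∝ e^{-T^2 Ĥ^2/2} Ψ_0 written above is easy to visualize in the eigenbasis of Ĥ. The following minimal sketch, with an assumed smooth spectral profile Φ_0(E) rather than any particular model, shows the r.m.s. violation of the constraint decaying like 1/T, in line with the ⟨Ĥ^2⟩ ∼ T^{-2} behavior quoted later in the Discussion.

import numpy as np

E = np.linspace(-3, 3, 2401)
dE = E[1] - E[0]
phi0 = np.exp(-(E - 0.7)**2)              # assumed smooth Phi_0(E), not gauge invariant

for T in [0.5, 2.0, 10.0, 50.0]:
    w = np.exp(-T**2*E**2/2)*phi0         # soft projection e^{-T^2 H^2 / 2} in the H eigenbasis
    p = np.abs(w)**2
    p /= p.sum()*dE                       # normalized spectral weight of Psi(T)
    h2 = np.sum(E**2*p)*dE                # <H^2> in the projected state
    print(f"T = {T:5.1f}:  <H^2>^(1/2) = {np.sqrt(h2):.4f}")
# The residual violation decays like 1/(sqrt(2) T): the state approaches, but never reaches,
# a gauge-invariant state at any finite T.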
To quantify the violation of the gauge constraint that remains after the soft projection, we use the normalized trace distance d_Ψ(y) ≡1/2 ⟨Ψ | Ψ⟩ tr {| | Ψ⟩⟨Ψ | - e^i y Ĥ | Ψ⟩⟨Ψ | e^-i y Ĥ| } that measures the distance between Ψ and e^i y ĤΨ: for gauge invariant Ψ, d_Ψ(y) = 0 for all y; if Ψ and e^i y ĤΨ are orthogonal, d_Ψ(y) = 1; otherwise it takes values between 0 and 1. If Ĥ is a generator of a compact group, the spectrum of Ĥ is discrete. Because the gauge non-invariant components of Ψ() are uniformly suppressed at large , Ψ() can be made arbitrarily close to a gauge-invariant state: for any non-zero δ, there exists a sufficiently large such that d_Ψ()(y) < δ for all y. The situation is different for non-compact groups such as the temporal diffeomorphism, where gauge-invariant states are generally within a band of states with continuously varying eigenvalues. In such cases, no matter how large is, there always exists a sufficiently large y such that d_Ψ()(y) is O(1). To see this, we write |Ψ_0 ⟩ = ∫ dE Φ_0(E) |E ⟩, where |E ⟩ is the eigenstate of Ĥ with eigenvalue E. The trace distance between |Ψ() ⟩ = e^- ^2/2Ĥ^2 |Ψ_0⟩ and e^iĤ y |Ψ()⟩ becomes d_Ψ()(y) = √( 1- | ∫ dE   |Φ_0(E)|^2 e^-^2 E^2 + i y E/∫ dE   |Φ_0(E)|^2 e^-^2 E^2|^2 ). For smooth function Φ_0(E), lim_y →∞ d_Ψ()(y) = 1 for any finite . In this sense, the non-compact gauge symmetry does not emerge from a soft projection. This difference also affects whether the Coulomb phase of a gauge theory can emerge or not in lattice models with soft gauge constraints. Examples are discussed in Appendix <ref>. § EMERGENT TIME FROM A COLLAPSE OF WAVEFUNCTION We view the failed emergence of non-compact gauge symmetry as the underlying reason why moments of time ever exist, and time continues to flow. There is a similarity between this and how the bulk space emerges in holographic duals of field theories<cit.>. The renormalization group flow, which generates the radial direction of the emergent bulk space<cit.>, can be understood as the gradual collapse of a state associated with a UV action toward the state associated with an IR fixed point through an action-state mapping<cit.>. Here, the UV state, which is not annihilated by the radial constraint, exhibits a non-trivial RG flow, and the inability to project a highly entangled UV state to the trivial IR state creates a space with infinite radial depth in the bulk<cit.>. This leads us to make the following proposal: a gauge non-invariant state represents a moment of time, and time evolution is a continual collapse of the state toward a gauge invariant state. In the following, we apply this scenario to the Friedmann–Robertson–Walker (FRW) model for the scale factor (α) of a three-dimensional space and a massless free scalar (ϕ). The Hamiltonian constraint reads Ĥ = e^-3 (-^2 + ^2 ) + e^3 ρ() , where and are the conjugate momenta of and , respectively, and Ô≡1/2 ( Ô + Ô^†) is `hermitianized' Ô. We consider the α-dependent energy density of the form, ρ(α) = Λ_c(α) + Λ_m e^-3 α + Λ_r e^-4 α. Here, Λ_c(α) = Λ_0+ Λ_1 e^-2 α; Λ_0 is the α-independent cosmological constant, and Λ_1 includes the component of the dark energy that decays as e^-2α<cit.> and the contribution of the spatial curvature. Henceforth, Λ_c(α) will be simply called the dark energy. Λ_m and Λ_r represent the contributions of matter and radiation, respectively. 
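The obstruction expressed by the trace-distance formula above, namely that for a continuous spectrum d_{Ψ(T)}(y) approaches 1 at large y for any finite T, can be checked numerically. A small sketch with an assumed Gaussian spectral weight |Φ_0(E)|^2:

import numpy as np

E = np.linspace(-5, 5, 8001)
dE = E[1] - E[0]
phi2 = np.exp(-E**2)                      # assumed smooth |Phi_0(E)|^2, nonzero at E = 0

def d_trace(T, y):
    num = np.sum(phi2*np.exp(-T**2*E**2 + 1j*y*E))*dE
    den = np.sum(phi2*np.exp(-T**2*E**2))*dE
    return np.sqrt(1 - np.abs(num/den)**2)

for T in [1.0, 3.0, 10.0]:
    print(f"T = {T:4.1f}:", [round(d_trace(T, y), 3) for y in (0.1*T, T, 10*T)])
# For every finite T, gauge translations with y of order T or larger give d of order one:
# the softly projected state is never invariant under the whole non-compact orbit.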
H_FRW can be obtained by projecting a Hamiltonian of all degrees of freedom to a sub-Hilbert space in which the degrees of freedom other than α and ϕ are in an α-dependent state (see Appendix <ref>). The Planck scale is set to be 1. We consider a process in which an instantaneous state Ψ_0(α, ϕ) gradually collapses to a gauge-invariant state through the soft projection in mastereq. However, we have to first address two immediate issues before we can interpret such wavefunction collapses as time evolution. The first is unitarity. In general, the projection of a wavefunction causes its norm to change. One can, in principle, enforce unitarity by choosing N() so that the norm of mastereq is independent of . However, such a choice of N() generally depends on Ψ_0. A state-dependent N() leads to an evolution that is non-linear in state. To keep the linearity, one should be able to choose N() that is independent of state. A state-independent normalization does not affect any observable because two states ψ and a ψ are physically equivalent for any non-zero complex number a. It turns out that there exists a state-independent N() that makes the evolution unitary for a large class of initial states in the large limit. To see this, we write the initial wavefunction as = ∫ dE d Φ(E, ) Ψ_E,(α,ϕ), where Ψ_E,(α,ϕ) is the eigenstate of Ĥ with eigenvalue E with denoting the eigenvalue of p̂_ϕ. We consider normalizable states with lim_E →±∞ |Φ(E,q)|^2=0. Then, mastereq is written as Ψ(T) = √(2π) N() ∫ dE d e^-^2/2 E^2Φ(E, ) Ψ_E,(α,ϕ), where has been abbreviated into Ψ(T). Its norm becomes ⟨Ψ() | Ψ() ⟩ = 2 π^2 N()^2 ∫ dE d |Φ(E, )|^2 e^-^2 E^2. For |Φ(E, )|^2 that is smooth and non-zero at E=0, normDelta approaches 2 π^3/2 N()^2 ∫ d |Φ(0, )|^2 in the large limit. With the choice of N() = 1/2 π√(), the norm becomes independent of . Unitarity emerges in the large limit because the projection affects the norm of the near-gauge invariant states in a universal manner.[ This is analogous to the way universality emerges as the renormalization group flow is controlled by a small set of couplings at long-distance scales. ] The second issue is the directedness of time. The gradual projection of the wave function is a result of a stochastic evolution along the gauge orbit. Under such an evolution, a state usually diffuses in all directions in the gauge orbit. If one of the variables is used as a clock, the diffusion would create a state that is merely more spread over a more extensive range of past and future without pushing time in one direction. In the present case, however, a directedness of time evolution can arise because Ĥ is asymmetric in α. In particular, the kinetic terms in H_FRW are exponentially suppressed at large α. This can be viewed as the α-dependent effective masses, which tend to make the dynamics slower as the universe gets bigger. Since the random walk is exponentially slowed down at large α, configurations generated through the random walk at a larger scale factor add up with a stronger constructive interference in the ensemble of eq:averagedstate. This α-dependent effective mass makes the state evolve preferably toward the one with larger α with increasing . For the same reason, gauge invariant states can have exponentially larger amplitude at larger α, as is shown in fig:psiinv. A projection of Ψ_0 with finite support in α toward such gauge invariant states makes Ψ_0 evolve toward the region of large α to maximize the overlap. 
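The emergence of unitarity described above rests on a simple scaling fact: assuming delta-normalized eigenstates, the Gaussian-weighted integral entering the norm behaves as ∫ dE |Φ(E,q)|^2 e^{-T^2 E^2} → √π |Φ(0,q)|^2 / T at late T, so one state-independent T-dependent rescaling makes the norm constant for every smooth Φ. A quick numerical check of that scaling (the profile below is an assumption):

import numpy as np

E = np.linspace(-4, 4, 4001)
dE = E[1] - E[0]
phi2 = 1/np.cosh(E)**2                    # an assumed smooth |Phi(E,q)|^2 at fixed q

for T in [1.0, 5.0, 20.0, 100.0]:
    I_T = np.sum(phi2*np.exp(-T**2*E**2))*dE      # Gaussian-weighted integral in the norm
    print(f"T = {T:6.1f}:  T * integral = {T*I_T:.4f}   (limit sqrt(pi)*|Phi(0)|^2 = {np.sqrt(np.pi):.4f})")
# Since T * integral converges to a Phi-independent multiple of |Phi(0)|^2, a single
# normalization N(T) removes the T dependence of the norm for all such initial states.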
Now, we demonstrate the unitarity and directedness of the time evolution through an explicit calculation. We write eigenstates of H_FRW with eigenvalue E as Ψ_E,q(α,ϕ) = e^i q ϕ + 3/2α f_E,q(α), where f_E,q(α) satisfies f^”_E,q(α) + P_E,q(α)^2 f_E,q(α) = 0, where P_E,q(α)^2 = q^2 + (3/2)^2 + ρ(α) e^6 α - E e^3 α. For simplicity, we focus on states with q ≲ 1, and assume that there exists a hierarchy among different types of energy densities such that (Λ_1/Λ_0)^1/2≫Λ_m/Λ_1≫Λ_r/Λ_m ≫ 1/Λ_r^1/2≫ 1. In this case, the evolution undergoes a series of crossovers at α_A ∼log (1/Λ_r^1/2), α_B ∼log (Λ_r/Λ_m) and α_C ∼log (Λ_m/Λ_1). Between these crossover scales, one of the terms dominates the energy density in eq:PE, which results in the following epochs: 1) pre-radiation era (α≪α_A), 2) radiation-dominated era (α_A≪α≪α_B), 3) matter-dominated era, (α_B≪α≪α_C), 4) dark-energy-dominated era (α_C≪α). The dark-energy-dominated era is further divided into two sub-eras around α^* ∼1/2log ( Λ_1/Λ_0), depending on whether the Λ_1 or Λ_0 term dominates the dark energy. Below, we describe the evolution of the universe in each era. §.§ Pre-radiation era In this era, the Hamiltonian constraint becomes H = e^-3 α( ∂_α^2 - ∂_ϕ^2 ). This may not describe the realistic pre-radiation era as it ignores other effects, such as inflation. Nonetheless, we study this as a toy model because the exact solution available in this limit is useful for demonstrating the general idea without an approximation. Normalizable eigenstates of H have non-positive eigenvalues. Eigenstates with eigenvalue E (E ≤ 0), which are regular in the small |E| limit, are given by Ψ_E,^(±)(α,ϕ) = e^α+ i ϕ (-E)^∓ i ϵ_/3  J[ ± i 2/3ϵ_ ; 2/3√(-E) e^α],        where J[ν;z] is the Bessel function of the first kind of order ν and ϵ_ = √( ^2 + ). In the small |E| limit, eq:generalsol reduces to gauge-invariant states: lim_E→0Ψ_E,^(±)(α,ϕ) ∼ e^α e^i ϕ± i ϵ_α. A general normalizable gauge non-invariant state can be written as Ψ_0(α,ϕ) = ∑_s=±∫_-∞^0 dE ∫ d Φ_s(E, ) Ψ_E,^(s) (α,ϕ).   For Φ_±(E, ) that is smooth in E, eq:mastereq2 at large becomes Ψ(T) = ∑_s=±∫ d Φ_s(0, ) e^i ϕ + i s ϵ_αχ_ ,s( e^α/^1/3) + O(1/),   where χ_ ,±(z) = 6^∓ i2ϵ_/ 3π^1/2/2 z^3/2[   _0F̃_2(; 3 ± 2 iϵ_/6, 3 ± i ϵ_/3; z^6/648)- √(2)/36  _1F̃_3( 1; 3/2, 3 ± i ϵ_/3, 9 ± 2 iϵ_/6; z^6/648)z^3 ] with _pF̃_p'(a_1,..,a_p;b_1,..,b_p';x ) = _p F_p'(a_1,..,a_p;b_1,..,b_p';x )/Γ(b_1)..Γ(b_p'). psiaphiDnonchiral describes the evolution of the state as it gradually collapses toward a gauge invariant state with increasing which is regarded as time. χ_ ,±(z), which controls the magnitude of the wavefunction for each component of q, is peaked at z_^*, as is shown in fig:chikz. The wavefunction for α is peaked at -dependent scale factor α() with a finite uncertainty. At time , e^α()∼^1/3, and its conjugate momentum is p_α() ≈±ϵ_q. While is not gauge invariant, it satisfies the Hamiltonian constraint at the semi-classical level. Since the state of α is fixed at each , α is not an independent dynamical variable. The scalar, which retains information about the initial state, is the physical degree of freedom. Therefore, the present theory keeps the same number of physical degrees of freedom as the system in which the gauge symmetry is strictly enforced. For T ≫ 1, the norm of the wavefunction is independent of and the resulting unitary time evolution can be written as Ψ(α,ϕ;+Δ) = e^-i ΔĤ_eff()Ψ(α,ϕ;), where Ĥ_eff() = 1/3 - Π̂√(^2+) is the effective Hamiltonian. 
Here, Π̂ is the operator that takes eigenvalues ± 1 for ψ^(±)_E,.[Π̂ commutes with because a translation in ϕ does not change the parity of ψ^(±)_E,.] The effective Hamiltonian makes α to increase with increasing time irrespective of p_α. This arrow of time arises because the preferred direction of the gauge parameter is determined by the state: for states with p_α > 0 (p_α < 0), e^i ϵĤ (e^-i ϵĤ) generates a stronger constructive interference to always push the state to larger α. §.§ Radiation and matter-dominated eras At _A = 1/Λ_r^3/2, the peak of the wavefunction reaches the first crossover scale: α(_A) ∼α_A. For > _A, the evolution becomes dominated by radiation and then matter consecutively. We consider the two eras together because the analysis is parallel for those two cases. In each era, we can keep only one dominant term in the energy density to write eq:PE as P_E,q(α)^2= C_n e^n α - E e^3 α with C_2=Λ_r and C_3=Λ_m, respectively. In solving eq:PE, it is useful to understand the relative magnitude between the two terms in PEq23 for typical values that E and α take. At time , the range of E in eq:mastereq2 is E() ∼^-1 while the wavefunction is peaked at α(). At _A, the two terms are comparable: C_2 e^2 α(_A)/ E(_A) e^3 α(_A)∼Λ_r e^2 α_A∼ 1[It follows from the fact that ∼ e^3α() in the pre-radiation era.]. For ≫_A, a hierarchy emerges such that C_ne^n α()≫ E() e^3 α()≫ 1. This will be shown to be true through a self-consistent computation in the following. For now, we proceed, assuming that this is the case. With P_E,q≫ 1, we can use the WKB-approximation to write the eigenstates of H with eigenvalue E as Ψ_E,q^(s)(α,ϕ) = e^3/2α + i q ϕexp[ i s ∫ P_E,q(α) dα]/√(P_E,q(α)) with s=± 1. Furthermore, eq:smallE allows us to expand eq:PsiEqn around E=0 to write Ψ_E,q^(±)(α,ϕ) ≈ e^3/2α + i [ q ϕ±∫ P_0,q(α) dα∓ E ∫e^3α/ 2 P_0,q(α) dα] /√(P_0,q(α)). To the leading order in 1/T, the integration over E in eq:mastereq2 leads to Ψ(T) = ∑_s=±∫ d Φ_s(0, ) e^ i ( ϕ + 2 s √(C_n)/n e^n/2α) × χ_n ( e^α/ (C_n ^2)^1/6-n), where χ_n(z) = z^6-n/4 e^ -z^6-n/2 (6-n)^2. At time , the wavefunction is peaked at e^α()∼ (C_n ^2)^1/6-n. In the radiation-dominated era, the size of the universe increases as e^α(T)∼ e^α_A( T/T_A )^1/2 until α() reaches α_B around _B ∼Λ_r^3/2/Λ_m^2. In ≫_B, the matter dominates and the universe expands as e^α(T)∼ e^α_B( T/T_B )^2/3. We note that eq:smallE is indeed satisfied throughout the radiation-dominated era and afterward because C_ne^n α()/ E() e^3 α()∼C_ne^n α()/ C_n^1/2 e^3 α() - 6-n/2α() ∼( C_ne^n α())^1/2≫ 1 for ≫_A. Therefore, the approximation used in psiaphigeneral is justified. In these eras, the effective Hamiltonian is given by Ĥ_eff() = 2/(6-n) - √(C_n)Π̂e^n/2 to the leading order in e^-α, where Π̂ is an operator that takes eigenvalue s for Ψ_E,q^(s)(α,ϕ). In the regime where the WKB approximation is valid, Π̂≈/||. The effective Hamiltonian does not depend on q to the leading order in e^-α. §.§ Dark-energy-dominated era Around time _C ∼Λ_m/Λ_1^3/2, the wavefunction becomes peaked at α_C. Beyond this size, the dark energy dominates and eq:PsiEqn becomes Ψ_E,q(α,ϕ) = e^3/2α e^i ( q ϕ±η(α) ∓ E ξ(α) ) /[ Λ_0 e^6α + Λ_1 e^4α]^1/4, where η(α)= e^2α/3Λ_0^2 e^4α + 3 Λ_0 Λ_1 e^2α + 3 Λ_1^2 /( Λ_0 e^2α+Λ_1 )^3/2 + Λ_1^3/2, ξ(α) = 1/4 √(Λ_0)log[ ( √(Λ_0 e^2α) + √(Λ_0 e^2α + Λ_1))^2 /Λ_1]. The soft projection gives the time-dependent wavefunction, Ψ(T) = e^3/2α/√()∑_s=±∫ d Φ_s(0, ) e^ i ( ϕ + s η(α) ) e^ -ξ(α)^2/2 ^2 /[ Λ_0 e^6α + Λ_1 e^4α]^1/4. 
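The self-consistency condition invoked above, C_n e^{n α(T)} ≫ E(T) e^{3 α(T)} with E(T) ∼ 1/T, can be checked in a couple of lines once the peak scale factor e^{α(T)} ∼ (C_n T^2)^{1/(6-n)} is inserted. A sketch with C_n set to 1 (illustrative):

import numpy as np

def dominance_ratio(C, n, T):
    ea = (C*T**2)**(1/(6 - n))       # peak scale factor e^{alpha(T)} ~ (C_n T^2)^{1/(6-n)}
    E = 1.0/T                        # typical residual eigenvalue E(T) ~ 1/T
    return C*ea**n/(E*ea**3)         # ratio of the two terms in P_{E,q}(alpha)^2

for n, label in [(2, "radiation (C_2 = Lambda_r)"), (3, "matter (C_3 = Lambda_m)")]:
    print(label, [f"{dominance_ratio(1.0, n, T):.1e}" for T in (1e2, 1e4, 1e6)])
# The ratio equals (C_n e^{n alpha})^{1/2} and grows with T (as T^{1/2} in the radiation era
# and as T in the matter era), so the small-E expansion behind the WKB eigenstates is justified.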
In the first part of the dark-energy-dominated era, Λ_0 is negligible, and psiaphiLambdac reduces to psiaphigeneral with n=4 and C_4=Λ_1. In this era, the universe expands as e^α()∼ e^α_C (T/T_C), and its unitary evolution is governed by eq:Heff_general for n=4. becomes Ψ(T) ≈∑_s=±∫ d Φ_s(0, ) e^ i ( ϕ + s/2Λ_1^1/2 e^2α) χ_4( e^α/√(Λ_1))    with χ_4(z) = z^1/2 e^ -z^2/8. In this era, the universe expands as e^α()∼ e^α_C (T/T_C) and the evolution is governed by the effective Hamiltonian, Ĥ_eff() = 1/ - √(Λ_1 )Π̂e^2. At ^* = Λ_0^-1/2, the evolution crossovers to the Λ_0-dominated era and the wavefunction is peaked around α^* ≡1/2logΛ_1/Λ_0. In ≫^*, the form of the wavefunction becomes qualitatively different. In α≪α^*, it is still described by psiaphigeneral with n=4. In α≫α^*, however, the wavefunction becomes Ψ(T) = ^-1/2Λ_0^-1/4× ∑_s=±∫ d Φ_s(0, ) e^ i ( ϕ + s √(Λ_0)/3 e^3 α) e^ -(α- α^* )^2/8 Λ_0^2. As is shown in fig:3a, the peak of the wavefunction is pinned at α^*, and the width grows as Δα∼ on the side of α>α^*. While the expectation value of e^α grows exponentially in T, the wavefunction acquires an increasingly large uncertainty of α. In this era, the effective Hamiltonian, which can be written as Ĥ_eff() = 1/ (-α^*) ( - √(Λ_0 ) e^3Π̂) in α≫α^*, describes the broadening of the wavefunction. Therefore, the semi-classical time evolution ends once the α-independent cosmological constant dominates. The change in the character of the time evolution in the Λ_0-dominated era can be understood from the profile of gauge-invariant wavefunction. For α < α^*, the gauge-invariant wavefunction Ψ_E=0,q(α,ϕ) in eq:PsiEqn3 grows exponentially in α as is shown in fig:3b. If an initial wavefunction is localized in α < α^*, the projection e^-^2/2Ĥ^2 pushes the wavefunction to the region with larger amplitude to maximize the overlap, which gives rise to the directed semi-classical time evolution. On the other hand, the amplitude of Ψ_E=0,q(α,ϕ) becomes flat in α > α^*, and the projection makes the wavefunction evolve diffusively. § DISCUSSION In the present proposal, the time evolution is one big measurement that causes a gauge non-invariant state of an instant to gradually collapse toward a gauge-invariant state. The wavefunction collapse is not only an intrinsic part of time evolution<cit.> but is the very driving force. Time flows toward the direction that maximizes the overlap between the instantaneous state and a gauge-invariant state. Therefore, the violation of the Hamiltonian constraint at the quantum level plays the role of an absolute time. A prediction that follows is that the expectation value of Ĥ^2, which is in principle measurable, is non-zero for our vacuum, and decreases monotonically with time as ⟨Ψ() | Ĥ^2 | Ψ() ⟩∼ T^-2. § ACKNOWLEDGMENTS The research was supported by the Natural Sciences and Engineering Research Council of Canada. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. § FAILED EMERGENCE OF COULOMB PHASE FROM A SOFT NON-COMPACT GAUGE CONSTRAINT In models that exhibit emergent gauge symmetry, the full Hilbert space of microscopic degrees of freedom includes states that do not satisfy Gauss's constraint. Nonetheless, gauge theories can dynamically emerge at low energies in the presence of interactions that energetically penalize states that violate the constraint. 
In this appendix, we review how this works for a compact group and discuss how it fails for the non-compact counterpart. §.§ U(1) group Here, we consider a lattice model where the pure U(1) gauge theory emerges at low energies. Let θ̂_i,μ be the U(1) rotor variable defined on link (i,μ) of the d-dimensional hyper-cubic lattice, where i is the site index and μ=1,2,..,d denotes d independent directions of links. For links along -μ direction, we define θ̂_i,-μ = - θ̂_i-μ,μ. n̂_i,μ denotes the conjugate momentum of θ̂_i,μ. With θ_i,μ∼θ_i,μ + 2π, n̂_i,μ takes integer eigenvalues. The Hamiltonian is written as Ĥ = U ∑_i Q̂_i^2 + g ∑_i,μn̂_i,μ^2 + ..., where Q̂_i ≡∑_μ (n̂_i μ - n̂_i-μ,μ) and ... denotes other terms. The first two terms in the Hamiltonian respect the local U(1) symmetry for every link, but ... may partially or completely break the symmetry. For example, we add Ĥ_J = - 2 J ∑_i,μcos(θ̂_i,μ) that breaks all internal symmetry. We are interested in the low-energy spectrum of the theory in the limit that U is larger than all other couplings. If we view n̂_i,μ as the electric flux in direction μ, Q̂_i corresponds to the divergence of the electric field evaluated at site i. The U-term in the Hamiltonian penalizes states that violate Gauss's constraint. In the U →∞ limit, Gauss's constraint is strict, and states with finite energies only have closed loops of electric flux lines. For a finite U, Gauss's constraint is not strictly enforced. However, the gap between the low-energy sector with energy E ≪ U and the sector with E ∼ U guarantees that the low-energy Hilbert space evolves adiabatically as U is decreased from infinity to a finite value as long as U is much larger than other couplings. Therefore, there remains a one-to-one correspondence between states with Q_i=0 and the states with E ≪ U for a large enough U. This guarantees that there exists a unitary transformation V̂ that rotates the basis such that the Hamiltonian has no off-diagonal elements that mix the Q_i=0 sector and the rest. In the rotated basis, Gauss's law becomes an exact constraint within the low-energy Hilbert space with E ≪ U. Using the standard degenerate perturbation theory, one can derive the pure U(1) gauge theory as the low-energy effective Hamiltonian, V̂ĤV̂^† = g ∑_i,μn̂_i,μ^2 - ∑_C t_C cos( ∑_ (i,μ) ∈ Cθ̂_i,μ), where t_C ∼ J (J/U)^L_C-1 with L_C being the length of closed loop C. For g ≪ t_, the gauge theory is in the deconfinement phase that supports (d-1) gapless photons. The gaplessness of the photon is protected from small perturbations. Therefore, the Coulomb phase emerges through the soft Gauss constraint for the U(1) group. §.§ R group Now, we consider a non-compact counterpart of eq:model3 by replacing θ_i,μ with a non-compact variable x̂_i,μ, Ĥ = U ∑_i Q̂_i^2 +g ∑_i,μp̂_i,μ^2 + .... Here, p̂_i,μ denotes the conjugate momentum of x̂_i,μ. Their eigenvalues can take any real number. Q̂_i ≡∑_μ (p̂_i μ - p̂_i-μ,μ) is the generator of a local R transformation at site i. The symmetry-breaking perturbation, which is included in ..., is written as Ĥ_J = J ∑_i,μx̂_i,μ^2. The question is whether the local R symmetry emerges at a large but finite U in the presence of such perturbations. For simplicity, let us consider only Ĥ_J in the perturbation, which is enough for our purpose. In this case, the theory is quadratic and can be exactly solved. 
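Anticipating the Fourier diagonalization carried out below, where the dispersion E_{k,m} = 2√(J V_{k,m}) with V_{k,1} = g + 4U Σ_μ sin^2(k_μ/2) and V_{k,m≥2} = g is obtained, a few lines of numpy confirm the statement that every mode stays gapped for any J, g > 0, so no gapless photon survives. The couplings and lattice size below are illustrative assumptions.

import numpy as np

d, L = 3, 20
U, g, J = 100.0, 0.1, 0.1                       # assumed couplings with U much larger than g, J
k = 2*np.pi*np.arange(-L//2, L//2)/L
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
s2 = np.sin(kx/2)**2 + np.sin(ky/2)**2 + np.sin(kz/2)**2

E_long = 2*np.sqrt(J*(g + 4*U*s2))              # longitudinal mode, m = 1
E_trans = 2*np.sqrt(J*g)*np.ones_like(s2)       # the (d-1) transverse modes, m >= 2
print("minimum gaps:", E_long.min(), E_trans.min())   # both strictly positive for J, g > 0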
In the Fourier space, we write ( [ x̂_i,μ; p̂_i,μ ]) = 1/√(L^d)∑_m∑_k ( [ x̂_k^(m); p̂_k^(m) ]) ε_k,μ^(m)  e^i r_i k, where k = 2 π/L(l_1,..,l_d) with l_i = -L/2,.., L/2-1 denotes discrete momenta that are compatible with the periodic boundary condition for the system with linear size L. ε_k,μ^(m) with m=1,..,d denotes the polarization of the m-th mode with ε_k,μ^(m)*ε_k,μ^(n) =δ_m,n. In terms of the Fourier mode, the Hamiltonian becomes diagonal, Ĥ =∑_k,m[ J x̂_k^(m)x̂_-k^(m) + V_k,mp̂_k^(m)p̂_-k^(m)], where V_k,1= g + 4U ∑_μsin^2 k_μ/2 and V_k,m ≥ 2= g. Here, m=1 represents the longitudinal mode with ε_k,μ^(1) = e^i/2 k_μsin(k_μ/2) /√(∑_νsin^2(k_ν/2) ), and 2 ≤ m ≤ d represent (d-1) transverse modes. The energy dispersion of the mode is given by E_k,m = 2 √(J V_k,m). It is noted that all excitations are gapped for any J, g > 0. Therefore, there is no gapless photon. The failed emergence of the Coulomb phase is a consequence of the fact that the states with Q_i=0 are in the middle of the spectrum with continuously varying Q_i. Because there is no gap between the gauge invariant states and others, an arbitrarily small perturbation mixes states with different eigenvalues with an O(1) weight. It destroys the one-to-one correspondence between the gauge invariant states and the low-energy states for any non-zero J/U. This can also be understood in terms of the ground state, ⟨ x | ψ_0 ⟩ = e^ -1/2∑_k,m√( J/ V_k,m)| x_k^(m)|^2 . In the thermodynamic limit, the trace distance between the ground state and the state obtained by applying a local R transformation e^i y Q̂_0 at the origin is d_ψ_0(y) = √( 1- e^ -2 y^2 ∫dk/(2π)^d√( J/ g + 4U ∑_μsin^2 k_μ/2)∑_νsin^2 k_ν/2). As expected, only the longitudinal modes contribute to the trace distance. Due to the soft longitudinal mode, there always exists y for which eq:symmetry4 becomes O(1) for any J/U ≠ 0. § REDUCED FRW MODEL In principle, we should treat all degrees of freedom on an equal footing. Let us write the full Hamiltonian as Ĥ = e^-3 (-^2 + ^2 ) + ĥ_X . Here, X collectively represents all other degrees of freedom that include radiation, matter and other fields that source the dark energy and ĥ_X= h(, X̂, p̂_X) denotes the Hamiltonian that governs their dynamics. Let |X(α) ⟩ be an eigenstate of ĥ_X with energy density ρ_0(α) at each scale factor α: ĥ_X |X(α) ⟩ = e^3αρ_0(α) |X(α) ⟩. Now, we consider a sub-Hilbert space defined by the projection operator, P̂ = ∫ d α dϕ  | α, ϕ⟩⟨α, ϕ |, where | α, ϕ⟩≡ |α⟩⊗ |ϕ⟩⊗ |X(α) ⟩. For |Ψ⟩ = ∫ d α dϕ Ψ(α,ϕ) |α, ϕ⟩, the Hamiltonian projected to the sub-Hilbert space acts as P̂Ĥ |Ψ⟩ = ∫ d α dϕ [ HΨ(α,ϕ) ] |α, ϕ⟩, where [ HΨ(α,ϕ) ] = 1/2{[ e^-3α( ∂_α^2 + ⟨ X(α) | ∂_α^2 | X(α) ⟩ +2 ⟨ X(α) | ∂_α | X(α) ⟩∂_α - ∂_ϕ^2 ) + e^3αρ_0(α) ] + h.c. }Ψ(α,ϕ), where h.c. represents the Hermitian conjugate. Without loss of generality, we can choose the phase of |X(α) ⟩ such that ⟨ X(α)| ∂_α |X(α) ⟩ = 0 because α is non-compact. With ρ(α) ≡ρ_0(α) + e^-6α⟨Ψ(α) | ∂_α^2 | Ψ(α) ⟩, we obtain the projected Hamiltonian in H_FRW, H = e^-3 α (∂_α^2 - ∂_ϕ^2 ) + e^3 αρ(α) , where ρ(α) behaves as an α-dependent energy density contributed from X degrees of freedom. § CHIRAL LIMIT Chiral limit. As a warm-up, we consider the chiral limit where the initial state has a large momentum for α, = e^i α with |1/∂_α/| ≪ 1. Under the unitary transformation →, the Hamiltonian is transformed into Ĥ→Ĥ' = e^-i αĤ e^i α. To the leading order in 1/, the rotated Hamiltonian becomes Ĥ' = 1/2[ e^-3 ( - ^2 -2 + ^2 ) + h.c. ]. 
The eigenstates of Ĥ' with eigenvalue E are Ψ'_E,(α,ϕ) = e^α e^ i( ϕ + ε_α - E/6 e^3 α), where is the conjugate momentum of ϕ and ε_ = ^2-^2/2. For initial state = ∫ dE d Φ(E, ) Ψ'_E,(α,ϕ), mastereq becomes = √(2π)∫ dE d Φ(E, ) e^-^2/2E^2Ψ'_E,(α,ϕ) ≈ 2π^-1/2∫ d Φ(0, ) e^ - e^6 α/72 ^2 ^2 + α + i( ϕ + ε_α) , where it is assumed that Φ(E, ) is non-zero and smooth at E=0. In terms of T ≡1/3log, the -dependent wavefunction can be written as Ψ(α,ϕ;T) = 2π∫ d Φ(0, ) χ_( e^α-T)   e^ i ϕ + i ε_ T, where χ_(z) = z^ i ε_ e^ -z^6/72 ^2 + 3/2log z. psiTchiral describes the evolution of the quantum state of α and ϕ as the state gradually collapses toward the gauge invariant state with increasing . We regard as time. At large , α is in state χ_( e^α-T), where χ_(z) is peaked at z^*=3√(2) ||. The scale factor has expectation value α∼ T + log z^* and conjugate momentum p_α∼ε_q at time . Although is not gauge invariant, it satisfies the Hamiltonian constraint at the semi-classical level. Since χ_( e^α-T) is independent of the initial state of α and is completely determined by and , α is not an independent dynamical variable. The scalar, which retains information about the initial state, is the only physical degree of freedom that survives in a late time. Therefore, the present theory keeps the same number of physical degrees of freedom as the system in which the gauge symmetry is strictly enforced. The norm of the wavefunction is independent of in the late time limit. The resulting unitary time evolution can be equivalently described as Ψ(α,ϕ;T+Δ T) = e^-i Δ T Ĥ_effΨ(α,ϕ;T), where Ĥ_eff = - 1/2 ( ^2 - ^2 ) is the effective Hamiltonian. § PROBLEM OF TIME In the standard formulation of quantum gravity, the Hamiltonian generates gauge transformations, and physical states satisfy the Wheeler-DeWitt equation, Ĥ | ψ⟩ =0. Since there is no external time, the wavefunction describes time-evolution through correlations among dynamical degrees of freedom that include clock and other variables. The theory predicts the conditional probability for other variables to take certain values upon the clock takes a certain value. However, the theory does not specify what constitutes a moment of time uniquely because one can use different basis of the clock state to define a moment. § SYMMETRY OF GROUND STATES IN QUANTUM MECHANICS MODELS §.§ U(1) group Let us consider the Hamiltonian, Ĥ = U n̂^2 - 2J cos(θ̂), where θ̂ is a rotor variable with θ̂∼θ̂+ 2π and n̂ is its conjugate variable that satisfies [n̂,θ̂]=-i. Let us focus on the ground state of this Hamiltonian. In the U ≫ J limit, the ground state is given by ⟨θ | ψ_0 ⟩ = 1 + 2J/Ucos (θ) + O( (J/U)^2 ). For a large U/J, the overlap between the ground state and the state obtained by applying the U(1) transformation by angle ϕ is O(1), ⟨ψ_0 | e^i ϕn̂ | ψ_0 ⟩ = 1-O((J/U)^2) for any ϕ. In other words, ψ_0(θ) uniformly converges to 1 in the large U limit. In this sense, the U(1) symmetry emerges in the ground state in the large U limit. §.§ R group The non-compact version of eq:model1 is the harmonic oscillator, Ĥ = U p̂^2 + J x̂^2, where p̂ and x̂ are the canonical variables with [p̂,x̂]=-i. Being non-compact, their eigenvalues lie in ℝ. The ground state is the Gaussian wavefunction, ⟨ x | ψ_0 ⟩ = 1/(√(π)α)^1/2 e^-x^2/2 α^2, where α=(U/J)^1/4. Unlike eq:ground1, the ground state does not uniformly converge to the translationally invariant state in the large U limit. 
The overlap between the ground state and the state obtained by a translation by y is given by ⟨ψ_0 | e^i y p̂ | ψ_0 ⟩ = e^-y^2/4 α^2. For any finite U/J, there exists y > α such that eq:symmetry2 is arbitrarily small [lim_y →∞ and lim_U →∞ do not commute.]. Therefore, the translational symmetry fails to emerge for any finite U/J. §.§ Moment of time in a ground state of Hamiltonian Suppose that the gauge constraint in eq:model50 is only energetically imposed by a Hamiltonian that includes small additional terms that break the gauge symmetry. As a simple example, we consider Ĥ' = U ( p̂_t + ĥ_x )^2 + J (t̂ - τ)^2 + ..., where the first term is the square of eq:model50 and the second is an operator that does not commute with eq:model50 and whose spectrum is bounded from below. τ is a constant. We regard the ground states of this Hamiltonian as forming the physical Hilbert space, as a generalization of the gauge-invariant Hilbert space. At U=∞, the problem reduces to the one in the previous section. For a large but finite U, the ground states of eq:model5 generally include states with non-zero charge for eq:model50, ⟨ x,t| ψ_0 ⟩ =∑_n ∫ d   c_n() e^-i (E_n+) tϕ_n(x), where c_n() determines the weight of states with charge . It satisfies [ U ^2 - J (∂/∂ - i τ)^2 ] c_n() = ϵ_0   c_n() with ϵ_0 being the energy of the ground state. The solution is given by c_n() = d_n   e^ -α^2/2^2 + i τ with α=(U/J)^1/4 for an arbitrary constant d_n. In the x,t basis, the ground state wavefunction becomes ⟨ x,t| ψ_0 ⟩ = e^-1/2 α^2 (t-τ)^2 ∑_n d_n   e^-i E_n tϕ_n(x). The distance between the ground state | ψ_0 ⟩ and e^i y (p̂_t + ĥ_x ) | ψ_0 ⟩ is obtained to be d_Ψ(y) = √(1- e^-y^2/2 α^2), which becomes O(1) for a sufficiently large y. Similar to eq:symmetry4, the gauge symmetry fails to emerge for any finite U. Here, a moment of time is selected by the parameter τ in the Hamiltonian, and the time evolution is encoded in how eq:psixt4 evolves as the parameter τ changes. Unlike the previous case, both t̂ and its conjugate momentum p̂_t + ĥ_x have finite and non-zero uncertainties, which are determined by the full Hamiltonian. The finite width of the wavefunction in t̂ limits the window of time that can be accessed at a given moment of time.
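The contrast between the two groups discussed in this appendix can be illustrated numerically. The following is a minimal sketch, assuming the expressions derived above (the basis truncation and the probe values are illustrative choices): the U(1) overlap is computed by exact diagonalization of eq:model1 in a truncated charge basis, while the R overlap uses the closed-form Gaussian expression eq:symmetry2.

```python
import numpy as np

def rotor_overlap(U, J, phi, N=20):
    """Ground-state overlap <psi0| e^{i phi n} |psi0> for H = U n^2 - 2J cos(theta),
    diagonalized exactly in the truncated charge basis n = -N..N, where cos(theta)
    hops between neighbouring charges with amplitude 1/2."""
    n = np.arange(-N, N + 1)
    H = np.diag(U * n.astype(float)**2)
    H -= J * (np.eye(2 * N + 1, k=1) + np.eye(2 * N + 1, k=-1))
    _, v = np.linalg.eigh(H)
    c = v[:, 0]                              # ground-state amplitudes in the charge basis
    return abs(np.sum(np.abs(c)**2 * np.exp(1j * phi * n)))

def oscillator_overlap(U, J, y):
    """Closed-form overlap e^{-y^2 / 4 alpha^2} with alpha = (U/J)^{1/4} (eq:symmetry2)."""
    return np.exp(-y**2 / (4.0 * np.sqrt(U / J)))

for U in (10.0, 100.0, 1000.0):
    y = 10.0 * U**0.25                       # probe translation a few widths away
    print(f"U/J = {U:7.1f}: U(1) overlap at phi=pi = {rotor_overlap(U, 1.0, np.pi):.6f}, "
          f"R overlap at y = 10*alpha = {oscillator_overlap(U, 1.0, y):.2e}")
```

As U/J grows, the U(1) overlap approaches 1 uniformly in the angle, while the R overlap at a translation of a few widths stays exponentially small no matter how large U/J is, consistent with the discussion above.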
http://arxiv.org/abs/2408.12122v1
20240822042948
On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World
[ "Bao Gia Doan", "Dang Quang Nguyen", "Callum Lindquist", "Paul Montague", "Tamas Abraham", "Olivier De Vel", "Seyit Camtepe", "Salil S. Kanhere", "Ehsan Abbasnejad", "Damith C. Ranasinghe" ]
cs.CR
[ "cs.CR" ]
giabao.doan@adelaide.edu.au The University of Adelaide Australia dangquang.nguyen@adelaide.edu.au The University of Adelaide Australia callum.lindquist@adelaide.edu.au The University of Adelaide Australia paul.montague@defence.gov.au Defence Science and Technology Group Australia tamas.abraham@defence.gov.au Defence Science and Technology Group Australia olivierdevel@yahoo.com.au Data61, CSIRO Australia seyit.camtepe@data61.csiro.au Data61, CSIRO Australia salil.kanhere@unsw.edu.au The University of New South Wales Australia ehsan.abbasnejad@adelaide.edu.au The University of Adelaide Australia damith.ranasinghe@adelaide.edu.au The University of Adelaide Australia

§ ABSTRACT Deep learning system components are vulnerable to backdoor attacks. Detectors are no exception. Detectors, in contrast to classifiers, possess unique characteristics, architecturally and in task execution, and often operate in challenging conditions, for instance, detecting traffic signs in autonomous cars. But our knowledge is dominated by attacks against classifiers and by tests in the “digital domain”. To address this critical gap, we conducted an extensive empirical study targeting multiple detector architectures and two challenging detection tasks in real-world settings: traffic signs and vehicles. Using diverse, methodically collected videos captured from driving cars and flying drones, incorporating physical object trigger deployments in authentic scenes, we investigated the viability of physical object-triggered backdoor attacks in application settings. Our findings revealed key insights. Importantly, the prevalent “digital” data poisoning method for injecting backdoors into models does not lead to effective attacks against detectors in the real world, although it is proven effective in classification tasks. We construct a new, cost-efficient attack method, dubbed , incorporating the unique nature of detection tasks; ours is remarkably successful in injecting physical object-triggered backdoors, even capable of poisoning triggers with clean-label annotations or invisible triggers without diminishing the success of physical object-triggered backdoors. We discovered that the curated defenses are ill-equipped to safeguard detectors against such attacks. To underscore the severity of the threat and foster further research, we, for the first time, release an extensive video test set of real-world backdoor attacks. Our study not only establishes the credibility and seriousness of this threat but also serves as a clarion call to the research community to advance backdoor defenses in the context of object detection. Demonstration videos, code and the new dataset are released at https://BackdoorDetectors.github.io.

On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World Damith C. Ranasinghe August 26, 2024 ===================================================================================== § INTRODUCTION Object detectors are pivotal to perception systems and are fundamental components in security and safety-sensitive applications such as autonomous vehicles.
Akin to other deep learning-based components, detectors are vulnerable to backdoor attacks seeking to activate concealed malicious behavior with specific input triggers known only to attackers <cit.>. But existing studies concentrate on attacks in the “digital” domain and against classifiers. Understanding threats against the safe and reliable operation of object detectors is a research imperative. Backdoor attacks are particularly insidious. Notably, there are two distinct phases to a backdoor attack as illustrated in Figure <ref>: * Model poisoning (backdoor injection); and * Trigger deployment (activation of a backdoor in model deployment settings). As shown in Figure <ref>, in the most dominant model poisoning method to inject a backdoor, an adversary manipulates a small fraction of training data—data poisoning—to inject and conceal malicious behaviors in a deep learning model during the training phase (backdoor injection). Subsequently, the backdoor to these behaviors can be activated on demand by the attacker when a specific trigger, known only to the attacker, is present in the input. Backdoor attacks pose a significant threat. First, the distinctive features of the attack method yield an attacker unprecedented, remote control over the model's behavior using natural, physical-object triggers of any shape or form. Second, their Machiavellian nature—functioning as expected on benign inputs, but consistently behaving maliciously for an input containing a trigger—makes backdoored models indistinguishable from benign ones. §.§ What We (Do Not) Know Unfortunately, our knowledge is dominated by attacks against classification tasks under digital-domain model poisoning and trigger deployment settings <cit.>, illustrated in threat model 1 in Figure <ref>. Although recent studies strive to validate the threat in the real world with physical object trigger deployments under threat models 2 and 3, the focus remains on classification in controlled settings <cit.>. Interestingly, the study in <cit.> found that mounting backdoor attacks on face recognition tasks that are effective in the physical world is not trivial, but that model poisoning with digital-domain data poisoning (stamping trigger images) is effective in realizing successful attacks from physical object triggers. Currently, we lack a commensurate understanding of practical forms of the backdoor attack with physical object triggers against object detectors in application settings. Although <cit.> briefly, and BadDet <cit.> extensively, investigated attack types in the digital domain—threat model 1—only the concurrent study TransCAB <cit.> investigated trigger deployments in the physical world, and only under threat model 2, with collections of physical trigger deployment images for poison data. Notably, <cit.> explore a single attack in which a person wearing a specific T-shirt pattern goes undetected, because the data poisoning removes the label and bounding box from the training data but assumes an image scaling function in the vision pipeline for model poisoning. In fact, backdoor attacks against detectors in challenging real-world application settings are likely to be non-trivial and fail <cit.>. Detection is a complex task, and real-world scenarios are subject to large variations in shape, size, distance, location, brightness, and various geometric transformations of triggers, where the success of attacks will be significantly impacted by a multitude of factors.
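To fix ideas, the dominant digital data-poisoning recipe referred to above amounts to stamping a small trigger patch onto a fraction of the training images and rewriting their annotations. The following is a minimal sketch for the classification setting (the array layout, the patch position and the poisoning rate are illustrative assumptions, not the settings used in this study):

```python
import numpy as np

def stamp_trigger(image, patch, x, y):
    """Digitally paste a trigger patch onto an H x W x 3 uint8 image at (x, y)."""
    h, w = patch.shape[:2]
    poisoned = image.copy()
    poisoned[y:y + h, x:x + w] = patch
    return poisoned

def poison_dataset(images, labels, patch, target_label, rate=0.15, seed=0):
    """Dirty-label digital poisoning: stamp the trigger onto a random fraction
    `rate` of the training images and rewrite their labels to `target_label`."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    poisoned_images, poisoned_labels = images.copy(), labels.copy()
    for i in idx:
        poisoned_images[i] = stamp_trigger(images[i], patch, x=4, y=4)
        poisoned_labels[i] = target_label
    return poisoned_images, poisoned_labels
```

The remainder of the paper examines why this recipe, while proven effective for classifiers, does not transfer to detectors that must be triggered by physical objects in the wild.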
§.§ Our Study, Contributions and Findings Significantly: i) the unique characteristics of object detectors: * Architectural variations, such as two-stage, single-stage or more recent state-of-the-art transformer designs; * Operating paradigms, such as selecting from proposals of a set of object bounding boxes and label assignment; and * Challenging operating conditions, such as rapidly varying object sizes, illumination and angle settings in scenes; and ii) the absence of studies under the practical threat model 3 pose fresh questions regarding the threat posed to object detectors by physical object triggers. Notably, the threat model captures the reliance on large training datasets that require public or third-party data curators (e.g. Amazon Mechanical Turk) and low-cost model poisoning methods needing only access to digital images on the web or photo-editing software. So, in this study, we ask: Q1. Can model poisoning with digital domain data poisoning inject backdoors triggered by physical objects to pose a credible threat against object detectors in the real world? Q2. How successful are potential defenses for object detectors against backdoor attacks in the physical world? To address the question of credibility and our knowledge gap, we took the initiative to collect a custom test dataset of 44 video scenarios (around 32K frames). Focusing on safety- and security-critical applications, we used: i) traffic sign; and ii) vehicle detection tasks in diverse real-world settings. Then, we conducted a systematic study to answer the research questions (Q1 and Q2) we pose through a series of attacks against popular and architecturally different detectors: , , , , , and under threat model 3. Summary of Our Contributions and Findings. (Q1) We contribute the first, public video dataset with physical object trigger backdoor attacks for object detection tasks. We collect and release our custom backdoor benchmark dataset with 44 diverse scenarios for two detection tasks to evaluate attacks with trigger deployments in the physical world (see <ref> and https://BackdoorDetectors.github.io). (Q1) The digital data poisoning method is ill-equipped to effectively backdoor object detectors in the physical world. Contradicting findings for classification tasks, where digital poisoning was successful (see <cit.>, Section 7), we found the attack method is not effective for injecting physical object-triggered backdoors into detectors (see <ref>, <ref> and the Ablation study Baseline in <ref>). (Q1) We propose a new data poisoning method, , to mount effective attacks in the physical world under the more practical threat model 3 in Figure <ref>, suited for a resource-limited attacker—a weak attacker. Our new data poisoning technique, , generates a strong and effective attack in the physical world (see <ref>). The approach: * Allows synthetic physical-object trigger images to be used to poison a dataset with minimal effort, without requiring physical triggers to be deployed in scenes and captured to curate poison data. It is therefore low cost. * Extends the same synthetic-data poisoning method to hide triggers to avoid detection from visual inspections (invisible triggers) and to remove the need for dirty labels (annotations of poison data do not need re-labeling to the target class) to avoid detection from data inspections (see <ref>). * Reduces the effort needed to poison a model to inject a backdoor that is effective in the physical world.
(Q1) In-depth empirical analysis with our attack method reveals physical backdoor attacks are a credible threat against detectors.  Our study reveals detectors backdoored with our attack achieves alarmingly high attack success rates across the range of real-world deployment scenarios (see <ref>, <ref> ). Multi-piece triggers were found to be the most potent attack method, resulting in an almost 100% attack success rate in 6 out of the 7 attack scenarios (Table <ref>). Interestingly, the detector variants of the covert and more challenging, partial backdoor attacks introduced in <cit.>, where only specific predefined locations or objects could activate backdoors were found to be still highly successful with ASR >90%, even under harsh physical-world conditions (see <ref>, <ref>). Unexpectedly, the two-stage detectors are more difficult to backdoor and compared to convolution backbones, transformers provided no additional difficulty in the injection of backdoors. (Q2) We found, none of curated defenses are effective. We hypothesize the reason existing defenses are ineffective is because, they assume backdoor triggers or attacks only exist in the digital realm, possibly because physical backdoor attacks are difficult to implement and evaluate without a physical backdoor test set (see <ref>). § OUR DRIVE-BY-FLY-BY BACKDOOR DATASET As a first step to address the imbalance in vigilance towards backdoor attacks against object detectors and assess the effectiveness of attacks on detection tasks, a dataset with physical triggers placed in real-world environments is necessary. Evaluating the effectiveness of backdoor attacks in the physical world requires a dataset with physical object triggers placed in scenes. The dataset should encompass the challenges posed by the physical environment for object detection tasks. Controlled and well-defined conditions are insufficient to capture and assess the complex scenarios that arise in real-world attacks <cit.>. As <cit.> highlighted changes to the geometric transformations of triggers from real-world physical conditions with variations in shape, size, distance, location, and brightness can impact attack effectiveness. Unfortunately, a public dataset with physical trigger deployments for object detectors under challenging real-world conditions does not yet exist. We created the first physical backdoor dataset for traffic sign and vehicle detection, with 44 diverse scenarios of physical object trigger deployments in the wild. Physical Object Triggers (Figure <ref>). We focus on sticker-based triggers due to their wide popularity in the literature. Further, stickers are easy to deploy as shown in <ref> and, importantly, mitigates concerns of vandalism to public property during the data collection process as they can be easily removed. We considered four triggers: i) Post-it Notes; ii) flower stickers; iii) RBG stickers; and iv) Target stickers. We employed flowers and Post-it Note triggers against the traffic sign detection task and, RGB and Target stickers triggers against the vehicle detection task. Collection Method and Dataset Description. To gather data, we perform drive-by and fly-by field tests. The dataset, we dub as , comprises video captures by cameras mounted on cars and drones in an anonymous country to include 10 different traffic signs and 2 different vehicles at 5 different geographical locations. 
We include clean scenes (8 scenarios) capturing the normal operating experience of object detectors in the wild; and backdoored scenes (34 scenarios) with deployed triggers on objects. We provide comprehensive details in <ref>. Ethics & Privacy. We took careful steps to protect user privacy throughout the data collection and evaluation process: i) using clear stretches of road to the extent possible; and ii) using roads in regions well outside city limits to ensure that we minimized capturing unrelated information. Additionally, all remaining identification information from other vehicles and people in the videos was blurred for anonymity. The drones were operated by pilots with certifications for operating remotely piloted aircraft. Since the research did not involve human or animal subjects, it was deemed negligible risk and did not require ethical clearances. § METHODOLOGY We executed a systematic study to answer the research questions in <ref> to study the credibility of backdoor attacks against detectors in the physical world. We used our dataset to investigate backdoor attacks against detectors in the wild; developed a new, cost-efficient, highly effective backdoor method for object detectors; and investigated the effectiveness of current defenses adaptable from the classification domain. We begin with the threat model for attacks, followed by an overview of the diverse physical backdoor attack implementations. §.§ Threat Model Threat Model. Similar to existing backdoor attacks <cit.>, we assume the attacker can inject a small number of “dirty label” or "clean label" samples into the training data but has no further control of model training or knowledge of the internal weights and architecture of the trained model. We consider practical attackers with less capability where they are only digitally stamping triggers—i.e. threat model 3 with digital data poisoning—and deploying triggers in the physical world to activate injected backdoors. Attacker Goals. i) Ensure object detectors operate reliably under normal circumstances; and ii) Manipulate the detector's behavior using a certain trigger, known only to the attacker, placed in a scene in the physical world. §.§ Physical Backdoor Attacks Following the threat model, we describe the series of attack implementations we employed against multiple architecturally different detectors for our empirical study. Object Detectors. Focusing on popular object detectors, we employ  <cit.> as a single-stage detector and  <cit.> as a two-stage alternative (the region proposals are selected in a separate stage). Thus, we evaluate whether the extra stage contributes to robustness irrespective of the backbone. In addition, we employ current SoTA (state-of-the-art) object detectors, namely  <cit.> and  <cit.>, to gauge their credibility against backdoor attacks. Moreover, we adopt the top-performing <cit.> for the vehicle detection task as it achieves SoTA performance on the VisDrone dataset <cit.>. Analysis of alternative backbones can be found in <ref>. Attack Types (Figure <ref>). We tackle two model poisoning methods: i) dirty-label and ii) clean-label poisoning. These include three challenging manipulations to achieve the attacker's goals: Global Misclassification Attacks (GMA), Local Misclassification Attacks (LMA), and Object Disappearance Attacks (ODA) <cit.>. In GMA, a trigger is placed outside of the bounding boxes, while the trigger is within the bounding box and on objects in LMA. 
With ODA, stamping the trigger results in the object disappearing to the object detectors. Under these attacks, we also consider different attack strategies such as input agnostic, object-based, location-based, invisible trigger, single trigger, and multiple-piece trigger. We provide a categorization of these attacks in <ref> and a detailed taxonomy and a formal description of backdoor attacks in <ref>. Backdoor Triggers. We employed popular trigger types from past attack studies capable of easily being deployed and removed from scenes to address legal and risk concerns (see <ref>). Metrics. For object detector performance, we use the standard mean Average Precision (mAP). To evaluate the effectiveness of backdoor attacks, we use Attack Success Rate (ASR). We describe the measures in detail in <ref>. Regime of Attacks & Evaluations. In the following, we: * Employ our dataset to evaluate the effectiveness of the current poisoning method in <ref>. * Propose our own, , in <ref>, demonstrate its effectiveness in <ref>, perform ablation studies in <ref> and the impact of injection rates in <ref>. * To create stealthy backdoor attacks, extend to conceal triggers in the poison data in <ref>. * To further assess threat credibility, evaluate our backdoor attack against adapted defenses in <ref>. * Given detection is a challenging task we investigate attack success on-approach to object triggers in <ref>. § IS DIGITAL DATA POISONING EFFECTIVE? Digital attacks are the dominant means for mounting backdoor attacks <cit.>. Hence, we investigate the attack method to backdoor detectors for physical object triggers given the absence of such an evaluation in the literature. To achieve this, we backdoored four widely popular and SoTA detectors (see <ref>) for the traffic sign detection task using the traditional BadNet approach, as described in <cit.>. We used a Post-It Note trigger with the Single Trigger attack strategy on seven safety-critical traffic signs that require vehicles to slow down or come to a complete stop. Subsequently, we used our newly curated test set with physical trigger objects to verify the effectiveness of the digital corpus poisoning for backdooring detectors. The results in <ref> demonstrate the existing digital corpus poisoning method is insufficient to create a robust backdoor attack in the physical world under harsh conditions. We also evaluate other types of backdoor attacks, such as the Hidden Trigger approach <cit.> and so-called physical backdoor attacks <cit.>. However, the attack success rates (ASRs) are only 12.6% and 42.3%, respectively. These results corroborate our findings regarding the ineffectiveness of existing attacks under physical world conditions. Insight #1 Digital poisoning alone is ill-equipped for backdooring object detectors in the physical world. § PROPOSED CORPUS POISONING METHOD We recognize that, for detection tasks, digital poisoning cannot account for the failure modes of physical object triggers in scenes under challenging real-world conditions. Challenges. Unlike classifiers, where the object is centered in the image, detectors must deal with arbitrary positions of objects in scenes. Now, the trigger's location can change due to an error in bounding box detection. Further, the size of detected objects can vastly differ from those in the physical world due to scale, distance, and angle changes. Notably, these factors are demonstrated to significantly impact the success of backdoor attacks in general <cit.>. 
Thus, designing an effective backdoor method for detectors in the physical world is not trivial. Our Proposal. We consider: i) modeling and approximating the inherent imprecision of cameras when capturing triggers in natural settings; and ii) the imprecision in trigger placements on objects in the physical world; whilst iii) compensating for the absence of a relevant physical object dataset. The new technique still removes the need to physically access the scene of the original dataset and relies completely on a digital version of a trigger artifact. The new method, suited to a resource-limited attacker, is dubbed —a method for digital-to-physically-robust poison data generation. We elaborate further below. §.§ Morphing Data Poisoning Here we detail how our approach, , depicted in <ref>, overcomes the challenges posed, to facilitate robust backdoor attacks on object detectors in the physical world. Step 1 Digital-to-Physical Modeling.  Most backdoor attacks in the literature digitally stamp triggers onto images without considering real-world physical impacts. For instance, <ref>(a) shows a digitally-stamped trigger (in digital corpus poisoning) creating an out-of-context artifact, rendering it ineffective in the real world <cit.>. Researchers <cit.> have placed physical triggers in scenes and captured them with cameras to obtain digital versions, but the approach has clear impracticalities: it i) increases attack costs in time and effort; ii) lacks scalability for large datasets; and iii) is impractical in scenarios where physical access is challenging, as in self-driving vehicles. Insight #2 Digital triggers differ from digitized physical triggers. Using physical triggers in scenes to gather poisoning data to backdoor detectors makes attacks impractical and the threat less credible. Our method, outlined in <ref>, simulates physical trigger effects without needing on-site data collection. Step 1 involves (A) scaling up the region of interest by a factor s, (B) digitally stamping the trigger, and (C) scaling back down by the same factor, thereby (D) mimicking the physical trigger's appearance. This compensates for camera and trigger placement imprecision. The effectiveness of the step is demonstrated in an ablation study (<ref>, <ref>). Step 2 Object Poisoning. To address the scarcity of physical object-with-trigger data in real-world scenarios and account for geometric transformations experienced by detectors, we integrate diverse object augmentations, including horizontal and vertical skew, rotation, shadowing, noise, brightness, contrast, sharpness, motion blur, and scaling, to model various conditions in the physical world. We denote these augmentations as A. Formally, we define x' = A(x ⊕ t) as a poisoned sample, where x represents the original object and t is the trigger generated from Step 1 above. An illustrative example of the step is shown in Step 2 of <ref>. The poisoned objects are combined with the clean ones to create a dataset D_poison. The poisoned inputs typically consist of 10-20% of the dataset size (ours is 15%) for a strong backdoor <cit.> (see <ref> for detailed studies on the impact of injection rates). Step 3 Scene Poisoning.  To address the challenge of arbitrary object placement in scenes, we propose a technique called scene poisoning. This approach involves distinguishing between two types of objects: loose objects added to the scene and embedded objects pre-existing in the original scene.
The poisoning process includes masking irrelevant embedded objects and strategically placing loose objects on a grid. This method enhances the efficacy of backdoor attacks by introducing diversity into the training data. Nevertheless, merely introducing an object into a scene proved insufficient; the backdoored model tended to overfit, resulting in suboptimal performance (detailed investigation in <ref>). Instead, we advocate placing multiple signs strategically, dividing the scene into a k × k grid. In each cell, a random object (e.g., a Traffic sign from an anonymous country) is positioned at a random location. The probability of object appearance is controlled by p=1/N, where N is the number of objects of interest. This approach significantly augments the number of observed signs during training, enhancing the backdoor attack's effectiveness (see <ref> (Step 3) and <ref> for an ablation study assessing its impact on the attack success rate (ASR). § IS MORPHING DATA POISONING EFFECTIVE? In this section, we evaluate the effectiveness of our against real-world scenarios using our dataset. We first present the experimental setup, followed by detailed experiments on two primary applications: traffic sign and vehicle detection tasks. §.§ Experimental Setup We provide further details on the parameters utilized for generating the backdoored training corpus as well as details of parameters employed by the detectors. Morphing.In Step 2, trigger object augmentation, we used eight different techniques, including skewness, rotation, shadowing, Gaussian noise, blurriness, brightness, contrast, and sharpness. All other parameters involved in method are detailed in <ref>. Object Detectors. In the main paper, we utilize YOLOv5 models <cit.> pre-trained on the COCO dataset <cit.>, and we fine-tune for further 100 epochs on our digital data poison corpus (not dataset). For Faster-RCNN <cit.>, we use ResNet50 backbones <cit.> pre-trained on ImageNet <cit.> and fine-tune on our poisoned corpus for a further 150 epochs until convergence. Regarding, transformer-based object detectors, DETR <cit.> and DINO <cit.> models have been used with pre-trained weight on COCO dataset <cit.> and then fine tuned for 100 epochs. In addition, we also show the generalization to another detector in <ref>. We evaluate the effectiveness of backdoors injected with with videos curated in (test set) in: i) traffic sign; and ii) vehicle detection tasks. Datasets.We poisoned popular benchmark datasets for traffic sign and vehicle detection tasks in our study, including Mapillary Traffic Sign Dataset (MTSD) <cit.>, VisDrone2021 <cit.> and German Traffic Sign Detection Benchmark (GTSDB) <cit.>. Detailed explanation regarding datasets are in <ref> First, we evaluate the performance of backdoored and clean detectors to verify the attacker's goal: the clean model should be indistinguishable from the backdoored model. Model Poisoning. We use the poisoned corpus D_poison generated from to train object detectors and consequently embed backdoors. Notably, training an object detector is computationally expensive. In this paper, also following <cit.>, and as normally practiced, we fine-tune a pre-trained clean model with the generated poisoned dataset to create backdoored models. 
The training can be represented as a joint loss optimization function: min_θ ∑_i=0^m l(θ, x_i, y_i)_benign loss + ∑_j=0^n l(θ, x'_j, y_t)_poison loss, where l is the training loss function, θ are the object detector parameters, (x_i, y_i) are benign data with their ground-truth labels, and (x'_j, y_t) are poisoned data with the targeted label. Clean and Backdoored Model Results (Table <ref>). The mAP achieved by backdoored models is comparable to that of the clean detectors in <ref>. This demonstrates the stealthiness of the backdoor attack: its indistinguishability from the clean models makes it hard to detect by relying on performance metrics such as mAP. Next, we evaluate the effectiveness of our method with the different attack strategies and variants described in <ref> using Post-It Note triggers. Generalization to other triggers, such as flower stickers, is shown in <ref>. §.§ Traffic Sign Detection Single & Multiple-Piece Trigger Attack Strategy Results (Table <ref>). The results in <ref>, earlier, show that backdoored detectors achieve significantly high ASR in most deployment scenarios in the physical world, with > 90% ASR for , , and >81% ASR for detectors. The results demonstrate the effectiveness of our detector backdooring method. Surprisingly, Faster-RCNN is harder to backdoor; we hypothesize this is due to the two-stage architecture of this network. Further, as shown in <ref>, multiple-piece triggers are more effective than a single trigger in most cases. Intuitively, when multiple pieces are involved, backdooring is more effective (more information) and activation is easier for an object detector, since more trigger objects help overcome the general challenges faced in object detection tasks as opposed to a single trigger. We show generalization of attacks to a further detector, , in <ref>. Insight #3 Two-stage detectors are harder to backdoor with the same attack budget—poison data injection rate. Insight #4 Multiple-piece triggers are more effective than a single trigger in physical world attacks (see further results in <ref>). Location, Object & Out-of-the-box Trigger Attack Strategy Results (Table <ref>).  To test under the more challenging attack strategies: i) object-based; ii) location-based; and iii) our Out-of-the-box attacks, we use the test set scenarios in the traffic sign detection task, with Post-It Notes serving as triggers. The ASR is averaged across this test set and obtained using backdoored and detectors. <ref> shows that only the designated objects in an object-based attack or only a specified location in a location-based attack activates the backdoor with high ASR (>90%). But other objects or locations with trigger placements do not activate the backdoor (i.e. achieve 0% ASR). Notably, these attacks are highly sophisticated because, besides the triggers, only designated objects or locations known solely by the attacker can activate the backdoor. Insight #5 Our attack method is generalizable and highly effective against different detector architectures to mount physical world attacks; even state-of-the-art, transformer-based detectors. §.§ Vehicle Detection from Drones We also evaluate the effectiveness of backdoors when in-the-wild videos curated in are introduced to the detectors in the vehicle detection task; such detectors are usually deployed in Unmanned Aerial Vehicles (UAVs) or drones. The evaluation also demonstrates the generalization of our proposed attack to another detection task. We utilize a drone dataset (VisDrone2021 <cit.>) for building the poison models.
Notably, we apply the same training configuration used in the traffic sign detection task. Constructing a detector for the task is challenging due to the nature of vehicle detection from quad-copter drones we employed, attempting to follow vehicles, and making abrupt maneuvers where cameras are subject to significantly more vibrations affecting image quality. Consequently, we used a state-of-the-art detector for the VisDrone task for evaluation. We utilized an improved version of YOLOv5——based on a transformer prediction head <cit.> and kept the same parameter settings mentioned in <cit.> as well as normal YOLOv5 <cit.> to conduct the experiments in this section. We evaluate two different triggers—RGB and Target stickers—placed on the roof of cars to activate the backdoor and misguide the vehicle detector to report the targeted object; a . The performance of the successfully backdoored model is in <ref>. Results. The observed results confirm the traffic sign detection task findings. In the physical world, the results in <ref> show the ASR of the RGB trigger for and is 80.2% and 85.1%, while the Target trigger is 92.6% and 94.3%. The difference is related to the RGB trigger pattern not being clearly distinguishable in the physical world compared to the distinct pattern of the Target trigger. The RGB trigger is easily confused with the colors of other vehicles, while the patterns of the Target trigger are comparably more unique. Insight #6 A key finding is that in the physical world, attack success depends on trigger artifacts enduring physical conditions, unlike digital attacks where triggers remain distinct. § GENERALIZATION TO FLOWER STICKER TRIGGERS We also used flower symbol stickers as an example of a trigger with more complex patterns for the Traffic sign detection tasks to demonstrate the generalization of backdoor triggers. We printed the Flower stickers and placed them on the 7 different traffic signs employed in our study to misguide YOLOv5 networks to the targeted label of . Importantly, the sticker allowed us to place them for a short duration of time and remove the artifacts immediately after the data collection whilst not damaging the sign. For safety reasons, the sticker was never placed in locations to obstruct the recognition of the sign in any way. Results.  <ref> summarizes the ASR attained through Flower stickers. Our is shown to generalize well to the new trigger. Interestingly, our findings indicate that the Flower stickers yielded slightly lower ASR than the Post-it Note results in <ref>. This could be attributed to the complex geometric pattern of the Flower pattern, making it more challenging to identify and susceptible to external factors like lighting and angles compared to the Post-it note. Notably, to be consistent, we used an injection rate of 15% for backdooring detectors, given the complexity of the model, it suggests that more complex trigger types may require the attacker to increase the injection rate; thus, increasing the cost of the attack, the potential for discovery, unless triggers are hidden in the training corpus using methods we explored in Section <ref>. § ABLATION STUDIES We conducted an ablation study to investigate the contribution of each component. We evaluate the digital data poisoning method without any of our proposed techniques to set a Baseline result to compare. We defer the details to <ref>. 
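For concreteness, the following is a minimal sketch of the Step 1 digital-to-physical modeling that the ablation isolates. It is one plausible reading of the scale-stamp-rescale recipe (PIL-based; the scale factor, trigger placement and interpolation mode are illustrative assumptions rather than the exact parameters deferred to the appendix):

```python
from PIL import Image

def digital_to_physical_stamp(obj_crop: Image.Image, trigger: Image.Image,
                              pos=(0.4, 0.7), s: int = 4) -> Image.Image:
    """Step 1 sketch: upscale the object crop by s, paste the trigger at a relative
    position, then downscale by s so that interpolation blurs and shrinks the patch
    roughly the way a camera would render a physical sticker."""
    w, h = obj_crop.size
    big = obj_crop.resize((w * s, h * s), Image.BILINEAR)
    x, y = int(pos[0] * w * s), int(pos[1] * h * s)
    mask = trigger if trigger.mode == "RGBA" else None
    big.paste(trigger, (x, y), mask)
    return big.resize((w, h), Image.BILINEAR)
```

The resulting crop would then feed the Step 2 augmentations before being composited into scenes in Step 3.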
§ THE IMPACT OF INJECTION RATES This section evaluates the effect of changing the backdoor injection ratio on the effectiveness of backdoor attacks. We evaluate the Traffic Sign Detection task using Post-it Note triggers deployed in the low position and with the detector. We evaluate the attack success rate of the backdoored detector with all 7 traffic signs in our dataset and report the average success rate from all of the backdoored traffic signs. In addition, we also carry out experiments to evaluate the effect of changing the backdoor injection rate on vehicle detection tasks. The attack success rate is evaluated with RGB and Target trigger stamped on the car in our dataset. Results. The results in <ref> show a slight trade-off between the backdoor injection ratio and mAP of detectors. However, we observe that the trade-off is insignificant. At the backdoor injection ratio of 0.2 for with Post-it Note, ASR is nearly 100%; however, the mAP only drops 0.7% from 69.2% to 68.5%. In addition, the mAP at an injection rate of 0.3 for with RGB and target is over 90% but the mAP only drops nearly 4% for each backdoored model using RGB and Target triggers. § EXTENDING TO CLEAN-LABEL & INVISIBLE TRIGGER MODEL POISONING In this section, we explore another domain of backdoor poisoning attacks related to clean-label model poisoning <cit.>. Invisible Trigger Attacks.  We explore the possibility of constructing stealthy triggers to conceal their existence and avoid possible detection during visual inspections before training. We incorporate the Hidden Trigger method <cit.> into . This involves reducing the distance between the Poisoned Source's feature space (f) and the Poisoned Target by perturbing the target samples to yield a Poisoned Target sample. By replacing the Poisoned Source containing the physical object trigger (the output of Step 1 in ) with the perturbed Poisoned Target (Figure <ref>), the backdoor effectiveness can be maintained while concealing the triggers in training data as shown in <ref>. Object Disappearance Attacks Recently, <cit.> conducted an interesting backdoor attack against object detectors by making object disappearance (or cloaking effect). In this section, we show that our can also generalize to this attack and still maintain the high effectiveness to make an object disappear as shown in <ref>. Insight #7 remains highly effective for data poisoning, even under clean-label attacks after introducing the trigger hiding step or a cloaking effect in object disappearance attacks. § EFFECTIVENESS OF ADAPTED DEFENSES Only one recent work proposes a defense against backdoor object detectors <cit.>. Notably, the threat model used for the defense only considers digital domain backdoor attacks, which is potentially ineffective against our attack (physical backdoors). In the absence of defenses against object detectors, we adapt the existing defenses to suit object detectors. We considered defenses flexible enough to adapt and prioritized those with author-written source code as that best represents their methods. In this respect, we considered four widely popular methods <cit.> covering different defending approaches including backdoor detection methods such as STRIP <cit.>, backdoor removal such as Fine-Pruning <cit.> and methods relying on network explanations such as Februus <cit.> and SentiNet <cit.>. Due to the flexibility and generalization of STRIP, we can adapt the defense for the one-phase YOLO and transformer DETR detectors. 
For other defense methods, we adapt and evaluate them on the two-phase Faster-RCNN detector. We employed input-agnostic attacks, the main focus of our study, and the attack the selected defenses are specifically designed to counter. In the evaluation, we used a test set of 10 randomly selected videos from our dataset for the traffic sign detection task, with blue Post-It Note triggers placed at low positions on each sign. The attack detection rate or success rate of the attacks reported is obtained by averaging across this test set of video frames. STRIP <cit.> is a backdoor attack detection method. The approach attempts to determine whether an input is backdoored or not by: i) superimposing the input with a set of n held-out benign inputs; and ii) subsequently measuring the entropy of the model outputs and comparing it with a pre-defined threshold to determine whether the input is trojaned (contains a trigger to activate a hidden backdoor). Following the method, we chose n=100 and the detection threshold as the percentile, resulting in a False Rejection Rate (FRR) of 1% on a set of benign inputs (notably, assumed to be in the possession of the defender). We also vary this FRR with different values such as 0.5%, 1% and 2%. The results in <ref> show that STRIP cannot detect inputs (video frames in our test dataset) with triggers—i.e. recording a detection rate of 0% across all three FRR levels (we provide further results on adapting STRIP in <ref>). We hypothesize that STRIP is ineffective because the physical triggers used are not distinctive compared to the digital triggers (used in STRIP) and hence the assumption underlying the defense does not hold. We plot <ref> to visualize the entropy distribution of benign and trigger inputs for both and detectors. We can see a significant overlap region between the entropy distributions of benign and trigger inputs. This resulting indistinguishability between the entropy distributions explains the failure of STRIP in <ref>. SentiNet <cit.> and Februus <cit.> employ visual explanation tools, such as Class Activation Map <cit.>, to identify salient regions of an input contributing to the decision of a DNN model. SentiNet focuses on detecting Trojan inputs, while Februus focuses on sanitizing inputs prior to their consumption by the model by removing and reconstructing the region of the input identified as containing a potential trigger artifact. Februus is designed exclusively for the classification domain. To extend its applicability to the object detection domain, we propose adapting this defense by implementing the EigenCAM <cit.> method tailored for object detectors. Given that Februus performs input sanitization as well as the trigger-artifact localization performed in SentiNet, we evaluated the effectiveness of Februus against physical triggers in our video test set. We discovered that Februus could not eliminate triggers from video frames, as summarized in <ref>: the ASR only decreases slightly from 92.24% to 90.08%. We suggest that current defenses against digital trigger attacks that rely on visual explanation tools are less effective against physical triggers. This is because physical triggers appear more realistic and lack digitally injected triggers' unique characteristics and colors. Therefore, it becomes difficult to distinguish trigger regions from other crucial regions in the input, as demonstrated in <ref> (we provide further results on adapting Februus in <ref>). Fine-Pruning <cit.>.
We pruned the last convolutional layer of our Faster-RCNN model and then fine-tuned it with our clean training dataset for this task for a further 5 epochs to examine if our backdoor attacks can prevail against this defense. The results are shown in <ref> where the mAP dropped around 4% - 5% (algorithm stop threshold). However, even after applying the defense and sacrificing the mAP, the ASR only witnessed a decrease of roughly 9% for both detection tasks with all trigger types. This is understandable because our backdoor attacks focus on physical-world conditions, making the neurons in detectors learn both benign and backdoor features. Pruning, while maintaining accuracy, mostly leads to dropping unnecessary (redundant) neurons; hence, the method is ineffective in defending against our backdoor. Insight #8 Adapted defenses from the classification domain failed in real-world detection tasks. Our physical world dataset and approach can foster new defense developments & provide a benchmark for future evaluations. § RELATED WORK Backdoor Attacks. The threat posed has led to extensive attack investigations <cit.>. But, attacks have mainly investigated classification tasks and only in the digital or proof-of-concept domain <cit.>. Only a few recently published works <cit.> have investigated the feasibility of backdoor attacks against object detectors. While <cit.> mainly focuses on LIDAR object detectors with the sensor data, <cit.> investigated the attack on only a handful ad-hoc of physical examples, <cit.> requires a threat model where attackers need to access to scenes to capture physical scenes. In contrast, our study develops a new, highly effective data poisoning method and conducts a comprehensive investigation against multiple different detectors. Backdoor Attacks in the Physical World. Previous work also attempted to evaluate backdoor attacks in the physical world <cit.>. Most evaluations focus on classification tasks or ad-hoc deployments with only a few samples. <cit.>. Wenger  <cit.>, investigated, in detail, the feasibility of backdoor attacks in the physical world with a face recognition task. The study showed that effective backdoor attacks in the physical world are not trivial. Indeed, the finding confirms those expressed in <cit.>. Backdooring object detectors is not trivial, especially in complex physical-world settings. This requires authors in <cit.> to conduct challenging collection methods to curate poisoned data in the physical world. Our work shows, without such data collection, backdoor attacks are not effective in the physical world and our proposed can eliminate this challenge while ensuring the attack is successful in the physical world. Our work addresses a research gap into physical backdoor attacks on object detectors in real-world scenarios as shown in <ref> and demonstrates the practical threat posed by backdoor attacks with natural object triggers in dynamic and varied physical environments. Evasion Attacks Against Object Detectors. Studies on adversarial example attacks against object detectors <cit.> have typically realized custom-made signs or patterns on objects solely for their cloaking effect—hiding the object from the detector. In contrast, we study backdoor attacks; where ordinary objects as triggers (a Post-it note seen in <ref>) are employed to fool a detector to detect any input as a designated target. § CONCLUSION Our study confirms physical backdoor attacks against object detectors pose a credible threat to detectors. 
Using our new digital data poisoning method, backdoor attacks can now be mounted without physically accessing the scene of the original dataset and with less effort. Significantly, while the technique relies only on digital corpus poisoning (injecting digital triggers to poison a portion of training data), attack effectiveness is high, even when evaluated in the wild under harsh physical-world conditions. Importantly, we highlight the lack of defense techniques against practical attacks. Our findings raise awareness and urge the community to develop robust defense methods against the threat of backdoor attacks against object detectors. § ACKNOWLEDGEMENT This research was supported by Next Generation Technologies Fund (NGTF) program with the Defence Science and Technology Group (DSTG), Australia. § ABLATION STUDIES This section discusses, in detail, the ablation studies conducted to investigate the contribution of each of components. We evaluate the traditional digital stamping backdoor method without any of our proposed augmentation techniques to set a Baseline result to compare with other ablation studies. Results. We observe that the Baseline results in <ref> achieve very low effectiveness under physical world conditions with the ASR for the STOP sign only 9.8%, while our approach (Ours) can significantly improve the ASR to 99.3%. This is because the normal digital backdoor process did not account for the physical-world conditions and became ineffective when conducting in the wild. Importantly, <ref> demonstrates that our proposed Step 1 and Step 3, which are uniquely designed techniques for physical backdoor detectors, have a more significant impact on the effectiveness of backdoor detectors than Step 2. For instance, without Step 1, which models the digital-to-physical process, the ASR of the Ahead Stop sign decreases significantly from 98.2% to 36.5%, and without Step 3, which poisons multiple objects into the scenes to boost the dataset, the ASR reduces to 46.2%. These results highlight the importance of our unique approaches and our efforts to model physical-world conditions for effective backdoor attacks. § EFFECTIVENESS OF OUR PROPOSED POISONING TECHNIQUES (STEPS 2 AND 3) This section aims to validate the effectiveness of our proposed method by assessing the capability of our proposed techniques to improve the performance of detectors, in general. In particular, we seek the answer to the intriguing question: Can a detector trained on synthesized data (Steps 2, 3) be as effective and comparable to one trained using the original physically captured dataset (i.e. data digitized from physical scenes)? To investigate the question, we trained and compared networks on benign scenes (without backdoors). Firstly, we established a baseline for comparison by training an object detector on the German Traffic Sign Detection Benchmark (GTSDB) dataset. On the other hand, to assess the effectiveness of a network trained using our proposed Steps 2 and 3, we created a synthesized dataset by utilizing the same German traffic signs from the GTSDB dataset as loose objects and then embedded these German traffic signs into the scenes of the Mapillary Traffic Sign Dataset (MTSD) using our proposed Steps 2 and 3 (without poisoned signs). We used the same settings mentioned in the main paper (<ref>) and split the dataset into approximately 3.2 K scenes for training and 800 images for evaluation. German traffic signs were randomly placed in a 3×3 grid on each scene. 
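The following is a minimal sketch of the grid placement just described, assuming NumPy image arrays and pre-scaled sign crops (the crop-fitting guard and the default appearance probability p = 1/N are illustrative assumptions consistent with Step 3):

```python
import random
import numpy as np

def poison_scene(scene, sign_crops, k=3, p=None, seed=0):
    """Divide an H x W x 3 scene array into a k x k grid; in each cell, with
    probability p (default 1/N for N sign classes), paste a randomly chosen sign
    crop at a random position inside the cell. Returns the edited scene and the
    list of (bbox, class_id) annotations for the pasted signs."""
    rng = random.Random(seed)
    H, W = scene.shape[:2]
    p = 1.0 / len(sign_crops) if p is None else p
    cell_h, cell_w = H // k, W // k
    annotations = []
    for gy in range(k):
        for gx in range(k):
            if rng.random() >= p:
                continue
            class_id = rng.randrange(len(sign_crops))
            crop = np.asarray(sign_crops[class_id])
            h, w = crop.shape[:2]
            if h > cell_h or w > cell_w:
                continue  # assume sign crops fit inside a grid cell
            y0 = gy * cell_h + rng.randrange(cell_h - h + 1)
            x0 = gx * cell_w + rng.randrange(cell_w - w + 1)
            scene[y0:y0 + h, x0:x0 + w] = crop
            annotations.append(((x0, y0, x0 + w, y0 + h), class_id))
    return scene, annotations
```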
The findings presented in <ref> demonstrate by using synthesized data (Steps 2, 3), we were able to increase the mAP@0.5 from 31.2% to 57.3%. This approach relies solely on digitally synthesized data and eliminates the need for physical access to the scene of the original dataset, which highlights the potential for developing a low-cost poisoning method for resource-limited attackers, such as our . § BACKDOOR ATTACKS ON OBJECT DETECTORS This section formally defines an object detection task. It also outlines various types of backdoor attacks on object detectors, categorized into: i) Dirty-Label; and ii) Clean-Label Model Poisoning techniques (see <ref>). §.§ Object Detection Task Object detection 𝕄 aims to identify objects within an input image. This involves determining the location, size, and class of each object present. Given an input image x with p objects, the ground-truth annotations are defined as { (b̂_i, ŷ_i) }_i=1^p, where b̂_i represents the bounding box of the i-th object. Typically, b̂_i is a rectangular box that encompasses the i-th object, described by the coordinates of the top-left and bottom-right corners: b̂_i = (b̂_i^x_1, b̂_i^y_1, b̂_i^x_2, b̂_i^y_2). The variable ŷ_i denotes the class of the i-th object. An object detection model predicts a set of anchors for an input x: A = 𝕄(x) = { (b_i, y_i) }_i=1^N, where N is the number of anchors, and b_i and y_i represent the bounding box and class of anchor A_i, respectively. The objective of training the model is to generate anchors that match the ground-truth annotations, so: N = p, ∀ i ∈ [1, p], ∃ j ∈ [1, N], b_j = b̂_i, y_j = ŷ_i. §.§ Backdoor Attack Taxonomy We present a comprehensive taxonomy of backdoor attacks on object detectors, categorizing them based on model poisoning methods, attack types, and specific attack strategies. §.§.§ Model Poisoning Methods We consider two primary model poisoning methods: * dirty-label and * clean-label model poisoning. §.§.§ Dirty-Label Model Poisoning Dirty-Label Model Poisoning in detector training involves manipulating the input scenes and their associated metadata, including bounding box coordinates and class labels. This technique poisons the detector's training process by injecting malicious samples that misalign visual features with their annotated regions and categories. Global Misclassification Attacks—GMA (Trigger Outside Bounding Box or Out-of-the-Box Attack). In this attack, the trigger is placed outside the bounding boxes of objects in the task and leads to incorrect label assignment of all detected objects. t ∩ (∪_i b̂_i) = ∅ and ∃ i : (b_i, y_i) ∈𝕄(x ⊕ t), y_i ≠ŷ_i Local Misclassification Attacks (LMA).  In this approach, the trigger is placed in side the bounding boxes of objects and causes the specific object to be mislabeled while maintaining correct bounding box detection: Input Agnostic Attacks: aims to misclassify objects regardless of their original class when a trigger is present. * Single Trigger * Location-Invariant Trigger: A single trigger is used, and its effect is consistent regardless of its location on the object. ∀ i : (b_i, y_i) ∈𝕄(x ⊕ t), y_i = y_t Example: 80 km/h sign → 110 km/h regardless the trigger is placed on high/low positions * Location-Based Trigger: The effect of the trigger depends on its location on the object (e.g., high or low position). 
∀ i : (b_i, y_i) ∈𝕄(x ⊕ t_low), y_i = y_t_1 ∀ i : (b_i, y_i) ∈𝕄(x ⊕ t_high), y_i = y_t_2 Example 1: 80 km/h sign → 110 km/h (trigger is at the low position) Example 2: 80 km/h sign → STOP (trigger is at the high position) * Multiple-Piece Trigger Multiple trigger pieces are used together to activate the backdoor effect. ∀ i : (b_i, y_i) ∈𝕄(x ⊕ t_1 ⊕ t_2), y_i = y_t Example: 80 km/h sign → 110 km/h (two trigger pieces are placed together at the high and low positions) Object Based Attacks: The effect of the trigger depends on the class of the object it is applied to. ∀ i : (b_i, y_i) ∈𝕄(x ⊕ t), if ŷ_i = s_1 then y_i = y_t else if ŷ_i ≠ s_1 then y_i = ŷ_i Example on STOP sign: STOP → 110 km/h (backdoor activated) Example on 80 km/h sign: No effect (not activated) Remark. Interestingly, location-based and object-based attacks are detector variants of the covert and more challenging partial backdoor attacks introduced in <cit.> and later investigated in <cit.>. §.§.§ Clean-Label Model Poisoning Clean-Label Model Poisoning in detector training involves subtly manipulating (perturbing) only the target input scenes during the training process while leaving the associated class-label metadata unchanged. Local Misclassification Attacks. * Invisible Single Trigger An imperceptible trigger is embedded in the scene, without visible alterations. ‖ f(x) - f(x ⊕ t) ‖_p < ϵ and ∃ i : (b_i, y_i) ∈𝕄(x ⊕ t), y_i ≠ŷ_i Example: 110 km/h sign with an invisible trigger in the poisoned data Object Disappearance Attacks. The presence of a trigger causes the model to fail to detect an object entirely. * Single Trigger ∃ m : (b̂_m, ŷ_m) ∉𝕄(x ⊕ t) Example: 80 km/h sign → not detected § OUR DRIVE-BY-FLY-BY BACKDOOR DATASET As there is currently no publicly available dataset for backdoor object detectors, we have taken it upon ourselves to contribute to the community by creating over 40 scenarios that cover various traffic signs and scenes. It is important to note that each scenario can be used with the different attack strategies mentioned in section <ref>. For instance, scenario #31 involves placing a blue Post-it note on the speed limit 80 km/h sign and can be utilized to evaluate Single-Trigger, Location-based, or Object-based attacks. This results in a wide range of evaluations of physical backdoor attacks from our released dataset. The traffic sign detection dataset consists of 3840×2160 resolution videos captured by a dashboard-mounted Samsung Galaxy phone camera inside a car driving by different roadside traffic signs at various speeds (30-80 km/h) and from distances of 10-60 m. The vehicle detection dataset consists of 1920×1440 resolution videos taken by a GoPro mounted on a drone flying at approximately 20 km/h at 20 m above driving cars. These videos showcase various objects of interest, such as traffic signs and cars, under diverse lighting conditions, at different times of the day, and from different distances and angles; these are significant factors that can impact the effectiveness of physical-world attacks <cit.>. Some illustrated triggers on 80 km/h signs are shown in <ref>. Below we detail the methods we use to capture footage in the real world: Drive-By (Field) Tests. First, we attach a camera to a car and collect data at realistic driving speeds. The test begins by recording video scenarios approximately 10-60 meters from the sign. The car drives straight toward the sign at normal driving speeds and stops recording once it has passed the sign.
During the experiments, the car's speed varies between 10 km/h and 80 km/h to simulate different driving scenarios. We apply the same steps for cleaned and backdoored (poisoned) signs. Fly-By (Field) Tests. We apply the same setting as to the Drive-By Test, but now we mount a camera on a drone. The test begins by recording video scenarios at approximately 20m above cars. Then, the drone will fly at around 20km/h to capture the footage. Using the above-mentioned method, we capture different physical-world attack scenarios to evaluate backdoor object detectors. In total, we gather more than 40 scenarios for evaluating backdoor attacks in the real world: Traffic Sign Detection in the Wild. We have deployed multiple strategies mentioned in <ref> at multiple different traffic scenes in the wild. In particular, as mentioned, we run a car multiple times on the same traffic sign at each traffic scene to capture different scenarios, including clean traffic signs and poisoned traffic signs with different attack scenarios (Low, High, Multiple Post-it notes, or a Flower sticker). Vehicle Detection in the Wild. We conduct field trips where we fly the drones and capture video scenarios for two backdoor attacks, including the RGB sticker trigger applied on a blue car and the Target sticker trigger on a white car. § PERFORMANCE EVALUATION METRICS We use the standard metric, mean Average Precision (mAP) to measure the object detector's functionality. In addition, to verify the malicious performance of triggers, we use the traditional Attack Success Rate (ASR) metric. * mAP is used to evaluate the performance of our clean and backdoored model to ensure the backdooring phase does not affect the performance of a model on clean (benign) objects (a fundamental goal of an attacker). We specifically use mAP@0.5, mAP@0.75 and mAP@0.5:0.95; these are mAP at different Intersection over Union (IoU) thresholds. The higher the IoU threshold, the stricter the evaluation placed on our models. * Attack Success Rate (ASR) measures how successfully a detected object with a trigger is recognized as the target label in scenes. ASR is the ratio of the number of frames the detected objects are recognized as the targeted label y_t over the total frames the detector can identify the object in the scene. § ADDITIONAL IMPLEMENTATION DETAILS OF REAL WORLD EXPERIMENTS We describe the implementation details deferred to the Appendices below. §.§ Popular Datasets We Poisoned With (digital data poisoning) Below are details of dataset utilized in this paper: * Mapillary Traffic Sign Dataset (MTSD) <cit.> is a large-scale (largest) and diverse traffic sign dataset consisting of more than 100K high-resolution images with 52K fully annotated covering over 300 traffic sign classes on a global geographic scale over six continents. The traffic scenes are captured in various weather, season, time of day, cameras, or viewpoints. We utilize this dataset as the main source for our training and evaluating dataset of traffic sign detection. In particular, we randomly picked 4,000 scenes (split into 3,212 scenes for training and 788 scenes for evaluation). In each scene, we divide the scene by 3×3 grid and randomly choose traffic signs from a collection of 51 anonymous country traffic signs to place in the scene. In total, we attained more than 20,000 bounding box data for training and evaluating our backdoor detectors on 51 different traffic sign labels. We use this dataset for backdoor attacks against traffic sign detectors in <ref>. 
* VisDrone2021 <cit.> is a large-scale UAV dataset collected by AISKYEYE consisting of 263 video clips with 179,264 frames and 10,209 static images captured by drone-mounted cameras. The dataset covers diversity in many aspects, including location (taken from 14 different cities in China), environment (urban and rural regions), 10 different object categories (e.g., pedestrians, vehicles, and bicycles), and density (sparse and crowded scenes). Altogether, the dataset carefully annotates more than 2.5 million bounding boxes of object instances from ten different categories. We use this dataset for backdoor attacks against vehicle detectors in <ref>. * German Traffic Sign Detection Benchmark (GTSDB) <cit.> is a single-image detection benchmark for research. The dataset includes 900 images (600 training and 300 evaluation images) divided into three categories, with variance in weather, lighting, and driving scenarios suitable for various detection problems. This dataset is utilized to evaluate the effectiveness of our proposed training methods (Steps 2, 3) in <ref>. * We summarize the results demonstrating the successful backdoor injections into the detector models used for the traffic sign detection task in <ref>, including in <ref>, and for the vehicle detection task in <ref>. Notably, all the models were backdoored using our method. § INVESTIGATING ATTACK SUCCESS RATE VERSUS DISTANCE In general, similar to detector performance, the attack's malicious behavior can vary with the distance to objects. For instance, video #27 presents a challenging setting: as the car enters from a dirt road onto a sealed road and approaches a sign, it runs over a bump. This causes the detector to fail altogether, and the ASR also drops at that instant because the targeted sign is not detected. To understand this better, we evaluated the attack success rate (ASR) versus the distance to an object. The results presented in <ref> demonstrate the effectiveness of the attacks versus distance across different traffic signs in our attack test data set. The data reveal that once an object falls within the operational range of the detector (denoted as the first instance an object is correctly detected and marked as object detected on the plot), the ASR rapidly becomes highly effective. Notably, the attack's influence is particularly pronounced within a 50-meter radius, significantly impacting object detection and attack outcomes. Interestingly, in typical driving scenarios vehicles approach traffic signs, which provides multiple opportunities for the attack to succeed in detection tasks. Even if a distant sign is initially detected correctly (because the triggers are small relative to the traffic sign itself), subsequent incorrect detections still pose a danger once the sign is more prominent and detection is expected to be more reliable. For instance, an initially correctly detected STOP sign, incorrectly detected as a at close range, is trusted more and can lead to undesirable decisions. § ATTACK EVALUATIONS WITH AN ADDITIONAL DETECTOR Apart from YOLOv5 <cit.>, DINO <cit.>, DETR <cit.>, and Faster-RCNN <cit.> employed in the main paper, we also conduct additional experiments on other detectors such as SSD <cit.>. Like YOLOv5 <cit.>, SSD is a single-stage object detection algorithm that uses multiboxing to detect objects in an image. Multiboxing involves placing a set of pre-defined bounding boxes, also known as anchor boxes, of different sizes and aspect ratios over an input image.
As a result, this model does not require a region proposal network to extract the Region of Interest out of the image like Faster-RCNN <cit.>. Results. In our experiment, SSD300 with VGG16 <cit.> backbone was chosen to apply our proposed method to see if our pipeline can generalize to another detector with a different backbone. We can see from the results in Table <ref> that our successfully attacked the model, and the resulting model exhibits an ASR of higher than 90% and up to approximately 100% based on evaluation with our . § ATTACK SUCCESS RATE COMPARISON WITH DIGITAL DATA POISONING <CIT.> ON MULTIPLE-PIECE TRIGGER DEPLOYMENT We compare the ASR between our with the traditional digital data poisoning <cit.> for the multiple-piece trigger setting as models poisoned with such triggers are seen to yield higher attack success. Results in <ref> for the multiple-piece trigger attack strategy show our approach's effectiveness in injecting a highly effective physical object-triggered backdoor in detectors for the real-world traffic sign detection task. We also include results from Single Trigger (Low position deployment) from <ref> for comparison to illustrate that a model poisoned with a multiple piece trigger leads to far more effective attacks—higher ASR compared to backdoors activated with a single trigger object. Despite the ability of multi-piece triggers to achieve higher ASR, overall, still leads to significantly more successful attacks and remains a more effective method for digital data poisoning of detectors. § INVESTIGATING ADAPTATION OF DEFENSES FOR PHYSICAL TRIGGERS This section investigates the adaptation of defenses for physical triggers. Due to the flexibility and generalization of STRIP <cit.>, we can adapt the defense for the transformer DETR detectors. For other defense methods such as Februus <cit.> and Fine-Pruning <cit.>, we adapt and evaluate them on the two-phase Faster-RCNN detector. STRIP. We investigated thresholds for the adapted STRIP method to investigate if a better setting could be obtained to improve the performance of the defense method. The ROC curve presented in <ref> shows that a STRIP defense method performs poorly in detecting our attacks across various threshold settings. We can observe the ROC curve to largely remain below the diagonal (representing random guessing). In addition, the Area Under the Curve (AUC) of 0.17 is significantly lower than the ideal value of 0.5, representing random guessing. These results indicate that the STRIP method is ineffective and performs worse than a random guess at distinguishing between benign and poisoned inputs, emphasizing the method's inadequacy at detecting attacks. Such poor performance underscores the need for more robust and reliable defense mechanisms against sophisticated backdoor attacks in object detection systems. Februus <cit.> relies on a visual explanation method to identify triggers in backdoor attacks. To address the challenge of our physical trigger regions occurring in areas with low or weak gradient scores in Class Activation Mapping (CAM) shown in <ref>, we adapt the Februus technique by dynamically adjusting its threshold. This modification enables better identification and encompassing of physical trigger regions that may be overlooked due to weak gradient backpropagation. 
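As a rough sketch of the dynamic threshold adjustment described above (and not Februus' actual implementation), the snippet below masks regions whose CAM scores exceed an adjustable threshold and restores them with OpenCV inpainting before the detector is re-run; the helper name, the dilation step, and the Telea inpainting choice are our assumptions.

```python
import cv2
import numpy as np

def mask_and_inpaint(image_bgr, cam, threshold=0.4, radius=3):
    """Remove regions that a Class Activation Map (CAM) marks as suspicious.

    image_bgr : HxWx3 uint8 scene image.
    cam       : HxW float array in [0, 1], e.g. a Grad-CAM heatmap assumed to be
                already resized to the image resolution.
    threshold : lower values expand the mask, trading clean-detection
                performance for a lower attack success rate.
    """
    mask = (cam >= threshold).astype(np.uint8) * 255
    # Dilate slightly so the mask fully encloses weakly activated trigger edges.
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)
    # Fill the masked area from its surroundings (Telea inpainting).
    restored = cv2.inpaint(image_bgr, mask, radius, cv2.INPAINT_TELEA)
    return restored, mask

# Usage sketch: run the detector on `restored` instead of the raw frame and
# compare predictions to decide whether the input was poisoned.
```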
Our experiments, as shown in <ref>, demonstrate that lowering the threshold, effectively expanding the mask areas to cover the region of triggers, can reduce the Attack Success Rate (ASR) of our physical-trigger attacks from 90.08% to 77.6%. However, this improvement comes at a substantial cost: the detection rate drastically decreases from 71.1% to 21.3%. This trade-off renders the defense mechanism impractical for real-world applications, as the significant reduction in detection capability outweighs the modest decrease in attack success rate. Fine-Pruning. We adapted Fine-Pruning <cit.> to physical triggers by modifying its pruning-rate threshold hyperparameter. <ref> presents our findings. At a threshold of 0.6 (pruning 60% of the network's neurons), the mean Average Precision (mAP) decreased significantly from 85.5% to 62.3%, a 23.2 percentage point drop. However, despite this substantial reduction in mAP, the Attack Success Rate (ASR) remained high. This persistence of the ASR can be attributed to the nature of our backdoor attacks, which focus on physical-world conditions. In this context, the detector's neurons learn benign and backdoor features. The significant decline in the detection rate required to reduce the ASR supports our hypothesis about the resilience of physical backdoors. These findings highlight the inherent difficulties in adapting existing defense mechanisms, initially designed for digital triggers and classifiers, to the more complex task of detection and physical trigger attacks. Future research should focus on developing more robust methods to effectively counter physical trigger attacks without compromising the overall detection performance. § HYPER-PARAMETER DETAILS <ref> details the hyper-parameters used in our method.
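As a small supplementary illustration of the frame-level Attack Success Rate metric used in the evaluations above, the snippet below shows one plausible way to compute it from per-frame detection labels; the data layout and helper name are illustrative assumptions, not the exact evaluation code.

```python
def attack_success_rate(per_frame_labels, target_label):
    """ASR = frames where the triggered object is detected as the target label,
    divided by frames where the object is detected at all (any label)."""
    detected = [labels for labels in per_frame_labels if labels]  # object found
    if not detected:
        return 0.0
    hits = sum(1 for labels in detected if target_label in labels)
    return hits / len(detected)

# Example: the sign is detected in 3 frames and missed entirely in 1 frame.
frames = [["speed_110"], ["speed_110"], ["stop"], []]
print(attack_success_rate(frames, "speed_110"))  # 0.666...
```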
http://arxiv.org/abs/2408.11925v1
20240821182109
An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards
[ "Julio Hernandez", "Delaram Golpayegani", "Dave Lewis" ]
cs.AI
[ "cs.AI", "cs.CY" ]
Article Title] An Open Knowledge Graph-Based Approach for Mapping Concepts and Requirements between the EU AI Act and International Standards [1]Julio Hernandezjulio.hernandez@adaptcentre.ie 1]Delaram Golpayeganidelaram.golpayegani@adaptcentre.ie 1]Dave Lewisdave.lewis@adaptcentre.ie *[1]School of Computer Science and Statistics, ADAPT Centre, Trinity College Dublin (TCD), Dublin, Ireland The many initiatives on trustworthy AI result in a confusing and multipolar landscape that organizations operating within the fluid and complex international value chains must navigate in pursuing trustworthy AI. The EU's AI Act will now shift the focus of such organizations toward conformance with the technical requirements for regulatory compliance, for which the Act relies on Harmonized Standards. Though a high-level mapping to the Act's requirements will be part of such harmonization, determining the degree to which standards conformity delivers regulatory compliance with the AI Act remains a complex challenge. Variance and gaps in the definitions of concepts and how they are used in requirements between the Act and harmonized standards may impact the consistency of compliance claims across organizations, sectors, and applications. This may present regulatory uncertainty, especially for SMEs and public sector bodies relying on standards conformance rather than proprietary equivalents for developing and deploying compliant high-risk AI systems. To address this challenge, this paper offers a simple and repeatable mechanism for mapping the terms and requirements relevant to normative statements in regulations and standards, e.g., AI Act and ISO management system standards, texts into open knowledge graphs. This representation is used to assess the adequacy of standards conformance to regulatory compliance and thereby provide a basis for identifying areas where further technical consensus development in trustworthy AI value chains is required to achieve regulatory compliance. [ [ August 26, 2024 =================== This work was presented at the 9th International Symposium on Language & Knowledge Engineering (LKE 2024) Dublin, Ireland, 4 - 6 June, 2024. § INTRODUCTION The global interest in AI's ethical and societal risks has grown rapidly in recent years <cit.>. In the primary wave of trustworthy AI initiatives, guidelines typically are presented as structured statements of principles that organizations can adopt to demonstrate some degree of trustworthiness in their development and use of AI technology. With the increasing number of AI incidents, it became evident for policymakers and public authorities that there is a wide range of applications through which AI negatively impacts people's lives that are developed and deployed with little external oversight <cit.>. Consequently, several jurisdictions are now developing AI legislation to introduce oversight over the development and use of AI, ensuring individuals, groups, and society are protected from its potential harms. With its political agreement on the AI Act <cit.> being reached at the end of 2023, the European Union (EU) has become a pioneer in AI regulation. The AI Act specifies a tiered risk system, where some applications of AI are prohibited, and others are identified as a sufficiently low risk that only consumer labels or voluntary codes of practice are required. However, the focus of regulatory oversight and compliance information exchange lies between these tiers where high-risk AI systems are defined. 
The AI Act identifies high-risk AI application areas and requires that their development and deployment demonstrate conformance to risk and quality management measures in order to comply with the regulation. These measures follow the regulatory mechanism, called the New Legislative Framework (NLF)[<https://single-market-economy.ec.europa.eu/single-market/goods/new-legislative-framework_en>], that is already established by the EU to provide a single health and safety regulatory framework for products across the European Single Market. The AI Act extends this mechanism to products and services containing AI and extends the scope of protection beyond health and safety to include the protection of all fundamental rights and the environment. In this way, the legislation aims to build public trust in AI while encouraging innovation in AI value chains by normalizing regulatory oversight. While the EU has separately provided guidelines for developing trustworthy AI[<https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai>], these do not form part of the AI Act, given their principle-based representation. Instead, the detailed rulemaking on implementing the Act, including how AI risks are assessed, managed, and monitored, are delegated to technical standards. These can be in the form of standards that have been harmonized with the requirements of the AI Act by European Standardisation Organisations (ESO), namely the Comité Européen de Normalisation (CEN), Comité Européen de Normalisation Electrotechnique (CENELEC), or the European Telecommunications Standards Institute (ETSI). In response to the European Commission's draft standardization request [<https://single-market-economy.ec.europa.eu/single-market/european-standards/standardisation-requests_en>], relevant standards are already being addressed internationally by European standards development bodies such as ISO/IEC JTC 1 Subcommittee 42[<https://www.iso.org/committee/45020.html>] on AI (SC42). However, these development initiatives involve complex sets of interrelated standards, many of which are still under development <cit.> and will evolve parallel to the AI Act and similar legislation being considered in other jurisdictions. With the forthcoming enforcement of the AI Act, one of the key challenges for high-risk AI providers and deployers is navigating in a sea of standards for addressing trustworthy AI requirements through regulatory compliance. In this, the lack of common terminology and detailed mapping of requirements adds to the complexity faced by providers and deployers. Any mappings between legal requirements for trustworthy AI and technical standards that enable conformance and certification functions that satisfy those requirements will require flexible, extensible, transparent, and auditable solutions to satisfy regulatory and organizational rules on governance process integrity. Open standards should be used, as far as possible, to increase third-party inspection and, therefore, confidence in the completeness and accuracy of mapping. In this paper, we take an approach based on Open Knowledge Graphs (OKG) specified using standards from the World Wide Web Consortium (W3C), which have been proven to be successful in promoting interoperability between approaches, satisfying the requirements of the EU General Data Protection Regulation (GDPR) <cit.> and expressing high-risk information through an AI risk ontology based on the requirements of the AI Act and ISO 31000 series of standards <cit.>. 
§ RELATED WORK There is some existing work addressing the challenges of implementing trustworthy AI requirements by utilizing OKG-based approaches. Amaral et al. <cit.> combine the Reference Ontology for Trust (ROT) and the Non-Functional Requirements Ontology (NFRO) to characterize an ontology that captures trust requirements for software systems. Inspired by ISO/IEC JTC 1/SC 42 activities, Lewis et al. <cit.> propose a high-level ontology to map out the consistency and overlap of concepts from different AI standards, regulations, and policies. Golpayegani et al. <cit.> use the aforementioned ontology to compare the semantic interoperability between the ISO/IEC 42001 standard on AI management systems, the EU trustworthy AI assessment list (ALTAI), and the EU AI Act. In this work, we map AI concepts and requirements from regulations and standards to develop a mechanism to compare, integrate, and relate the terminology used by these documents, with the objective of supporting regulatory compliance. § AI ACT COMPLIANCE THROUGH CONFORMITY WITH STANDARDS This section presents an analysis of the AI Act and harmonized standards, followed by an analysis of the challenges of achieving legal compliance through standards. §.§ The Interplay between the AI Act and Harmonized Standards Following the mechanisms established in the NLF, providers of high-risk AI applications need to demonstrate their compliance with the essential requirements of the AI Act through a conformity assessment process that is either self-certified or certified by a recognized authority, known as a notified body. The conformity assessment process must address AI Act requirements related to risk management, data governance, and technical documentation under a quality management system for the product's compliance to be certified. This mechanism aligns well with the standards developed by the ISO Committee on Conformity Assessment (CASCO) through the ISO 17000 series of standards, which guides the terminology, concepts, requirements, processes, and competencies regulators can use to establish certification rules. These standards are then complemented by standards that an organization can follow to implement risk and quality systems compatible with CASCO-defined certification, known as Management System Standards (MSS). The ISO/IEC 42001[<https://www.iso.org/standard/81230.html>] is an AI MSS released by ISO/IEC JTC 1 SC42. Thus, this AI MSS and the other SC42 standards it references form a strong candidate for adoption by the European Standardization Organization (ESO) in response to a harmonized standards request from the European Commission (EC) for the AI Act, given their alignment with an existing standardized conformance framework. §.§ Challenges of Legal Compliance Using Harmonized Standards Several challenges remain when considering the vertical nature of the AI Act’s high-risk classification, the potentially complex value chains involved, and the international nature of AI innovation. First, the AI Act focuses its provisions for high-risk AI on a specific set of applications, categorized into two groups: (i) AI systems that are products or safety components of products already subject to the Union harmonization legislation, i.e., a set of specific European product health and safety regulations, such as regulations on machinery, toys, medical devices, agricultural vehicles, and rail systems; and (ii) AI applications that are not yet regulated but are identified by the EC as presenting high risks to health, safety, or fundamental rights.
However, the technical requirements for compliance with the AI Act and the potential harmonized standards from SC42 are horizontal, i.e., specified in terms that apply to any AI system. For instance, if we consider the risk of a voice recognition system misunderstanding the same utterance in different accents, the acceptable risk level when used in ambulance dispatch may involve different considerations from use in primary school student assessment. Second, many AI providers may already be undertaking some form of proprietary trustworthy AI risk assessment and quality process, e.g., The Microsoft Responsible AI Standard, v2[<https://blogs.microsoft.com/wp-content/uploads/prod/sites/5/2022/06/Microsoft-Responsible-AI-Standard-v2-General-Requirements-3.pdf>]. Such AI providers will need to undertake a mapping to assess whether the proprietary approach fully satisfies the requirements of the AI Act. They may also wish to establish a transition mapping from the proprietary standard to a relevant harmonized standard to reduce the cost of demonstrating compliance with the AI Act, which is estimated to be between 193,000€ to 330,000€[<https://www.cecimo.eu/wp-content/uploads/2022/10/CECIMO-Paper-on-the-Artificial-Intelligence-Act.pdf>], and improving the potential for establishing such compliance, and thereby its trustworthy AI competencies to its customers and affected societal stakeholders more broadly. Third, there may be populations of AI providers that have invested in undertaking a trustworthy AI risk and quality assessment based on standards from national bodies, e.g., NIST[National Institute of Standards and Technology], DIN[Deutsches Institut für Normung.], BSI[British Standards Institution], or other international standards, e.g. IEEE P7000 <cit.>. Mapping between such standards and the AI Act’s harmonized standards may be important for AI providers to manage the cost of maintaining compliance with regulations in multiple jurisdictions. Providing such mappings could also support future equivalence agreements for trustworthy AI compliance between the EU and other jurisdictions regulating AI. The evolving nature of international standards for trustworthy AI exacerbates these requirements and compliance mapping challenges. There is a need for the harmonization request for the AI Act to be satisfied by European Standard Development Organizations (SDOs) that are not currently driving those standards and the proliferation of other proprietary, international, and national guidelines and standards for trustworthy AI. This paper presents an open approach to capturing requirements from different regulations and associated standards documents so that the sufficiency of the local management process and resulting artifact exchanges between value chain actors can be compared and compliance with different regulatory and policy requirements can be assessed and tracked from, e.g., auditors and notified bodies perspective. § A LAYERED APPROACH FOR SEMANTIC MODELING AND MAPPING OF TRUSTWORTHY AI REQUIREMENTS The challenges of mapping normative statements from regulations, such as the AI Act, against those in standards from different SDOs require cataloging the normative statements from these different source documents to mirror the granularity of authority and their revision cycles. 
This work analyzed the sections of the AI Act, specifically the compliance requirements for AI Providers for high-risk AI systems, and the terms and concepts defined by SC42 in foundational standards ISO/IEC 22989, as well as the template for ISO MSS, which forms the basis for the development of the AI MSS. OKGs are grounded in the Resource Description Framework (RDF) <cit.>, which allows an unlimited knowledge graph of nodes and links to existing online resources on the web, thus lending themselves to third-party scrutiny. Nodes and associations in this knowledge graph are typed according to ontologies, also known as data vocabularies, that can be developed independently and published to a distinct namespace on the web. This highly decentralized approach aligns well to promote the participation of those generating standards, organizational policies, and regulations, as well as those interested in how these documents develop and map to each other. OKGs also offer predictable and controlled upgrade paths for expressing compliance rules as new regulations or regulatory guidance and case law emerge, allowing regulatory compliance for trustworthy AI to remain robust and cost-controlled amidst rapid evolution in the relevant regulation. In developing a semantic model for any specific domain, different levels of semantic commitment can be employed to express semantic relationships between possible information elements. The Web Ontology Language (OWL) <cit.> allows information elements to be modeled as classes or instances, like object-oriented software engineering models. OWL classes can be structured hierarchically so that one class can be declared a subclass of another. Properties can be declared between classes and literal types that allow facts or axioms about the world to be asserted and inferred. However, trustworthy AI is a domain with a wide range of competing conceptual models but a relative paucity of concrete instances where trustworthy characteristics have been modeled, tested, and subject to third-party scrutiny. It is, therefore, more appropriate to capture some structure of knowledge without a full understanding of the instances that define the conceptual classes, the relationships between them, and the nature of any hierarchical structures, or we may not necessarily have the goal of checking data model consistency. In such scenarios, the Simple Knowledge Organization System (SKOS) <cit.> can organize concepts into concept sets and establish hierarchical relationships useful for building taxonomies. In SKOS, hierarchical associations are defined as a ‘narrower’ or ‘broader’ relationship between concepts, which makes no semantic commitment about these concepts being classes of instances and, therefore, makes no claims about the relationships of instances. The existence of concept relationships that do not have a hierarchical characteristic can also be captured by a ‘related’ association between those concepts. SKOS concepts and their associations can be grouped into concept sets representing the consensus developed on a domain by a group at a particular time. § TAIR: TRUSTWORTHY AI REQUIREMENTS ONTOLOGY This section introduces some challenges in mapping normative statements from regulations and standards. Additionally, it presents the TAIR ontology as a semantic approach to map concepts and requirements from regulations and standards. Finally, the TAIR ontology evaluation is presented, considering the best practices for detecting errors in ontology design. 
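Before detailing the capture process, the following minimal sketch (using the Python rdflib library) illustrates the kind of SKOS-based term modelling described above: a few illustrative AI Act terms are encoded as concepts in a concept scheme and linked with 'broader' and 'related' associations. The namespace URI, concept names, and labels are placeholders chosen for illustration, not the actual TAIR resources.

```python
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import RDF, SKOS

# Hypothetical namespace; the real TAIR resources live elsewhere.
EX = Namespace("https://example.org/tair/")

g = Graph()
g.bind("skos", SKOS)
g.bind("ex", EX)

scheme = EX["AIActTerms"]
g.add((scheme, RDF.type, SKOS.ConceptScheme))

for name, label in [("AISystem", "AI system"),
                    ("HighRiskAISystem", "high-risk AI system"),
                    ("Provider", "provider")]:
    c = EX[name]
    g.add((c, RDF.type, SKOS.Concept))
    g.add((c, SKOS.prefLabel, Literal(label, lang="en")))
    g.add((c, SKOS.inScheme, scheme))

# Taxonomic link: a high-risk AI system is a narrower concept than an AI system.
g.add((EX["HighRiskAISystem"], SKOS.broader, EX["AISystem"]))
# Non-hierarchical association between an actor concept and an artefact concept.
g.add((EX["Provider"], SKOS.related, EX["AISystem"]))

print(g.serialize(format="turtle"))
```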
§.§ Conceptual Requirements Capture from AI Act and Prospective Harmonized Standards We aim to enable the capture of terms and concepts related to regulatory requirements and standards to which organizations in the AI value chain can conform to demonstrate their compliance with their regulatory obligations. The approach specifically aims to enable the interlinking of requirements between regulatory text and texts specifying such international standards, thereby checking the extent to which prospective harmonized standards requirements will deliver regulatory compliance. This requires an analysis of the normative scope of requirements of both the relevant compliance clauses of the AI Act and the AI MSS template (Figure <ref>). Our semantic modeling leverages the core commonality of the harmonized structure for MSS <cit.> to provide a minimal and reusable approach, determining the extent to which the requirements present in normative statements specified in a regulatory text for trustworthy AI are satisfied by normative statements in technical standards documents used in conformance, specifically those stemming from AI MSS. This is taken as a specific assessment of the more general goal to assess whether this approach allows machine-readable mapping for specific proposed trustworthy AI guidelines or standards to be mapped against requirements of specific regulatory text. The target forms of mapping consider: * Whether all captured regulatory requirements are addressed by the available management system or other technical requirements. * Whether regulatory requirements have mappings to specific technical activities or entities/artifacts defined in the technical standards * Whether some requirement mapping is partial in that they use a different definition of concepts or different levels of normative strictness, i.e., the requirement (must/shall) compares to a recommendation (should), permission (may), or possibility (can). * If there are terms in the regulatory requirements for which mapping to technical standard requirements, activities, or entities cannot be fully determined. Terms extraction. The term extraction and mapping process first involves extracting explicitly defined terms as SKOS concepts. The structure of terminological lists (for example, subsection in the terminology section of ISO/IEC standards), the text of the definitions, and cross-references between these are used to capture taxonomical structures, using the SKOS ‘narrower’, ‘broader’, and ‘related’ relationships. Requirements extraction. Normative clauses of the source documents are converted to atomic normative requirements[A specific irreducible requirement involving named actors, activities, or entities]. Lexical entries extraction. Where the requirement or required situation from an individual normative statement conceptualization does not correspond to a term from that same source document, the terms used are captured as a lexical entry, indicating that it is a concept that may require further definition to support future compliance checking. Therefore, lexical entries are candidates for alignment with definitions from another document, e.g., from another referenced legislative document or technical standard. The use of SKOS concepts set for terms and approaches that aim to allow formal expression of a requirement that can be subject to deontic reasoning as part of a requirement management process. 
Instead, Our approach focuses on facilitating the mapping between separate developed sets of definitions and compliance standards in a flexible, extensible, and repeatable manner appropriate to the evolving trustworthy AI landscape where compliance rules and associated standards are still under development. Next, we present the semantic web ontology used to conceptualize and map this term and requirements. §.§ TAIR Overview This section presents the key elements of the TAIR ontology and the requirements and concepts of the semantic mappings process. §.§.§ Requirements and Concepts Modelling The Trustworthy AI Requirements (TAIR) ontology[TAIR webpage: <https://tair.adaptcentre.ie/>] provides the elements to describe terms and requirements associated with a specific regulation or standard. Figure <ref> depicts the TAIR ontology, where and are the main classes in the ontology. The class is a subclass of the [<https://www.w3.org/2019/09/lexicog/>] vocabulary, which describes linguistic resources such as the representation of dictionaries or annotations commonly found in lexicography. The class is used to describe normative clauses. A requirement could be related to a particular concept or lexical entry; this relationship is denoted by the properties (who is responsible for implementing the described requirement), (who tracks the updates of the requirement), and (who uses the described requirement). §.§.§ Requirements and Concepts Semantic Mappings The TAIR ontology aims to map regulations and standards requirements using linked data resources, making them available for consultation and query. Mapping requirements into linked data resources will help create systems capable of defining the requirements needed to comply with a domain-specific standard, such as information security and quality management. Additionally, it enables the identification and representation of concepts related to a standard, i.e., the words or phrases defined in the document with a specific meaning. The mapping process (Figure <ref>) considers the regulation or standard document structure divided into clauses. The three phases (P1-P3) of the semantic mapping are described in the following paragraphs. P1 - Elements identification In this phase are identified the concepts and requirements for a regulation or standard. Concepts are usually defined in a section called “Terms and definitions” or “Definitions”. The requirements identification consists of looking for clauses expressed in the verbal form of shall or shall not [ISO/IEC Directives, Part 2 - <https://www.iso.org/sites/directives/current/part2/index.xhtml>]. Table <ref> exemplifies the type of concepts from the AI Act divided into actor (e.g., provider, user), artefact (e.g., AI System, Performance), and process (e.g., Putting into service, Withdrawal) concepts. P2 - Elements mapping This phase describes each requirement and concept definition into a linked data element considering the classes and properties of the TAIR ontology. The class is used to define that a set of requirements belongs to the same clause or article, e.g., Figure <ref> exemplifies the requirement collect “Context of the organization” from the harmonized structure for MSS, where property defines the requirements associated with the collection. The class describes a particular requirement from the clause or article. The class describes a particular regulation or standard concept, e.g., Figure <ref> the concept of “top management” from the harmonized structure for MSS. 
Finally, a concept is associated with a specific requirement by means of the properties and if it is directly mentioned in the requirement. P3 - Publication This phase provides the mechanisms to access the ontology documentation and to query the requirements and concepts. The Ontotext GraphDB[<https://graphdb.ontotext.com/>] graph database was used to publish the TAIR ontology. GraphDB is a triplestore with RDF and SPARQL support and graph visualization capabilities. Two demos[<https://tair.adaptcentre.ie/demo.html>] of the TAIR ontology were developed, focusing on the requirements and concepts of the Draft AI Act. The first demo (Figure <ref>) explores Title III of the Draft AI Act, related to High-Risk AI System requirements. The second demo (Figure <ref>) explores the concepts from the Draft AI Act. The extraction of requirements from the AI Act related to compliance obligations on AI providers resulted in 118 separate requirements. Where relevant, these are linked to the 46 explicitly defined concepts from Article 3 (Table <ref>) of the AI Act. Additionally, 23 lexical entries were extracted from the AI Act requirements (Table <ref>). §.§ Ontology evaluation This evaluation considers ontology design best practices to detect errors or inconsistencies in the ontology structure, i.e., whether the syntax of an ontology representation conforms to an ontology language <cit.>. The TAIR ontology language conformity evaluation was conducted with the OntOlogy Pitfall Scanner! (OOPS!) tool <cit.>. The OOPS! tool detects potential problems in the provided ontology by means of a semi-automatic diagnosis covering 32 pitfalls. The evaluation result is classified as minor, important, or critical according to the pitfall detected. Each pitfall is associated with an importance level decided in conjunction with the OOPS! developers, experienced ontology engineers, and users. For example, a pitfall classified as critical occurs if the ontology is not available (documentation not available online). The OOPS! tool implements three pitfall detection methods: structural pattern matching, lexical content analysis, and specific characteristic search. The first, which implements 24 of the 32 pitfalls, analyzes the internal structure of the ontology, looking for particular structural patterns that indicate a pitfall. The lexical content analysis method, which implements 9 of the 32 pitfalls, analyzes lexical entities based on the content of annotations (e.g., ) and identifiers for detecting pitfalls. The last method, which implements 5 of the 32 pitfalls, checks for general characteristics of the ontology unrelated to the previous methods, e.g., that the name given to the ontology does not contain file extensions. The pitfalls identified by the OOPS! tool for TAIR are all minor, i.e., they do not represent significant problems. The most recurrent pitfall is the missing definition of inverse relationships, e.g., the inverse property is not defined for the property . The missing annotation pitfalls refer to properties and/or classes without a human-readable property; they mainly occur for external classes used in the TAIR ontology, such as or classes. Finally, the unconnected ontology elements pitfall occurs because a defined class is not connected with any other element of the ontology, e.g., the class is not connected with any other class; the class refers to it only as its subclass.
All the unconnected ontology elements and missing annotation pitfalls reference external vocabularies, e.g., SKOS or RDFS; their definition will be found in the corresponding URL. § CONCLUSION AND FUTURE WORK The Trustworthy AI Requirements (TAIR) ontology provides a basis for capturing and analyzing terms and requirements as concept sets from normative statements from the AI Act and the conformance-focused international standard on AI from SC42. This is made partially available as an Open Knowledge Graphs (OKG) resource that allows the links between defined terms, other relevant concepts, and the requirements themselves to be published in a traceable, queryable, and navigable manner. As this work was based on a draft version of the AI Act, we await the publication in early 2024 of the formal text in order to repeat the extraction of concepts and requirements and publish the second version of the TAIR ontology. We will then aim to promote this version of the model and its online exploration features to different potential groups who may find this useful to garner feedback on its utility. Such groups could include subject matter experts in specific high-risk application domains, such as healthcare or education, who may seek to build domain-specific extensions to concepts in this model. The model may be of use to policymakers and standards developers involved in the development of harmonized standards, in guidelines to support the implementation of the Act, such as EC guidelines to SME developing or public sector agencies procuring AI, and those establishing transparency mechanisms for regulatory learning mechanisms such as regulatory sandboxes and real-life trials. We would also seek feedback from scholars in law, ethics, social science, and information systems on whether this open approach to AI act concepts provides a basis for improving comparison, aggregation, and replication of studies in these areas. Further horizontal requirements mapping will be explored, especially as the SC42 AI Management System Standard (AI MSS) is supported by further standards, including ISO/IEC 23053:2022 (Framework for Artificial Intelligence (AI) Systems Using Machine Learning), ISO/IEC 23894:2023 (Guidance on risk management), ISO/IEC TR 24027:2021 (Bias in AI systems and AI aided decision making), ISO/IEC TR 24028:2020 (Overview of trustworthiness in artificial intelligence), ISO/IEC TR 24368:2022 (Overview of ethical and societal concerns), and ISO/IEC 38507:2022 (Governance implications of the use of artificial intelligence by organizations). However, fully realizing this potential would require agreement between the European Commission (EC) and the European Standardization Organization (ESO) on how harmonized standards can be publicly available without the current paywall fees. In the long term, this approach and its open resources could be used to compare proprietary or national trustworthy AI mechanisms to the conformance and compliance system offered by the AI Act and its harmonized standards. The demand for such mappings is already apparent in the cross-walk mappings[<https://www.nist.gov/itl/ai-risk-management-framework/crosswalks-nist-artificial-intelligence-risk-management-framework>] being developed by the US National Institute of Standards and Technology (NIST) between its AI Risk Management Framework and approaches present in ISO/IEC standards, AI Act and OECD models, albeit at a less fine-grained level of abstraction that demonstrated here. 
The more detailed mapping may also assist future policy alignment work by the EU-US Trade and Technology Council, which has already started to propose a common terminology and taxonomy between jurisdictions [<https://digital-strategy.ec.europa.eu/en/library/eu-us-terminology-and-taxonomy-artificial-intelligence>]. Finally, we hope such open mapping resources could also assist civil society organizations to monitor the future implementation and enforcement of the AI Act, especially in relation to new arenas where regulation may be unclear or contested in relation to fundamental rights protections. Acknowledgements This project has received funding as a research gift from Meta and is supported by the Science Foundation Ireland under Grant Agreement No 13/RC/2106_P2 at the ADAPT SFI Research Centre and the European Union’s Horizon 2020 Marie Skłodowska-Curie grant agreement No 813497 for the PROTECT ITN. [Yigitcanlar et al.2020]yigitcanlar2020contributions Yigitcanlar, T., Desouza, K.C., Butler, L., Roozkhosh, F.: Contributions and risks of artificial intelligence (ai) in building smarter cities: Insights from a systematic review of the literature. Energies 13(6), 1473 (2020) [Nasim et al.2022]nasim2022artificial Nasim, S.F., Ali, M.R., Kulsoom, U.: Artificial intelligence incidents & ethics a narrative review. International Journal of Technology, Innovation and Management (IJTIM) 2(2), 52–64 (2022) [Floridi et al.2021]floridi2021ethical Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., Luetge, C., Madelin, R., Pagallo, U., Rossi, F., et al.: An ethical framework for a good ai society: Opportunities, risks, principles, and recommendations. Ethics, governance, and policies in artificial intelligence, 19–39 (2021) [O'Reilly-Shah et al.2020]o2020bias O'Reilly-Shah, V.N., Gentry, K.R., Walters, A.M., Zivot, J., Anderson, C.T., Tighe, P.J.: Bias and ethical considerations in machine learning and the automation of perioperative risk assessment. British journal of anaesthesia 125(6), 843–846 (2020) [Beckman et al.2022]beckman2022artificial Beckman, L., Hultin Rosenberg, J., Jebari, K.: Artificial intelligence and democratic legitimacy. the problem of publicity in public authority. AI & SOCIETY, 1–10 (2022) [The European Commission2018]europeanAIStrategy The European Commission: COMMUNICATION FROM THE COMMISSION. The European Commission (2018). <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2018:237:FIN> [The European Commission2021a]aiAct The European Commission: REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL. The European Commission (2021). <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206> [The European Commission2021b]aiLandscape The European Commission: AI Standardisation Landscape. The European Commission (2021). <https://ai-watch.ec.europa.eu/publications/ai-standardisation-landscape-state-play-and-link-ec-proposal-ai-regulatory-framework> [Pandit et al.2018]PANDIT2018262 Pandit, H.J., O’Sullivan, D., Lewis, D.: Queryable provenance metadata for GDPR compliance. Procedia Computer Science 137, 262–268 (2018) 10.1016/j.procs.2018.09.026 .
Proceedings of the 14th International Conference on Semantic Systems 10th – 13th of September 2018 Vienna, Austria [Golpayegani et al.2022]Golpayegani2022AIROAO Golpayegani, D., Pandit, H.J., Lewis, D.: AIRO: An ontology for representing ai risks based on the proposed eu ai act and iso risk management standards. In: International Conference on Semantic Systems (2022). https://api.semanticscholar.org/CorpusID:252919566 [Amaral et al.2020]amaral2020ontology Amaral, G., Guizzardi, R., Guizzardi, G., Mylopoulos, J.: Ontology-based modeling and analysis of trustworthiness requirements: Preliminary results. In: International Conference on Conceptual Modeling, pp. 342–352 (2020). Springer [Lewis et al.2021]Lewis21tai Lewis, D., Filip, D., Pandit, H.J.: An ontology for standardising trustworthy AI. In: Hessami, A.G., Shaw, P. (eds.) Factoring Ethics in Technology, Policy Making, Regulation and AI. IntechOpen, Rijeka (2021). Chap. 5. 10.5772/intechopen.97478 . https://doi.org/10.5772/intechopen.97478 [Golpayegani et al.2022]golpayegani2022comparison Golpayegani, D., Pandit, H.J., Lewis, D.: Comparison and analysis of 3 key ai documents: EU’s proposed ai act, assessment list for trustworthy AI (ALTAI), and iso/iec 42001 AI management system. In: Irish Conference on Artificial Intelligence and Cognitive Science, pp. 189–200 (2022). Springer [2021]9410482 IEEE draft model process for addressing ethical concerns during system design. IEEE P7000/D7, April 2021, 1–83 (2021) [Manola et al.2004]manola2004rdf Manola, F., Miller, E., McBride, B., : Rdf primer. W3C recommendation 10(1-107), 6 (2004) [Hitzler et al.2009]hitzler2009owl Hitzler, P., Krötzsch, M., Parsia, B., Patel-Schneider, P.F., Rudolph, S., : Owl 2 web ontology language primer. W3C recommendation 27(1), 123 (2009) [Isaac and Summers2009]isaac2009skos Isaac, A., Summers, E.: Skos simple knowledge organization system primer. Working Group Note, W3C (2009) [2020]ISODIRECTIVES_PART1 Consolidated ISO Supplement — Procedures specific to ISO. Standard, International Organization for Standardization, Geneva, CH (March 2020) [Aruna et al.2011]aruna2011survey Aruna, T., Saranya, K., Bhandari, C.: A survey on ontology evaluation tools. In: 2011 International Conference on Process Automation, Control and Computing, pp. 1–5 (2011). IEEE [Poveda-Villalón et al.2014]poveda2014oops Poveda-Villalón, M., Gómez-Pérez, A., Suárez-Figueroa, M.C.: OOPS! (OntOlogy Pitfall Scanner!): An On-line Tool for Ontology Evaluation. International Journal on Semantic Web and Information Systems (IJSWIS) 10(2), 7–34 (2014)
http://arxiv.org/abs/2408.12513v1
20240822160845
Beyond Shortsighted Navigation: Merging Best View Trajectory Planning with Robot Navigation
[ "Srinath Tankasala", "Roberto Martín-Martín", "Mitch Pryor" ]
cs.RO
[ "cs.RO" ]
Beyond Shortsighted Navigation: Merging Best View Trajectory Planning with Robot Navigation Srinath Tankasala stankasa@utexas.edu Roberto Martín-Martín robertomm@cs.utexas.edu Mitch Pryor mpryor@utexas.edu August 26, 2024 ========================================================================================================================= § ABSTRACT Gathering visual information effectively to monitor known environments is a key challenge in robotics. To be as efficient as human surveyors, robotic systems must continuously collect observational data required to complete their survey task. Inspection personnel instinctively know to look at relevant equipment that happens to be “along the way.” In this paper, we introduce a novel framework for continuous long-horizon viewpoint planning, for ground robots, applied to tasks involving patrolling, monitoring or visual data gathering in known environments. Our approach to Long Horizon Viewpoint Planning (LHVP), enables the robot to autonomously navigate and collect environmental data optimizing for coverage over the horizon of the patrol. Leveraging a quadruped's mobility and sensory capabilities, our LHVP framework plans patrol paths that account for coupling the viewpoint planner for the arm camera with the mobile base's navigation planner. The viewpath optimization algorithm seeks a balance between comprehensive environmental coverage and dynamically feasible movements, thus ensuring prolonged and effective operation in scenarios including monitoring, security surveillance, and disaster response. We validate our approach through simulations and in the real world and show that our LHVP significantly outperforms naive patrolling methods in terms of area coverage generating information-gathering trajectories for the robot arm. Our results indicate a promising direction for the deployment of mobile robots in long-term, autonomous surveying, and environmental data collection tasks, highlighting the potential of intelligent robotic systems in challenging real-world applications. § INTRODUCTION This paper introduces a method for long-horizon viewpoint planning (LHVP) for conducting surveys and patrols with ground robots, such as quadrupeds. Robots have been increasingly integrated into essential tasks such as continuous monitoring of infrastructure, or evaluating for damages, leaks or corrosion. Notable applications include 3D modeling of industrial facilities, <cit.>, and monitoring such as the Boston Dynamics Orbit system <cit.>. In many of these situations, the structures under examination are often represented through 3D models usually as high-resolution meshes or voxelized formats. Subsequently, robots are then redeployed to either enhance the resolution of these models to a higher accuracy or to conduct inspections for any alterations, potential dangers, or defects, such as cracks and corrosion. These applications involve surveying and monitoring tasks where the robot follows predetermined patrol paths, which can be set either by a human operator or an autonomous ground navigation planner. A key feature of such systems is the use of a camera mounted on a 6 Degrees of Freedom (DoF) arm (“eye-in-hand” configuration) or a high resolution pan-tilt-zoom camera, which can be used for gathering image data for scene reconstruction. In this work we explore a planning system that synchronizes the robot's arm camera with its base movement for maximum scene coverage, eliminating unnecessary stops of the robot base to capture images. 
Active perception, as defined by <cit.>, is the ability of an embodied AI agent to understand the reasons for its sensing needs, choose its perception targets, and decide the manner, timing, and location of achieving this perception. In mobile robotics, various active perception techniques have been developed, particularly for motion creation through a Next-Best-View (NBV) strategy, as in <cit.>. Each NBV is selected using feedback from the current partial reconstruction of the environment. Most NBV techniques prioritize viewpoints based on information gain (IG) <cit.>, aiming to reduce uncertainty by exploring areas not yet observed. An in-depth comparison of different IG-based approaches for NBV is available in <cit.>. If the environment is known beforehand, then view planning is pre-computed and becomes an optimization problem with a cardinality constraint of choosing the best set of N viewpoints VP, i.e. |VP| ≤ N, that maximizes coverage. It is a non-negative monotone submodular function maximization problem, which is NP-hard. The approximate solution for the viewpoint set, VP, is typically generated using the greedy algorithm, as it has theoretical guarantees <cit.> and can usually be solved with O(Nlog(N)) complexity, e.g., the recursive greedy algorithm <cit.>. However, applications of these greedy algorithms typically assume the agent can find a connecting tour between all viewpoints in VP, e.g., a Traveling Salesman tour, such as <cit.>. This is applicable to drones, which are a popular choice for surveying and photogrammetry applications <cit.>, and it is easier to generate feasible continuous trajectories for surveying in the Cartesian space for UAVs <cit.>. However, this is not applicable to ground robots, which can have constraints between the base and the arm planners, such as the Spot robot with an arm-mounted camera. This work considers view planning in a known environment where the robot base travels along an input path that is pre-determined, possibly to complete a patrol or a primary task. We propose an efficient approach for visually informative motion generation that is conditioned on the robot base trajectory. In particular, we tackle the problem of coverage maximization of known scenes; this is a crucial problem to resolve for efficient surveying and monitoring using mobile robot agents. We demonstrate that by sequentially sampling view candidates for the arm-mounted camera along the base patrol path, we account for the view path's feasibility and hence complete the surveying task as efficiently as possible. Since our planning system synchronizes the camera movement on the robot's arm with the movement of the robot base itself when optimizing for scene coverage, we ensure that the robot does not have to halt for the arm to reach the desired camera poses. There are many applications where the base may not be able to stop. Even if the robot does not achieve the maximum possible coverage during the first patrol, our planning approach can be used to plan camera view trajectories over multiple patrol cycles. We demonstrate greedy search-based camera view path sampling and show that it works better than a viewpoint planner that does not consider the reachability constraints of the robot arm.
The contributions in this work include 1) a novel perception motion generation approach in known environments that maximizes for scene coverage conditioned on robot base trajectories; 2) demonstrate the ability to balance information gain maximization over a long horizon while exploring executable arm motions; 3) and transfer the generated trajectories to real-world robots and generalize well over a wide range of objects and environment layouts. § CAMERA MOTION GENERATION FOR MAXIMUM SCENE COVERAGE In our approach, we investigate scenarios where a mobile robot is deployed in a predefined environment, tasked with the objective of conducting a survey of a facility by performing visual data collection pertaining to specific objects integrated within the facility's layout. This is facilitated by the robot's array of components: a mobile base that ensures agile navigation across diverse scenes, an RGB-D camera for scene perception, and a 6DoF arm mounted on the base. This “eye-in-hand” setup is instrumental in capturing detailed visual data from a multitude of angles and perspectives and is commonly used for robotic surveying <cit.>. The main difference from <cit.> is that we only have control of the hand and not the robot base. The state of the robot base is defined by the position of its mobile base (P^base∈ SE(2)), representing its planar movement. The camera's orientation and position (P^cam) are given by the end effector (P^eef∈ SE(3)). This decoupling of the arm from the base is necessary, as the robot base's operational dynamics and planned survey paths are determined using navigation planners, such as <cit.>, and may be dictated by a different specified objective. Given the locations of the target objects of interest, indicated by (L^1, L^2, L^3,...), a voxelized 3D occupancy map is used to plan safe movements and assess the information gain (IG) from candidate camera poses P^cam, <cit.> for example. The primary goal is to enable the robot to efficiently collect data while accounting for the movement and velocity constraints on the arm. The motion generation for the robot arm, follows several guiding principles: * IG is gathered over the entire horizon in which the target objects are visible from the robot's base locations. * The base trajectory is known and discretized to calculate arm configurations for the camera poses * Feasible camera view poses are sequentially generated in Cartesian space to cover all viewing orientations reachable by the camera as the robot traverses the base trajectory. The viewpoint optimization problem can be formulated as: max_VP⊂ Z IG(VP) s.t. cardinality(VP) ≤ N Where, Z is the set of all camera pose trajectories of cardinality ≤ N that are feasible for the given path, P, of the robot base. This section introduces a comprehensive motion generation pipeline that adheres to these principles, involving the sampling of multiple viable camera paths for the robot to accomplish the surveying task. §.§ View candidates generation Given the location of the objects of interest in patrol rounds, we discretize the base path uniformly, for convenience, as shown in figure <ref> and determine suitable view candidates at each of those base locations. For the current base location P_i and end effector pose τ_i, the set V = {τ_i+1^1, τ_i+1^2, …, τ_i+1^M} represents the collection of potential camera poses for viewpoint selection at the next base location. 
Each view in this set is represented as τ_i+1^j, j ∈ [1,…, M], where τ_i+1^j ∈ℝ^3 ×SO(3) denotes the 6 dimensional sensor pose. To efficiently assess the potential candidates, the process utilizes a series of filters. Candidates that fail to meet the criteria of a filter are removed from the set of possible views. Figure <ref> illustrates the application of these filters. Given a desired average end effector velocity, v_eef, we uniformly sample poses in cartesian space and orientations that can be reached before the base gets to the next base pose (P_i+1). Following this initial filter, the remaining candidates are evaluated based on the positioning factor, which checks for any collisions between the candidate's position and the surrounding environment. Any candidate view that does not meet certain criteria during a “filtering” process is eliminated from the pool of potential views. Finally we use reachability of poses based on the current arm configuration τ_i and joint distance. The joint distance is calculated by assuming a trapezoidal velocity profile (TVP) based on maximum joint velocities(ω_max) and maximum joint accelerations(α_max) and the time step, T_step, to the next base location P_i+1. The TVP bound for joint angles is described as shown in Eq. (<ref>). TVP ( ω_max, α_max, t ) = α_max/2 T_q^2 + ω_max(t-T_q) , if t > T_q α_max/2 t^2, otherwise where, T_q = ω_max/α_max The reachability map in Fig <ref> illustrates the robot arm's end effector reachability within a single time step, using T_step=1s at v_eef=0.25m/s using algorithm <ref>. The left image shows the reachable surface (a sphere of radius v_eef× T_step), while the right image presents a cross-section of it showing all the sampled poses. The 3D voxels are color-coded based on the number of reachable orientations at each location, with green voxels indicating a higher number and red voxels indicating a lower number of reachable orientations. This visualization helps to understand the robot arm's capability in terms of reachable positions and orientations within one time step. §.§ Information gain and view path computation The objective of a next-best-view planner is to select the viewpoint that maximizes the total number of surface voxels observed on the object(s) of interest. The best camera view path on the other hand maximizes the aggregate number of voxels observed along the set of views VP^*. The selection of the world representation, the positioning of the sensor, and the calculation of the gain of information are vital elements for any active perception task. Volumetric models are widely used for visualizing 3D objects, as they offer a compact way of space encoding. We use a dense voxelized representation of the scene and compute the information gain to determine the utility of any given viewpoint using ray casting. The greedy viewpath sampling algorithm expands a tree of possible viewpoints sequentially. At each time step, the algorithm selects the candidate pose with the highest marginal Information Gain (IG) value. The IG value of a pose τ_i^j at time step i and for pose j is denoted by IG(τ_i^j). The process can be visualized as shown in figure <ref>. The objective is to select a view (τ) from the set (V) that maximizes the information gain (IG) over the traversed viewpoint set VP. An elementary method to approximately maximize monotone submodular functions under cardinality constraints as in Eq. (<ref>), involves the greedy algorithm <cit.>. 
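As a rough, self-contained sketch of this sampling-and-selection pipeline (not the authors' implementation; the function names, the info_gain ray-casting oracle and the candidate generator are illustrative assumptions), the TVP-based reachability filter and the greedy marginal-IG loop could look as follows in Python; the formal description of this greedy process is given next.

import numpy as np

def tvp_bound(omega_max, alpha_max, t):
    # Maximum joint displacement achievable in time t under a trapezoidal velocity
    # profile with velocity limit omega_max and acceleration limit alpha_max (Eq. above).
    t_q = omega_max / alpha_max
    if t > t_q:
        return 0.5 * alpha_max * t_q ** 2 + omega_max * (t - t_q)
    return 0.5 * alpha_max * t ** 2

def reachable(q_now, q_cand, omega_max, alpha_max, t_step):
    # Keep a candidate arm configuration only if every joint can cover the required
    # angular distance before the base reaches the next location.
    limits = np.array([tvp_bound(w, a, t_step) for w, a in zip(omega_max, alpha_max)])
    return bool(np.all(np.abs(np.asarray(q_cand) - np.asarray(q_now)) <= limits))

def greedy_view_path(candidates_per_step, configs_per_step, q_start,
                     info_gain, omega_max, alpha_max, t_step):
    # candidates_per_step[i]: camera poses sampled near base location P_{i+1};
    # configs_per_step[i]:    the corresponding arm (IK) configurations;
    # info_gain(pose):        set of surface-voxel ids visible from pose (ray casting).
    view_path, covered, q_now = [], set(), q_start
    for poses, configs in zip(candidates_per_step, configs_per_step):
        best = None
        for pose, q in zip(poses, configs):
            if not reachable(q_now, q, omega_max, alpha_max, t_step):
                continue  # filtered out: violates the arm velocity constraints
            marginal = len(info_gain(pose) - covered)  # marginal IG given views so far
            if best is None or marginal > best[0]:
                best = (marginal, pose, q)
        if best is not None:
            _, pose, q_now = best
            view_path.append(pose)
            covered |= info_gain(pose)
    return view_path, covered

In this sketch the marginal gain is simply the number of previously unseen surface voxels a candidate pose would add, mirroring the ray-casting-based IG computation described above.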
This process begins with an initial empty set VP_0, and in each iteration i, it incorporates the element that offers the maximum increase according to the discrete derivative IG(e|VP^i-1)(ties broken randomly): VP^i = VP^i-1∪{_e IG( e|VP^i-1 ) } A notable result from <cit.> established that the greedy algorithm gives an effective approximation for the optimal solution of the NP-hard submodular function optimization problem of Eq. <ref>. For a non-negative monotone submodular function f: 2^V →ℝ_+, let {VP^i}_i≥ 0 be the greedily selected sets defined in Eq. (<ref>). Then for all positive integers k and ℓ, f(VP^ℓ) ≥ (1 - e^-ℓ/k) max_VP: |VP|≤ k f(VP). In particular, for ℓ = k, f(VP^k) ≥ (1 - 1/e) max_VP: |VP|≤ k f(VP). Using the above strategy, we employ the greedy search approach to determine the optimal view path. The flow of the process is illustrated in Figure <ref>. Initially, a collection of potential views is generated by uniformly sampling poses in the Cartesian space based on the current end effector position and orientation τ_i. Subsequently, these views undergo prioritization based on a utility function (marginal IG), with the optimal camera pose (τ^*) being the one that yields the highest utility value (IG(τ^*|VP^i-1)). This way, we consider not just the Next Best View but account for IG over the horizon of the robot's base trajectory. § EXPERIMENTS We test the effectiveness of our planner in both simulation and real world scenarios. The primary metric for evaluating the proposed planner is the coverage of the reconstruction within the region of interest. A view is considered informative if it sees any surfaces of the objects of interest. We assess this in our IG computation by performing a ray casting and downsampling the ray density by 10 to make it run fast. §.§ Simulation Our simulation places multiple objects in different configurations, with different possible tours by the robot (as shown in Fig. <ref> and <ref>). We create 2 configurations and 3 unique robot base paths around the objects. We utilize different assets to ensure that the algorithm is evaluated over multiple object shapes and measure how well the selected view paths offer varied angles on the objects. We use six models commonly seen in refinery settings, as seen in Fig. <ref>. Gazebo was used for the simulations and stereo processing is conducted in ROS. We use the marginal IG function (Eq. (<ref>)) as the utility function to perform the greedy search along the view path graph. To measure reconstruction quality by surface coverage, we evaluate the reconstructed point clouds against the original model's point cloud. For every point in the original model, we find the nearest point in the reconstructed model. If this point is within a specified registration distance, the surface point from the original model is deemed “captured”. For simulations, we choose the registration distance to be equal to the unit size of the voxel (5 cm). Surface coverage (C) is calculated as the proportion of these captured surface points relative to the total surface points of the model: C = observed surface points/total points on original model surface We average the coverage value over different assets so the coverage estimate is only affected by the object layouts and the robot's base path. The maximum and minimum coverage values seen over individual objects are also reported (Table <ref>) to understand the variance in performance. We compare our method with a baseline heuristic approach that looks only at the nearest object. 
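The surface coverage metric C defined above can be computed with a simple nearest-neighbour query; the following sketch is illustrative only (the authors do not specify their implementation) and uses a KD-tree together with the 5 cm registration distance from the simulations.

import numpy as np
from scipy.spatial import cKDTree

def surface_coverage(model_points, reconstructed_points, registration_distance=0.05):
    # Fraction of ground-truth surface points that have a reconstructed point within
    # the registration distance (equal to the 5 cm voxel size in simulation).
    tree = cKDTree(np.asarray(reconstructed_points))
    dists, _ = tree.query(np.asarray(model_points), k=1)
    captured = np.count_nonzero(dists <= registration_distance)
    return captured / len(model_points)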
In this baseline, looking at the nearest object points the camera towards the object that is closest to the robot along the base path. If multiple objects nearby are equidistant from the base path, then the object center is arbitrarily chosen among them. A comparison of our method to the baseline is shown in Table <ref>. We run the planner with v_base = 0.5m/s, T_step=2s and v_eef=0.4 in all simulation results. Focusing on the mean coverage values in Table <ref>, we clearly see that our long horizon viewpath planner achieves greater coverage regardless of the chosen layout and base path followed by the robot. This is especially true in the Zig-Zag base path where it is important for the robot to keep switching between viewing different objects to collect as much information as possible. Looking only at the nearest object caused that method to completely miss faces of certain objects. Our planning approach is also able to focus enough attention across objects, leading to a higher (Max, Min) coverage individually, compared to focusing only on the nearest object. §.§ Ablation studies Table <ref> presents the ablation of not considering joint reachability constraints during sequential viewpoint sampling, i.e. not using Algorithm <ref>. Without filtering unreachable poses in the sequential viewpoint sampling process, our greedy search traverses a viewpath along the graph that is not executable on the real robot, leading to significantly poorer performance (78% vs 47% in the zig-zag case, for example). This highlights the importance of taking the arm velocity constraints into account when planning the viewpath. We study the effect of the main hyperparameters used to generate the viewpath graph, namely v_eef, v_base and T_step. These parameters directly affect the viewpoint graph that is traversed greedily (Fig <ref> and <ref>). Fig <ref> depicts the coverage possible (at T_step=2s) as a function of v_base along a zig-zag path in a linear environment layout. We observe that as the base moves slower, it is able to achieve higher coverage of the environment, which is expected. The same effect occurs when T_step is reduced, as shown in Fig <ref>. Reducing v_base or T_step has the effect of increasing the length of the viewpath graph (Fig <ref>), leading to more views collected for a given survey. If, however, we reduce the average end effector velocity v_eef (e.g., to limit motion blur), then the planned viewpath has lower coverage. This arises from the fact that the sequential sampling (Fig <ref>) gets constrained to a smaller volume, leading to less diverse view candidates considered at each time step. We further study how the maximum coverage changes if the robot base could stop at each location P_1, P_2,...,P_N along the base path. Halting the robot base increases the number of reachable poses at each base location P_i; this consequently leads to better overall coverage but increases survey time. Table <ref> shows the mean coverage rate (%/s) and total time for each survey. While the total coverage is higher when the base halts along the path, the rate at which the environment is covered is higher when the robot doesn't halt along the base path. This would make our approach useful in cases where the battery time is limited or in time-critical missions. In our final ablation study, we compare the coverage if we only use the images taken at the viewpoints for reconstruction vs utilizing the entire camera stream. The results are shown in Table <ref>.
In certain cases, the difference seems small (59.7% vs 47.3% in the loop path), suggesting that the arm may be undergoing large jumps in joint angles, causing the camera to not be pointed at the objects of interest in the intermediate steps. Choosing a smaller T_step helps ensure a smoother trajectory and can capture data in the intermediate frames to further improve the reconstruction quality of the scene. §.§ Real world reconstruction In this experiment, we used the eye-in-hand configuration of the Spot robot to reconstruct the lab space shown in Figure <ref> to demonstrate that the approach can efficiently handle real-world environments. We build a TSDF representation from the image stream taken by the robot arm as it executes the desired trajectory, setting the registration distance threshold at 4 cm and the TSDF grid size to 4 cm. The reconstructed map from the data collected using our planner is shown in Fig <ref> and compared to a high-fidelity point cloud. The noisy reconstruction (especially of the objects on the table) is due to motion blurring as the robot walks. The camera experiences vibrations due to Spot's walking gait, leading to blurry captures despite tuning the hyperparameters as shown in Table <ref>. Our method still performs better than the nearest-object baseline, which has the same reconstruction issue. This reconstruction issue is unrelated to the proposed planning principle in this work. Thus, our experimental results are still significant and demonstrate the effectiveness of our approach. Optimizing the robot gait for smoother image data acquisition could be an avenue for future work. § CONCLUSION This work presented a new view planning approach for long horizon camera trajectory generation that synchronizes the sensor trajectory to that of the robot base. This approach is derived from the greedy search method commonly used in viewpath optimization and is adapted to this problem by our novel view sampling approach. Our approach generates a graph, along the base path, where executable view candidates are sequentially sampled and the graph is traversed based on maximum marginal information gain. This results in a camera viewpath that maximizes IG over a long horizon while efficiently searching over the set of all possible view paths. We successfully demonstrate the practicality and effectiveness of our approach on a Spot robot in both simulation and the real world. Our ablation studies highlight how different hyperparameter choices in the planner affect the coverage performance. The developed method performs better than the heuristic baseline of looking at the nearest object across multiple environment settings. Moving forward, we intend to delve into more refined search optimization methodologies, such as recursive greedy search, to develop camera trajectories in scenarios where the view planner lacks control over the robot base trajectory.
http://arxiv.org/abs/2408.12249v1
20240822093740
LLMs are not Zero-Shot Reasoners for Biomedical Information Extraction
[ "Aishik Nagar", "Viktor Schlegel", "Thanh-Tung Nguyen", "Hao Li", "Yuping Wu", "Kuluhan Binici", "Stefan Winkler" ]
cs.CL
[ "cs.CL", "cs.AI", "cs.LG" ]
Accounts of using the Tustin-Net architecture on a rotary inverted pendulum Stijn van Esch1 Fabio Bonassi2 Thomas B. Schön2 August 26, 2024 =========================================================================== § ABSTRACT Large Language Models (LLMs) are increasingly adopted for applications in healthcare, reaching the performance of domain experts on tasks such as question answering and document summarisation. Despite their success on these tasks, it is unclear how well LLMs perform on tasks that are traditionally pursued in the biomedical domain, such as structured information extraction. To bridge this gap, in this paper, we systematically benchmark LLM performance in Medical Classification and Named Entity Recognition (NER) tasks. We aim to disentangle the contribution of different factors to the performance, particularly the impact of LLMs' task knowledge and reasoning capabilities, their (parametric) domain knowledge, and the addition of external knowledge. To this end, we evaluate various open LLMs—including BioMistral and Llama-2 models—on a diverse set of biomedical datasets, using standard prompting, Chain-of-Thought (CoT) and Self-Consistency based reasoning as well as Retrieval-Augmented Generation (RAG) with PubMed and Wikipedia corpora. Counter-intuitively, our results reveal that standard prompting consistently outperforms more complex techniques across both tasks, laying bare the limitations in the current application of CoT, self-consistency and RAG in the biomedical domain. Our findings suggest that advanced prompting methods developed for knowledge- or reasoning-intensive tasks, such as CoT or RAG, are not easily portable to biomedical tasks where precise structured outputs are required. This highlights the need for more effective integration of external knowledge and reasoning mechanisms in LLMs to enhance their performance in real-world biomedical applications. § INTRODUCTION The success of Large Language Models (LLMs) promises to reshape the landscape of AI healthcare applications, especially for scenarios relying on Question Answering <cit.>, summarisation <cit.> and extracting insights from unstructured patient-generated health data <cit.>. While considerable progress has been made in leveraging LLMs for tasks requiring free-text outputs, much of the focus has been on optimizing the parametric knowledge—the information stored in the model's weights and learned during training. Recent works explore methods such as fine-tuning on task-specific data and in-context learning (ICL) and report significant improvements in model performance. However, these approaches primarily enhance the models' internal knowledge representation. As such, they rely on readily available data for the structured tasks at hand, be it in the form of training sets for task-specific fine-tuning <cit.>, or for selecting good-quality representative few-shot examples for ICL <cit.>. In the biomedical domain, such resources for structured prediction tasks are typically not available, as requirements might arise ad-hoc—for example when researchers need to process a set of medical records to find patients satisfying inclusion criteria for a clinical trial <cit.> (e.g., whether they're a smoker). But even for well-established tasks, such as medication name extraction, for which resources exist <cit.>, these resources often prove to be insufficient in a practical context, due to the domain shift between public resources and internal hospital data <cit.>.
Therefore, relying solely on training-set-driven improvements to the parametric knowledge of LLMs as the driver of performance for structured prediction tasks is often infeasible, and approaches need to be able to perform well in zero-shot scenarios. Despite this, the literature currently lacks a systematic investigation of other crucial aspects of knowledge utilization. In order to address this research gap, we first postulate that the performance of LLMs in medical reasoning and information extraction tasks in the “true” zero-shot setting[by “true zero-shot” we refer to the scenario where no examples are available to solve the task and no information beyond the labels and their semantically meaningful names is made available to the model <cit.>.] hinges on three distinct categories of knowledge: * Parametric Knowledge: The inherent knowledge embedded within the model's parameters. * Task Knowledge: The model's ability to reason about the specific task, including understanding relevant labels and the context of the task. * External Knowledge: Additional information and context retrieved to supplement the model's understanding and decision-making process. Research evaluating these aspects specifically in the medical domain <cit.> is being conducted actively, but these works have mostly focused on knowledge-intensive prerequisite tasks, such as Multiple-Choice Question Answering. While useful for evaluating the medical knowledge of LLMs, these works do not address the question of the medical capabilities of LLMs to succeed on tasks that are more reflective of real applications, such as medical text classification or information extraction. As such, it is necessary to evaluate whether advancements derived from methods that enhance performance, such as (zero-shot) Chain-of-Thought (CoT) reasoning <cit.>, self-consistency <cit.> and Retrieval-augmented Generation (RAG) <cit.>, carry over to such structured prediction tasks. Moreover, these studies often employ large, commercial models like ChatGPT <cit.> or GPT-4 <cit.>, which present significant challenges in real-world applications due to their computational cost and privacy concerns associated with sending sensitive data to third-party APIs. Furthermore, there is a growing concern regarding the reliability of LLMs in medical applications, as even the most powerful models are prone to generating hallucinations, compromising the truthfulness of the outputs. Although constrained generation has shown promise in mitigating these issues, its application in medical information extraction tasks has been limited. Thus, there are three problems that currently inhibit our understanding of the capabilities of LLMs on structured prediction tasks in the medical domain, and, as a consequence, their improvement: * Existing approaches to structured prediction tasks in the medical domain typically enhance parametric knowledge and rely on the availability of training sets, which might not be realistic; * “True zero-shot” studies and methods to improve performance in such settings are mostly carried out on surrogate tasks such as Question Answering, and whether they can be adapted to structured prediction tasks is unknown; * Advancements are typically reported on large-scale, proprietary LLMs which might be unusable due to privacy concerns and the inaccessibility of logits for constrained decoding.
In this paper, we aim to address these gaps by systematically benchmarking the performance of LLMs in medical classification and NER tasks as a representative selection of structured prediction tasks. We focus on assessing the impact of task knowledge and external knowledge while maintaining the parametric knowledge at a reasonable yet static level. Our approach involves exploring a range of techniques, including CoT reasoning, Retrieval-Augmented Generation (RAG), and constrained generation, which have not been extensively applied in these settings. By providing a comprehensive evaluation of these methods, we seek to offer new insights into the practical deployment of LLMs in the medical domain, highlighting both the challenges and potential solutions. To summarise, this paper makes the following novel contributions: First, to our knowledge, we present the first comprehensive benchmark for LLMs in medical classification and Named Entity Recognition (NER) tasks, providing a systematic evaluation of their information extraction performance in these critical structured prediction tasks within the medical domain. Second, we investigate the impact of various knowledge enhancement techniques, including Chain of Thought (CoT) reasoning, Self-Consistency, Retrieval-Augmented Generation (RAG), and constrained generation, which have not been extensively explored in medical information extraction settings. Notably, we demonstrate that parametric knowledge capacity, i.e., model size, is a primary and often sole driver of performance in zero-shot settings, offering insights into the limitations and potential of current LLM architectures. § RELATED WORK We briefly survey the existing benchmarking literature in the medical domain, outlining the lack of studies focusing on structured prediction tasks. Furthermore, we cover recent prompting techniques that were proposed to elicit reasoning in LLMs, and augment their domain knowledge, either by better tapping into their parametric knowledge or by explicitly providing them with relevant external context. Notably, we omit approaches that rely on the existence of training sets, such as few-shot prompting <cit.> or model fine-tuning, as one of the key challenges in the medical domain is the lack of annotated task data, due to privacy concerns over sharing medical records. Instead, as outlined in the introduction, we focus on “true” zero-shot capabilities of LLMs. Existing LLM Benchmarks: With the rising popularity of LLMs, many works evaluated their performance in the biomedical and clinical domains. These works typically focus on evaluating domain knowledge by means of Question Answering <cit.>, or focus directly on possible application scenarios, such as summarisation <cit.> or clinical coding <cit.>. Many works combine these two directions in an effort to provide more comprehensive benchmarks <cit.>. However, many of these works overlook the wealth of existing literature and the plethora of available resources for traditional structured prediction tasks in the biomedical domain, such as document classification, entity recognition and linking, and event and relation extraction (e.g., <cit.> to name a few). <cit.> have provided a comprehensive and unified collection of these resources; however, their work prioritises reportage of the resource collection over benchmarking results. Their preliminary evaluations suggest that their evaluated pre-LLM era models barely surpass the random guess baseline in the zero-shot setting.
We build upon their work by providing a detailed analysis of the extent to which approaches that enhance reasoning and knowledge in LLMs help to challenge this status quo. Reasoning- and Knowledge-enhancing approaches: Current work attempts to improve the performance of LLMs from different knowledge utilization perspectives. One of the obvious methods is full-parameter domain-specific pre-training <cit.>. For example, <cit.> propose the largest medical foundation model, trained on both biomedical and clinical data, up to 70B. <cit.>, on the other hand, believe larger LLMs are computationally expensive to run, proposing a 2.7B LLM specific to biomedical NLP tasks. When fine-tuned, the relatively small model competes with larger LLMs. In our study, we compare domain-generalist models with those adapted to the medical domain. Since full-parameter tuning is costly, many works focus on domain knowledge adaptation by pre-training <cit.> or instruction tuning <cit.> with adapters. Training-free approaches encompass chain-of-thought (CoT) <cit.> and self-consistency <cit.>. Concerned with the lack of grounding resulting in hallucination, recent work introduces RAG methods <cit.>. However, most of these efforts have focused on performance in a particular knowledge paradigm and have lacked a systematic assessment of performance on structured prediction, which we address in our study. § METHODOLOGY Our methodology is designed to answer the following two research questions: “How well do LLMs perform on structured prediction tasks?” and “To what extent can approaches that enhance task and external knowledge improve their performance?” To answer the first research question, we benchmark a representative sample of LLMs on a large collection of biomedical text classification and NER datasets. More specifically, we choose the tasks of Medical Text Classification and NER as representative structured prediction tasks. We focus on the “true” zero-shot setting, since, as discussed before, this allows us to establish the level of models' original parametric knowledge, which is desirable as it more closely reflects real-world application scenarios, because annotated training data for such tasks in the biomedical domain is usually not available due to the ad-hoc nature of task requirements and privacy constraints of medical records. Thus, improving parametric knowledge is often infeasible in practice. To answer the second question, we compare their zero-shot performance to various methods that aim to enhance task knowledge and external knowledge, while keeping the parametric knowledge static. §.§ Datasets Since we evaluate different prompting techniques, we restrict the choice of tasks to those where the number of possible labels is small enough to fit in the evaluated LLMs' context window. We restrict the number of labels to ten and the mean length of the input documents to at most 2048 tokens. This leaves us with 14 different classification datasets from the BigBio collection[for the GAD dataset, we only select 1 fold out of the 10 available, as the folds feature the same task for different data, unlike other datasets. We also skipped the Chinese subset of meddialog as we had difficulties loading the dataset]. For the NER task, we sample 12 datasets from the pool of those that satisfy the criteria.
The resulting dataset sample features four non-English datasets and six non-public classification datasets, which allows us to investigate whether LLMs perform better on minority languages or on data that is less likely to be found in public pre-training corpora. We run the evaluation on the official test-set split where available, otherwise we consider the full dataset. For datasets with more than 500 instances, we sample 500 random but fixed instances to speed up the experiments. Overall, our selection spans English and non-english source data, publicly available and private datasets, and various domains such as scientific papers, medical notes and social media. The overview of the datasets follows below, with full details to be found in the technical appendix. §.§.§ Classification. The datasets used for classification tasks include both single-label and multi-label datasets, covering a wide range of biomedical and clinical domains. For single-label classification, the GAD dataset focuses on identifying associations between genes and diseases <cit.>, while the GEO dataset is concerned with classifying microarray, transcriptomics, and single-cell experiments from the Gene Expression Omnibus (GEO) database <cit.>. The MedDialog dataset aims to classify dialogue snippets as either being said by a doctor or a patient <cit.>. Furthermore, the CZIDrsm dataset has several subsets, including one for classifying research articles based on aspects of disease research (CZIBase), and others for identifying whether a paper describes substantive research into Quality of Life (CZIQoL) or is a natural history study (CZINatHist). In multi-label classification, the LitCovid dataset is used for the classification of COVID-19-related articles <cit.>. The CAS and ESSAI datasets are utilized for identify negation and uncertainty clinical cases from French-speaking countries <cit.>. The NTCIR13 datasets include subsets for disease classification of tweets in Japanese (*-Ja), English (*-En), and Chinese (*-Zh) <cit.>. Additionally, the PsyTAR dataset is used for sentence classification of various drug-related effects, such as Adverse Drug Reactions (ADR) and Withdrawal Symptoms (WDs) <cit.>, while the SciCite dataset is used for citation intent classification based on the context within computer science and biomedical domains <cit.>. §.§.§ NER. The datasets for Named Entity Recognition (NER) tasks are similarly divided into entity recognition (single entity type) and classification (multiple entity types). In the single-type category, the GeneTag dataset is used for gene/protein NER, with two annotation versions: the original GeneTag-G and the corrected GeneTag-C <cit.>. Additionally, the GENIA-PPI dataset focuses on protein-protein interactions or gene regulatory relations within the GENIA corpus, capturing primarily static relations <cit.>. The multiple-type NER datasets encompass various complex biomedical tasks. The AnEm dataset targets anatomical entity recognition <cit.>, while the BioInfer dataset focuses on recognizing proteins, genes, and RNA entities <cit.>. The Genia-EE dataset is used for the GENIA Event corpus <cit.>, and the BioNLP11-REL dataset is employed for extracting part-of relations between genes/proteins and associated entities <cit.>. Furthermore, the BioNLP-13-CG dataset is used for Cancer Genetics (CG) information extraction, focusing on recognizing events represented as structured n-ary associations of given physical entities <cit.>. 
The BioNLP-13-GRO dataset aims to populate the Gene Regulation Ontology with events and relations <cit.>, and the BioNLP-13-PC dataset is used for the automatic extraction of biomolecular reactions from text <cit.>. Lastly, the PICO dataset deals with recognizing (P)articipants, (I)nterventions, and (O)utcomes <cit.>, and the MLEE dataset is used for event extraction related to angiogenesis <cit.>. §.§ Models For our experiments, we employed two instruction-tuned variants of the Llama-2 model—7B and 70B—both <cit.>, alongside the BioMistral-7B model <cit.>, which was further pre-trained on the biomedical domain. Since we make use of constrained generation to generate model outputs and guide the models' decoding process, we restrict the evaluation to open-source models, as this is not possible for proprietary models such as GPT-4. §.§ Techniques Standard prompting was used as a baseline for both the classification and the NER tasks. Chain-of-thought reasoning <cit.> has been shown to improve performance, particularly in QA and logical reasoning tasks. Thus, we also ran experiments with chain-of-thought reasoning to measure its impact on model performance. For the NER task, we adapted a more guided, two-stage approach <cit.> to implement a novel chain-of-thought reasoning approach. Here, the first stage involves inducing a generic entity name from a dataset's known entity labels—e.g., “Bodypart” for the NER labels describing different body parts—and then labelling the input document with that generic entity type. In the second stage, all entities labelled in this way are further disambiguated with their respective fine-grained dataset NER labels. Retrieval-Augmented Generation <cit.> has been established as an effective technique to improve model performance by introducing relevant non-parametric knowledge to models and thus grounding the generated outputs in factual information. <cit.> conducted a systematic study of RAG on medical QA, and we incorporate their findings into our study. We used PubMed abstracts <cit.> and Wikipedia articles as knowledge corpora, because the experiments of <cit.> found that using PubMed improved performance over non-RAG techniques, while using Wikipedia reduced performance in medical QA tasks. Our goal was to evaluate whether the same holds true for structured prediction tasks as well. For the RAG module, we made use of FAISS <cit.>, which allows retrieval of the most similar documents based on semantic similarity, where we used a sentence-transformers <cit.> model for embedding input documents and corpora. For each experiment, the number of retrieved documents was computed based on the maximum number of documents that could be used without exceeding the token limit of the model. Self-consistency, proposed by <cit.>, improves chain-of-thought reasoning of LLMs by sampling reasoning paths for a given problem, followed by a majority vote for the final answer. We also conduct a set of experiments employing self-consistency to investigate whether such improvements can be observed on structured prediction tasks in the medical domain as well. For classification tasks, self-consistency was employed to generate multiple reasoning chains for the given problem, followed by answer extraction from each reasoning chain and majority voting to select the final answer. For NER tasks, since we follow the two-stage approach, self-consistency was employed in both stages.
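A minimal sketch of the RAG retrieval setup described above (a FAISS index over sentence-transformers embeddings of the corpus) is given below. It is illustrative only: the index type, corpus format, embedding model and the count_tokens helper are assumptions made here, not the authors' code.

import faiss
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

def build_index(passages):
    # Embed the knowledge corpus (e.g. PubMed abstracts or Wikipedia paragraphs)
    # and store the vectors in a flat inner-product FAISS index.
    emb = encoder.encode(passages, normalize_embeddings=True)
    index = faiss.IndexFlatIP(emb.shape[1])
    index.add(emb)
    return index

def retrieve(query, passages, index, token_budget, count_tokens, k=20):
    # Return the most semantically similar passages, keeping only as many as fit
    # into the remaining context window of the model.
    q = encoder.encode([query], normalize_embeddings=True)
    _, ids = index.search(q, k)
    context, used = [], 0
    for i in ids[0]:
        cost = count_tokens(passages[i])
        if used + cost > token_budget:
            break
        context.append(passages[i])
        used += cost
    return context

The retrieved passages are then prepended to the task prompt, so the number of documents per query is effectively bounded by the model's token limit, as described above.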
For the NER task, multiple general entity labels were generated in the first stage, and entities were extracted for each such label. In the second stage, self-consistency was again used for the entity selection phase as well as the entity label determination step. Majority voting was utilised for the final label or class selection in each case <cit.>. Constrained decoding in LLMs <cit.> was used to ensure structured information extraction and text generation. This allowed us to evaluate the LLMs for the task at hand without the added variability due to the aleatoric uncertainties brought about by the probabilistic language generation fundamental to the architectures of the models. More specifically, for classification tasks, we ensured the presence of at least one label in the generated outputs. For NER, in the first step we restricted generation to spans occurring in the text, and in the second step, for each of the spans, we restricted generation to one of the possible labels. This is also one of the reasons why we opted against evaluating API-based closed-source LLMs[The other reason being their lack of transparency with regard to training data, which violates our “true” zero-shot setting.], as in our initial experiments the hallucinations in generated outputs created problems with reliably parsing the structured outputs. We refer to chain of thought as CoT, Self-consistency as SC, RAG as for PubMed and Wikipedia corpora, respectively, and to standard prompting as Vanilla. § EVALUATION RESULTS §.§ Overview of results Reasoning- and knowledge-enhancing techniques seem to not improve performance. Figure <ref> and Figure <ref> compare the results of the best-performing techniques for each model for classification and NER, respectively. As seen in Table <ref>, perhaps counter-intuitively, Standard Prompting consistently achieves the highest average F1 scores across all models for the classification task, with BioMistral-7B obtaining 36.48%, Llama-2-70B-Chat-AWQ achieving 40.34%, and Llama-2-7b-chat-hf scoring 34.92%. This result indicates that for structured prediction tasks, more complex reasoning techniques such as Chain of Thought (CoT) Prompting or Retrieval-Augmented Generation (RAG) do not outperform simpler approaches like Standard Prompting. For NER tasks, the results present a more nuanced picture compared to the classification tasks. While Standard Prompting remains effective, there is a noticeable shift in performance across different models and datasets. Notably, the scores are significantly lower than typical F1 scores in biomedical NER benchmarks. For instance, the NCBI disease corpus <cit.> and CHEMDNER dataset usually yield higher performances with specialized models or extensive pre-training. State-of-the-art models on these benchmarks can achieve Span F1 scores up to 0.90 for the NCBI disease corpus <cit.>. However, similar to our findings, in the true zero-shot setting, NER scores have been reported to be markedly low, even for the general domain <cit.> and when supplying label descriptions <cit.>. A possible reason for the poor performance might be that these approaches have been tailored towards—and shown to work well on—knowledge- and reasoning-intensive tasks, such as Question Answering <cit.> or Mathematical Reasoning <cit.>. Meanwhile, more narrowly defined tasks like information extraction or classification require the understanding of specific task semantics over generic reasoning capabilities.
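(For reference, the label-constrained, majority-vote answer selection described in the Techniques section can be sketched as follows; sample_reasoning_and_answer is a placeholder for a single constrained-decoding call and is not an actual API of any of the evaluated models.)

from collections import Counter

def self_consistent_label(document, labels, sample_reasoning_and_answer, n_paths=5):
    # Sample several chain-of-thought reasoning paths; each call is assumed to return
    # an answer that constrained decoding has already forced to be one of the allowed
    # labels. The final prediction is the majority vote over the sampled answers.
    votes = []
    for _ in range(n_paths):
        answer = sample_reasoning_and_answer(document, labels)
        if answer in labels:  # defensive check; should hold by construction
            votes.append(answer)
    return Counter(votes).most_common(1)[0][0] if votes else None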
Such narrowly defined tasks seem to not require broad knowledge, as could be found in biomedical paper abstracts or Wikipedia articles, but rather require the application of domain knowledge to a specific and highly contextualized task, contained within the input document and task description. Models need to be able to handle highly specialized vocabulary, including jargon, acronyms, and synonyms that can vary widely between subfields <cit.>. There is a fundamental requirement for context-dependent disambiguation of ambiguity and polysemy as well as nuances and variability in the syntax and expressions of biomedical concepts. This is often developed through specialized pre-training or domain-specific enhancements, which the LLMs have not been able to capture. These challenges necessitate models that not only have robust general NER capabilities but also an intricate understanding of biomedical context, which can vary across different subtasks within the domain. Scale drives improvements. In line with previous observations, we find that the 70B model also shows a considerable improvement (5.4% for classification, 2.2% for NER Span F1) over the 7B model. The most significant difference in performance between the Llama 7B and 70B models is observed when using Self-Consistency with Chain of Thought and RAG (Wikipedia), where the 70B model outperforms the 7B model by 15.45% on classification and on NER tasks. This suggests that the larger model is significantly better at leveraging external knowledge when combined with self-consistency and chain-of-thought prompting. The larger model's increased capacity might be particularly advantageous in handling these complexities, resulting in a more significant performance gap compared to simpler techniques. Methods like Chain-of-Thought Prompting and Self-Consistency with Chain-of-Thought and RAG involve complex reasoning and knowledge integration processes <cit.>. This is further demonstrated by the fact that Llama 70B improves performance by 10.91% when Self-Consistency is added to Wikipedia-based RAG, indicating that self-consistency helps the model combat the drop in performance when adding potentially irrelevant external information for the larger model. Unlike in classification tasks, where Standard Prompting was universally superior, NER performance does not degrade as much when using advanced prompting techniques, particularly when using larger models like Llama-2-70B, likely due to the general lack of epistemic confidence in the answers in the first place. §.§ Detailed Comparison of Prompting Techniques The use of CoT and Self-Consistency is not helpful if there is a lack of parametric knowledge about the task. For BioMistral-7B, using Self-Consistency CoT prompting leads to the biggest reduction of about 16% for classification tasks. One possible reason is that the domain-specific pre-training equips the model to better follow the instructions directly without needing additional reasoning structures, which seem detrimental. Similar to the RAG case, self-consistency seems to not consistently improve performance for NER. While Self-Consistency aims to improve the reliability of Chain-of-Thought prompting by generating multiple reasoning paths and selecting the most consistent one, it might introduce additional complexity leading to errors or inconsistencies. This is especially true if the model's answers have low confidence scores due to insufficient parametric knowledge, which prevents them from reliably solving these problems and would explain the observed performance drop.
For NER tasks, the combination of Chain of Thought (CoT) and Self-Consistency prompting with RAG (Wikipedia) shows the most substantial performance difference between the 70B and 7B models. This suggests that larger models are more adept at leveraging external knowledge and complex reasoning strategies for entity recognition tasks if there is a lack of parametric knowledge. RAG does not help information extraction. The quality and relevance of the retrieved information can significantly impact performance, as seen from the fact that there is an average drop of 16.91% when using RAG with the PubMed corpus and 16.47% when using RAG with the Wikipedia corpus as compared to the best-performing technique for classification. While incorporating external knowledge through RAG can be generally beneficial for QA-based tasks <cit.>, where incorporating relevant facts for the given question can append relevant knowledge to the model, it is not as straightforward in classification and information extraction tasks. This especially has to be considered in the given task setting, where the model could be confused by the presence of irrelevant retrieved information, which adds an additional layer of complexity in extracting the information relevant for answering the question at hand. SC helps models filter out irrelevant noise in case of RAG, but does not help CoT. While Self-Consistency aims to improve the reliability of Chain-of-Thought prompting by generating multiple reasoning paths and selecting the most consistent one, it is fundamentally dependent on the model's epistemic certainty <cit.>. This hinders performance if the model's answers have low confidence scores due to insufficient parametric knowledge, which prevents them from reliably solving these problems and would explain the observed performance drop. §.§ Detailed Per-dataset analysis Models perform significantly better on public datasets. Models perform significantly better on public datasets (average accuracy of 30%) compared to private datasets (average accuracy of 12%). This might hint at possible data leakage during pre-training or instruction-tuning, as publicly available datasets are more likely to be included in a web crawl or a dedicated instruction tuning dataset. This might suggest that model performance on `unseen' (yet publicly available) tasks could be a result of unintentional data leakage rather than a by-product of reasoning or generalisation. Multilingual performance is not scale dependent. As shown in Figure <ref>, smaller models can match or even outperform larger models on Chinese and Japanese datasets but not on English datasets. This may be due to the heavy reliance on large English corpora during training, with limited exposure to medical contexts in other languages.
This forces models to generalize compressed language representations to specialized domains, where overfitting on sparse languages may hinder larger models' performance. LLMs struggle on high-complexity tasks. As seen in Figure <ref>, LLMs seem to struggle to outperform random baselines for both single- and multi-class classification tasks. However, Figure <ref> paints a more nuanced picture: the guessing baseline remains unbeaten only on two of 14 datasets, which drags down the average performance significantly. Figures <ref> and <ref> show that Llama2 70B demonstrates good performance in low-complexity tasks such as disease and symptom classification (CZIBase, NTCIR13-En) and medium-complexity tasks like Gene Expression classification (Geo). However, the model is challenged by higher-complexity problems, such as the BioNLP13-CG and GENIA-EE datasets. Specifically, in datasets that demand nuanced understanding and interpretation, such as the extraction of participants and outcomes from abstracts and gene ontology population (PICO, BioNLP13-GRO), the performance is low. When incorporating RAG (Retrieval-Augmented Generation) techniques, there are fluctuations in performance across datasets. While results improve on some datasets, RAG does not universally benefit the model's ability to accurately extract and classify biomedical information. § CONCLUSION We provide a comprehensive benchmark and analysis of LLMs in Medical Classification and Named Entity Recognition tasks, revealing several key insights that have significant implications for the field. We carry out a critical investigation of broad claims regarding LLM capabilities by replicating them in various contexts, domains and datasets. We find that models suffer from fundamental drawbacks in generalizability, which hinder their performance in structured information extraction tasks on domain-specific problems. This leads to standard prompting outperforming more advanced methods across both tasks. Our findings underscore the paramount importance of parametric knowledge capacity in zero-shot settings, regardless of the advanced techniques used to augment external knowledge or model reasoning. § ACKNOWLEDGEMENTS This research is supported by the National Research Foundation, Prime Minister's Office, Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) programme. The authors thank Abhinav Ramesh Kashyap, Andy T. Liu and Vijay Prakash Dwivedi for their comments and useful feedback during the work. The authors further acknowledge and are thankful for the use of the Imperial College Research Computing Service (DOI: <http://doi.org/10.14469/hpc/2232>), and the Computational Shared Facility at The University of Manchester. § APPENDIX A: DATASETS Tables <ref> and <ref> list the Hugging Face dataset cards and citations for each classification and NER dataset used in the paper, respectively. For datasets considered private, we assume that models have not been trained on these datasets due to their restricted access, which requires Data Use Agreements (DUAs) and other permissions. Consequently, the likelihood of these datasets being included in common web crawls is low. We have signed all the relevant Data Use Agreements (DUAs) and strictly adhere to their provisions. We do not redistribute the data and advise those wishing to reproduce experiments involving private datasets to consult the corresponding Hugging Face dataset cards for guidance on obtaining the necessary data.
§ APPENDIX B: COMPUTE DETAILS * Hardware used (GPU/CPU): We used a mix of different shared computational facilities with NVIDIA A100-SXM4-80GB, RTX6000 with 24GB and L40S with 48GB GPUs. Debian OS was used for all the compute servers. * Memory: The machines used had between 256 GB and 1 TB of memory. * Software and libraries used: The environment can be reproduced from the environment.yaml file in the supplementary material. * Model details: The models used have been described in detail in the main paper submission under the Models subsection of the Methodology section. * Random seed: A seed of 42 was used for all random sampling purposes.
http://arxiv.org/abs/2408.11424v1
20240821082840
EMO-LLaMA: Enhancing Facial Emotion Understanding with Instruction Tuning
[ "Bohao Xing", "Zitong Yu", "Xin Liu", "Kaishen Yuan", "Qilang Ye", "Weicheng Xie", "Huanjing Yue", "Jingyu Yang", "Heikki Kälviäinen" ]
cs.CV
[ "cs.CV" ]
Role of momentum in the generator-coordinate method applied to barrier penetration G.F. Bertsch August 26, 2024 ================================================================================== § ABSTRACT Facial expression recognition (FER) is an important research topic in emotional artificial intelligence. In recent decades, researchers have made remarkable progress. However, current FER paradigms face challenges in generalization, lack semantic information aligned with natural language, and struggle to process both images and videos within a unified framework, making their application in multimodal emotion understanding and human-computer interaction difficult. Multimodal Large Language Models (MLLMs) have recently achieved success, offering advantages in addressing these issues and potentially overcoming the limitations of current FER paradigms. However, directly applying pre-trained MLLMs to FER still faces several challenges. Our zero-shot evaluations of existing open-source MLLMs on FER indicate a significant performance gap compared to GPT-4V and current supervised state-of-the-art (SOTA) methods. In this paper, we aim to enhance MLLMs’ capabilities in understanding facial expressions. We first generate instruction data for five FER datasets with Gemini. We then propose a novel MLLM, named EMO-LLaMA, which incorporates facial priors from a pretrained facial analysis network to enhance human facial information. Specifically, we design a Face Info Mining module to extract both global and local facial information. Additionally, we utilize a handcrafted prompt to introduce age-gender-race attributes, considering the emotional differences across different human groups. Extensive experiments show that EMO-LLaMA achieves SOTA-comparable or competitive results across both static and dynamic FER datasets. The instruction dataset and code are available at <https://github.com/xxtars/EMO-LLaMA>. § INTRODUCTION Facial expressions reflect a person’s emotional state and are vital for effective interpersonal interactions. Understanding the emotional states from facial expressions is increasingly significant due to its applications, such as human-computer interaction <cit.>, healthcare aids <cit.>, and driving safety <cit.>. In recent decades, researchers have significantly advanced by expanding datasets and developing more efficient architectures. However, existing paradigms for FER face the following challenges: i) Existing methods struggle with generalization across different datasets and modalities <cit.>. ii) FER is typically divided into static facial expression recognition (image) and dynamic facial expression recognition (video), and current paradigms struggle to handle both tasks in a unified framework. iii) Existing paradigms focus on close-set recognition and lack semantic understanding and often overlook other semantic cues. Applying these paradigms to multimodal emotion understanding and human-computer interaction remains a significant challenge. MLLMs have gained significant popularity in the natural language processing and computer vision fields, offering advantages in generalization and natural language interaction. These advantages of MLLMs hold promise for overcoming the limitations of current FER paradigms. However, directly applying pre-trained MLLMs to FER still faces several challenges. As shown in Fig. 
<ref> or the table provided in the Appendix (due to space constraints), extensive zero-shot evaluations of existing open-source MLLMs on FER were conducted, revealing that current open-source MLLMs struggle with emotion understanding and significantly lag behind the most advanced closed-source MLLM, GPT-4V <cit.>. They have a considerable gap compared to the current supervised SOTA methods. Additionally, some work has attempted to leverage their rich prior knowledge for text-based emotion understanding <cit.>. Despite their success, the application of MLLMs for emotion understanding in images or videos remains under-explored. Given that facial expressions are crucial emotional cues in human face-to-face communication, this work aims to enhance MLLMs’ ability to understand facial expressions and establish a basis for future multimodal, multi-cued emotion understanding tasks. Specifically, we aim to enhance the emotion understanding capabilities of existing MLLMs through improvements in the FER task, thereby narrowing the gap between MLLM-based approaches and traditional classification paradigms. However, there exist three major challenges in the FER task when deploying MLLMs: i) There are no suitable FER instruction datasets. Existing FER datasets have either coarse-grained FER labels <cit.> or limited emotion descriptions <cit.>, and directly using these to construct instruction data would limit the diversity of LLM responses. ii) Current FER paradigms utilize pre-cropped face images or video frames, which could diminish the MLLMs’ general understanding abilities and their sensitivity to other visual cues. This also limits the potential for extending MLLMs to multimodal emotion understanding in the future. iii) Furthermore, visual features from vision encoders like CLIP <cit.> of current MLLMs struggle to capture facial information, and leaving the impact of facial priors on enhancing MLLMs for the FER task unexplored. We plan to utilize potentially useful information, such as facial embedding, landmarks <cit.>, and age-gender-race attributes which considers that different races and age groups express emotions in distinct ways <cit.>. To address these challenges, we propose an instruction-tuning approach for the FER task. We select commonly used static and dynamic FER datasets and generated suitable instruction tuning datasets with Gemini <cit.>. This instruction dataset enhances the diversity of manually constructed recognition instructions using only emotion labels. Additionally, we introduce a novel MLLM, EMO-LLaMA, specifically designed for the FER task by incorporating facial priors into a pre-trained MLLM like LLaMA-VID <cit.>. To obtain facial prior knowledge, we utilize a pre-trained face encoder and decoder to extract three types of facial features: facial embedding, landmarks, and age-gender-race attributes, which complement the general vision encoder. This work provides new insights for both the emotion and MLLM research communities. Our main contributions are summarized as follows: * To the best of our knowledge, this is the first attempt to unify image and video FER tasks using instruction tuning over MLLMs, which is challenging for traditional paradigms. * We utilize Gemini to generate an instruction dataset for five publicly available FER datasets. Specifically, it includes 295k data for image modality and 45k data for the video modality, which is a substantial amount of instructional data. This will provide significant benefits to both the MLLM and FER communities. 
* To efficiently train on FER tasks and leverage facial prior knowledge, we introduce the EMO-LLaMA model, which involves tuning LoRA on a pre-trained MLLM and incorporating three types of facial priors. Specifically, we design a Face Info Mining module to extract both global and local facial information. Additionally, we utilize a handcrafted prompt to introduce age-gender-race attributes, considering the emotional differences across different human groups. * We demonstrate the effectiveness of EMO-LLaMA on six FER datasets, showing that EMO-LLaMA achieves SOTA-comparable or competitive performance. In addition, our experiments demonstrate the generalization capabilities of our approach, which are lacking in current paradigms. § RELATED WORK Facial Expression Recognition. Currently, FER can be roughly divided into two types: Static Facial Expression Recognition (SFER) and Dynamic Facial Expression Recognition (DFER). SFER <cit.> mainly focuses on recognizing expressions from static images, whereas DFER <cit.> concentrates on recognizing expressions from dynamic image sequences or videos. Traditional methods typically handle SFER and DFER independently, lacking the ability to address facial expression recognition within a unified framework. Additionally, most existing FER frameworks are primarily focused on classification tasks and face challenges in generalization <cit.> and lack semantic information aligned with natural language <cit.>. Additionally, some research suggests that different races and cultures exhibit variations in the facial expressions of emotions, but existing methods have not taken this into account <cit.>. These factors limit their applicability to emotion reasoning or multimodal emotion understanding tasks. Multimodal Large Language Models. MLLMs are getting popular in multi-modal content understanding, e.g., images <cit.> and videos <cit.>. These models are built on top of LLMs <cit.> and transform visual (videos and images) and textual data into sequences of tokens for input, resulting in generative modeling of downstream multimodal understanding tasks through next-token prediction. Specifically, for image-based MLLMs, image tokens are typically encoded using CLIP <cit.>. Similarly, video tokens are encoded using CLIP with or without temporal modeling modules or by utilizing dedicated video encoders <cit.>. Emotion Understanding with LLMs. There have been initial explorations into text-based emotion understanding using LLMs. DialogueLLM <cit.> leverages GPT-4V <cit.> to extract visual information and generate textual descriptions, then combines them with contextual content to address the task of emotion recognition in conversations. EmoLLMs <cit.> focused on both classification and regression tasks in sentiment analysis and introduced a multi-task affective analysis instruction dataset for instruction tuning. More relevant to us are AffectGPT <cit.> and EmoLA <cit.>. Although AffectGPT <cit.> attempted to address the emotion reasoning task, it lacks sufficient data due to the difficulty of annotation, having conducted only an initial exploration with 100 data samples. Additionally, AffectGPT did not incorporate model designs related to tasks. EmoLA <cit.> relates to facial action units <cit.> and SFER, but it did not align with the metrics of traditional paradigms and was tested only on the RAF-DB <cit.>, lacking extensive validation across a broad array of image and video FER datasets. 
One potential approach, similar to HuggingGPT <cit.>, is to directly integrate existing paradigms with LMMs. Although this has not yet been explored, we believe it would still face the inherent limitations of current paradigms. § EMO-LLAMA In this section, we present EMO-LLaMA, a multimodal large language model with facial priors. Fig. <ref> illustrates an overview of the EMO-LLaMA framework. Beyond the general-purpose MLLM with a vision encoder, projector, and large language model, our approach integrates facial priors to enhance the MLLM’s capability of extracting facial information. Specifically, a frame V_t ∈ℝ^H × W × 3 at time t in a video (or an image) is encoded by a visual encoder known as CLIP <cit.> to produce the visual embedding X^V_t ∈ℝ^N × C. Here, N = H/p×W/p and C indicate the number of image patches and embedding channels, respectively. The patch size p is typically set to 14 for ViT-based backbones <cit.>. Notably, the visual embedding X^V_t may fail to capture facial structure information because CLIP is trained with general image-text pairs rather than facial-related datasets. §.§ Progressive Incorporation of Facial Prior Knowledge To incorporate facial prior knowledge into MLLMs, we make use of a pre-trained facial expert to extract facial priors and progressively integrate them into the existing MLLM. Facial Priors Extraction. To obtain a face image V^F_t ∈ℝ^H^F × W^F × 3 suitable for processing by a facial expert, we employ a pre-trained face detector, MTCNN <cit.> [<https://github.com/timesler/facenet-pytorch>], to detect and crop the face region online. Then, we adopt FaceXFormer <cit.>[<https://github.com/Kartik-3004/facexformer>] for facial analysis, which includes face parsing, landmark detection, head pose estimation, attribute recognition, and the estimation of age, gender, race, and landmark visibility. In this work, we utilize facial embedding from the encoder, landmarks, as well as age, gender, and race attributes from the decoder, integrating them with the existing MLLM. Other facial priors can also be considered and will be explored in future work. Specifically, the facial embedding feature extracted by the facial encoder f^ENC_face(·) can be expressed as: X^F_t = f^ENC_face(V^F_t). Since FaceXFormer is trained on tasks like face parsing and landmark detection, the facial embedding X^F_t ∈ℝ ^N^F × C includes more structural and low-level details, which CLIP lacks. Clue Aggregator. We employ a Q-Former <cit.> to dynamically capture instruction-aware visual and facial hidden features to enrich fine-grained clues. Specifically, Q-Former takes the user instruction T_Q, learnable queries Q∈ℝ ^M × C, general visual embedding X^V_t and facial embedding X^F_t as input, where M denotes the number of queries, typically set to 32. During training, the module enhances task-specific feature extraction as query Q_t. To reduce the burden of visual tokens on the LLM, we follow LLaMA-VID <cit.> to generate Ĥ^T_t ∈ℝ ^1 × C by a cross attention and pool operation as: Ĥ^T_t = Pool(Softmax(Q_t X^V_t^⊤)X^V_t), where the Softmax and Pool are conducted along the N and M dimensions, respectively. Face Info Mining. Additionally, we introduce a Face Info Mining module to enhance the general vision embedding with global and local facial information, as shown in Fig. <ref>. 
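As a concrete reading of the Clue Aggregator step above (before the Face Info Mining details that follow), the sketch below reproduces the cross-attention-and-pool operation that compresses a frame's visual embedding into one context token. The tensor sizes and the choice of mean pooling are illustrative assumptions and may differ from the actual LLaMA-VID implementation.

```python
import torch
import torch.nn.functional as F

def context_token(q_t: torch.Tensor, x_v: torch.Tensor) -> torch.Tensor:
    """Compress one frame's visual embedding into a single context token.

    q_t : (M, C) instruction-aware queries from the Q-Former (M = 32 in the text).
    x_v : (N, C) general visual embedding of the frame from the CLIP encoder.
    """
    attn = F.softmax(q_t @ x_v.T, dim=-1)    # (M, N): softmax over the N patches
    mixed = attn @ x_v                       # (M, C): patch features re-weighted per query
    return mixed.mean(dim=0, keepdim=True)   # pool over the M queries -> (1, C)

# Toy shapes only; in practice N = H/p * W/p depends on the CLIP patch grid.
h_t = context_token(torch.randn(32, 768), torch.randn(256, 768))  # torch.Size([1, 768])
```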
To maintain the efficiency of LLMs by limiting the number of final visual tokens and to effectively capture the structural information from facial embedding, we plan to interact general embedding and facial embedding before pooling. In particular, the face info mining process can be formulated as: Ĥ^V_t = Pool( FFN(GFAttn(SAttn(X^V_t), X^F_t)) +LFAttn(SAttn(X^V_t), X^F_t, X^L_t))), where the SAttn and FFN are self attention and feed-forward network <cit.>. GFAttn and LFAttn are designed Global Face Attention and Local Face Attention, which are used to extract global and local facial information. The LFAttn leverages Action Units <cit.> to extract local information. More equations and details about the Face Info Mining module can be found in the Appendix. Landmark Embedding and Handcrafted Prompt of Facial Evidence. Furthermore, we utilize the facial prior information extracted by the decoder f^DEC_face(·), specifically focusing on landmarks X^L_t and Age-Gender-Race AGR, expressed as: X^L_t, AGR = f^DEC_face(f^ENC_face(V^F_t)). Instead of directly incorporating facial evidence into the instruction, we design a handcrafted prompting method to guide the model in selectively using the AGR information. This approach helps avoid potential negative influences from imperfect predictions by FaceXFormer. We use the prompt: “According to the specific question, you are allowed to use or partially use the following information: AGR” to instruct the LLM whether or not to use the AGR information to answer the current question. Instruction Tuning with LoRA. After obtaining pooled Q-Former embedding Ĥ^T_t, pooled visual embedding Ĥ^V_t, and landmark embedding X^L_t, we utilize the multi-layer perceptron (MLP) projector, to project these features to the token embedding space: H^T_t = MLP^T(Ĥ^T_t), H^V_t = MLP^V(Ĥ^V_t), H^L_t = MLP^L(X^L_t). After obtaining the Q-Former embedding token H^T_t, the visual embedding token H^V_t, and landmark prior token H^F_t, we concatenate them together to represent the frame at time t. Along with frames at other timestamps, the entire video sequence is translated into the language space in token format, which is then used to generate responses from LLMs. In this paper, our work is built upon LLaMA-VID <cit.>, although other MLLMs could also be used. Considering that our instruction dataset is task-specific, we utilize LoRA <cit.> for fine-tuning. The overall parameters to be optimized are Θ, as shown in Fig. <ref> with < g r a p h i c s > . We optimize these parameters following the autoregressive way, and the likelihood of the target output T_A conditioned on video V, facial priors and questions T_Q is given by: p(T_A | V, X^F, X^L, AGR, T_Q) = ∏_i=1^L p_Θ(x_i |V, X^F, X^L, AGR, T_Q, <i, T_A, <i), where T_Q, <i and T_A, <i are the instruction and answer tokens in all turns before the current prediction token x_i, respectively. §.§ Gemini-assisted Emotion Visual Instruction Data Generation We first collect currently publicly available FER datasets, including FERPlus <cit.>, RAF-DB <cit.>, AffectNet <cit.>, DFEW <cit.>, FERV39K <cit.>, and MAFW <cit.>. Then, we use the Gemini 1.0 Pro Vision API <cit.> to generate conversation instructions, further enriching the instruction dataset to enhance the MLLM’s response to face-related questions. Since the topic of FER Visual Instruction Tuning is still in its infancy, no guidelines have been proposed yet for constructing emotion instruction data. 
We follow the approach of LLaVA <cit.>, leveraging its recent successes with machine-generated instructions. We use different pipelines for images and videos. For images, we utilize Gemini to generate objects and facial-related questions, uniquely providing facial expression labels and the image. When designing FER-related conversations, Gemini is prompted to base them on the provided facial expression labels. For videos, we additionally provide the central frame and a description generated per second by LLaVA to Gemini. Similarly, when designing FER-related conversations, they are based on the provided facial expression labels. Due to issues such as API calls, the final number of instructions does not exactly match the number of original data. Notably, we do not generate instructions for FERPlus because it consists of grayscale images with a resolution of 48, which we believe have limited utility. More details about the pipeline visualization, as well as statistics of the collected data, prompts for LLaVA and Gemini, and generated instruction data, can be found in the Appendix. Finally, we generated 295k data for the image modality and 45k for the video modality, providing a substantial amount of instructional data. In Fig. <ref>, we present two examples of the generated instruction data. § EXPERIMENTS §.§ Implementation Details We initialize all the frozen weights of Emo-LLaMA with LLaMA-VID-7B <cit.> and FaceXFormer-B <cit.>, and only tune the trainable parameters during the training stage. We train our Emo-LLaMA for one epoch using a combination of category and conversation instruction data, optimized by AdamW with a learning rate of 2e-4. The rank of LoRA is set to 128. We use an imbalanced dataset sampling strategy for category instructions. The parameters for our training with LoRA adhere to the default settings established by LLaVA-1.5. For video input, we extract frames at a speed of 1 FPS. The model is trained using 8×AMD MI250X GPUs. More details can be found in the Appendix. Database and Protocols. We perform experiments on three SFER and three DFER datasets: RAF-DB <cit.>, FERPlus <cit.>, AffectNet <cit.>, DFEW <cit.>, FERV39K <cit.>, and MAFW <cit.>. We use accuracy and recall to evaluate performance for FER, following previous work. Additionally, we introduce three other multimodal emotion understanding datasets and a micro-expression recognition, performing FER in a zero-shot setting with only visual input. We hope that the experiments on these datasets could explore the generalization of MLLMs and lay the groundwork for further exploration into multimodal emotion understanding. These datasets include MELD <cit.>, MER <cit.>, CMMA <cit.>, and CASME 2 <cit.>. §.§ Experiment Results Zero-shot Evaluation for Open-source MLLMs. To evaluate the performance of the existing MLLMs, we compare Video-ChatGPT <cit.>, Video-LLaMA <cit.>, Chat-UniVi <cit.>, LLaMA-VID <cit.>, and OneLLM <cit.> in a zero-shot setting. To ensure that MLLMs, which are not trained on close-set recognition tasks, can effectively respond with the category in close-set, we use a prompt for guidance: “PLEASE ENSURE that you start your answer with `My choice is: ' FIRST and select ONLY ONE WORD from the provided list.” This is to prevent them from generating unprocessable answers and helps align with existing metrics. The results are shown in Fig. <ref> and the table in the Appendix. 
We also report the performance of GPT-4V <cit.> from “GPT-4V with emotion” <cit.>, representing the best performance of MLLMs on FER tasks. The results indicate that existing MLLMs still lack the ability to effectively understand facial expressions and have a significant gap compared to GPT-4V. On the other hand, it demonstrates that despite being the most advanced closed-source MLLM, GPT-4V still exhibits significant performance gaps compared to specialized supervised SOTA models in handling downstream tasks such as FER. Comparison on FER Datasets. We conduct a comparative experiment with the latest traditional methods on FER datasets, as presented in Tab. <ref>. Our EMO-LLaMA achieves the SOTA-comparable or even better performance on several FER datasets in terms of accuracy and unweighted recall. The slightly poorer performance in weighted recall could be due to the use of imbalanced dataset sampling strategy for category instructions. This strategy makes the model pay more attention to the tail-end categories, which affects the weighted recall. These results demonstrate significant potential of MLLMs in addressing the FER and emotion understanding problem. Despite slightly inferior performance on some datasets, we believe the possible reasons include biased emotion annotations in the dataset and factors related to training. It is worth noting that most of supervised methods are specifically designed for FER tasks and are trained for dozens of epochs, whereas our EMO-LLaMA is easily adaptable to other tasks (e.g., emotion reasoning and multimodal emotion recognition) due to the high flexibility provided by instruction tuning and MLLMs. Additionally, our approach integrates FER with natural language, which traditional paradigms lack. Generalization Capability. We conduct cross modality zero-shot validation on image and video datasets, and zero-shot experiments on several extra datasets to verify the emotion understanding generalization capability of MLLMs on FER tasks, as shown in Tab. <ref>, Tab. <ref>, and Tab. <ref>. For cross modality validation, we perform two experiments: firstly, the model is trained on an image dataset and evaluated on a video dataset; secondly, we reverse the process. In the image-to-video experiment, our model outperforms GPT-4V. However, in the video-to-image setting, its performance was slightly inferior to GPT-4V. We think there are two possible reasons for this difference in generalization: i) the number of video samples is significantly smaller than that of image samples, leading to this disparity. ii) GPT-4V may have been trained on related FER data, resulting in better performance on images. These experiments, along with those on additional datasets, demonstrate good generalization capability and suggest the potential to extend the method to multimodal emotion understanding tasks. §.§ Ablation Studies Facial Priors. We explore the effectiveness of facial priors on the MAFW dataset, as shown in Tab. <ref>. The results demonstrate that incorporating various facial priors consistently improves overall performance. We believe this additional information includes facial structure details provided by facial embedding and landmark priors, which are lacking in the general visual encoder. We also conduct an ablation study on the Face Info Mining module, as shown in Tab. <ref>. More detailed ablation study about this module can be found in the Appendix. 
When facial embedding is directly pooled and concatenated with the general embedding, the performance is similar to the baseline. This may be because pooling disrupts the fine-grained information, preventing it from effectively complementing the general embedding. Instruction Data Type. The ablation study outlined in Tab. <ref> provides an analysis of the impact that different instruction data types have on performance. Initially, the model, referred to as LLaMA-VID <cit.>, operates without the integration of the two types of instructional data and establishes a baseline for AffectNet and MAFW. This foundational performance is significantly enhanced with the inclusion of category data, which alone leads to a substantial increase in accuracy. The introduction of conversation data further amplifies this effect, underscoring the value of conversational context in enhancing the model’s predictive capabilities. This indicates that a diverse approach to instructional data significantly enhances model comprehension and performance. Therefore, to achieve more complex emotion understanding, we need to develop more sophisticated instructional datasets, such as those involving emotion reasoning. Descriptive Text. MAFW provides short descriptive texts that include information on the environment, body movements, facial action units, and other emotional elements, which can be used for both video emotion captioning and FER. By leveraging these manually annotated descriptions, we investigate text-enhanced FER using MLLMs during inference. As shown in Tab. <ref>, our experiment indicates that incorporating additional text significantly improves performance, an advantage that traditional paradigms lack. Our approach can easily leverage this additional information to further enhance performance, demonstrating the potential of MLLMs for improved emotion understanding. Imbalanced Dataset Sampling Strategy. We find that MLLMs also suffer from the long-tail problem when training on the classification tasks, as shown in statistic table in the Appendix and Tab. <ref>. To mitigate this issue, we use an imbalanced dataset sampling strategy[<https://github.com/ufoym/imbalanced-dataset-sampler>]. This is particularly important for AffectNet, which is a highly imbalanced dataset with a large volume of data. We notice that without this strategy, the performance after training could be worse than in a zero-shot setting or could fail to train properly. § CONCLUSION In this paper, we introduce MLLMs to FER to address the limitations of current paradigms. We firstly leverage existing FER datasets to generate a large instruction dataset with Gemini, which will provide significant benefits to both the MLLM and FER communities. Furthermore, we introduce EMO-LLaMA, which effectively utilizes facial embedding, landmarks, and AGR facial priors. Additionally, we design the Face Info Mining module to extract both global and local facial information. Extensive experiments across six FER datasets demonstrate the effectiveness of EMO-LLaMA, showing its strong generalization capabilities and advantages in neural language for FER tasks. In the future, we intend to extend our method to additional emotion-related tasks, such as emotion recognition in conversations, multimodal emotion understanding, and so on. Moreover, incorporating other modalities, such as speech and audio, holds the potential for further performance improvement.
http://arxiv.org/abs/2408.11753v1
20240821162313
Small Sample Behavior of Wasserstein Projections, Connections to Empirical Likelihood, and Other Applications
[ "Sirui Lin", "Jose Blanchet", "Peter Glynn", "Viet Anh Nguyen" ]
math.ST
[ "math.ST", "stat.ME", "stat.TH" ]
http://arxiv.org/abs/2408.10947v1
20240820153630
Dr.Academy: A Benchmark for Evaluating Questioning Capability in Education for Large Language Models
[ "Yuyan Chen", "Chenwei Wu", "Songzhou Yan", "Panjun Liu", "Haoyu Zhou", "Yanghua Xiao" ]
cs.AI
[ "cs.AI", "cs.CL", "cs.CY" ]
§ ABSTRACT Teachers are important for imparting knowledge and guiding learners, and the role of large language models (LLMs) as potential educators is emerging as an important area of study. Recognizing LLMs' capability to generate educational content can lead to advances in automated and personalized learning. While LLMs have been tested for their comprehension and problem-solving skills, their capability in teaching remains largely unexplored. In teaching, questioning is a key skill that guides students to analyze, evaluate, and synthesize core concepts and principles. Therefore, our research introduces a benchmark to evaluate the questioning capability of LLMs as teachers in education through evaluating their generated educational questions, utilizing Anderson and Krathwohl's taxonomy across general, monodisciplinary, and interdisciplinary domains. We shift the focus from LLMs as learners to LLMs as educators, assessing their teaching capability by guiding them to generate questions. We apply four metrics, including relevance, coverage, representativeness, and consistency, to evaluate the educational quality of LLMs' outputs. Our results indicate that GPT-4 demonstrates significant potential in teaching general, humanities, and science courses; Claude2 appears more apt as an interdisciplinary teacher. Furthermore, the automatic scores align with human perspectives. § INTRODUCTION Large language models (LLMs) have demonstrated great performance in various natural language processing (NLP) tasks, including question answering <cit.>, information retrieval <cit.>, reasoning <cit.>, and generation <cit.>. Beyond these general NLP applications, LLMs are also widely used in other domains, such as education. In the educational field, LLMs can now be used as substitutes for teachers. They can support automated teaching or assisted learning applications, thereby alleviating the pressure on human teachers. Additionally, LLMs can recommend appropriate elective courses based on a student's knowledge state, learning style, and interests, automatically generate practice problems of corresponding difficulty levels, and identify areas where a student is struggling in order to provide targeted improvement. However, the capability of questioning is a crucial aspect of the educational field. As LLMs take on the role of teachers, can they pose high-quality questions like human educators? Therefore, evaluating what constitutes a high-quality question in education becomes necessary. According to Anderson and Krathwohl's educational taxonomy <cit.>, we consider that high-quality questioning in the educational field must meet the following characteristics: i) achieve a higher level across the six domains of memory, understanding, application, analysis, evaluation, and creation; ii) be relevant to the given context; iii) comprehensively cover the content of the context; and iv) reflect the important knowledge of the context. We consider that questions meeting these characteristics can effectively assess students' knowledge levels, and LLMs capable of posing such questions can assume the role of competent human educators.
The first characteristic is the most basic requirement for LLMs to act as human teachers, while the following three characteristics measure the excellence of LLMs in their role as a teacher. Evaluating and enhancing the capability of LLMs to generate questions of high quality standards in the educational domain requires a benchmark. However, previous studies have mainly viewed LLMs from a student's perspective, focusing on tasks like reading comprehension <cit.> and exam evaluations <cit.>. However, these tasks focus on adopting contexts to passively answer questions or make reasoning, and these tests treat LLMs as students, assessing their abilities by how they answer questions, while the LLM's questioning capability through generating educational questions is under-studied. Current education-related research is far from adequate to determine LLMs' question raising capability as a teacher, and there isn't a benchmark that studies the overall teaching abilities of LLMs, seeing them as teachers. Although some role-playing tasks <cit.> mimic professional dialogues but don't truly assess the LLMs' teaching capabilities. Therefore, if we want LLMs to assist in teaching effectively, we need to evaluate and enhance their teaching abilities, as possessing knowledge and guiding others to learn are distinct skills. Therefore, in this paper, we have developed a benchmark for assessing whether LLMs generate high-quality questions in the field of education, guided by professional educational theories. Unlike general questioning, as shown in Fig. <ref> (a), our benchmark requires that the generated questions not only be fluent and readable but also meet the fundamental characteristics proposed earlier (i.e. the first characteristic), as shown in Fig. <ref>(b). Specifically, we draw on Anderson and Krathwohl's educational taxonomy <cit.> to prompt LLMs to generate questions at six levels for each context. We select tasks from three domains, including general, single-discipline, and interdisciplinary domains, to more comprehensively assess the strengths of LLMs as teachers in various fields. Based on the four characteristics proposed earlier, we have also designed four evaluation metrics: consistency, relevance, coverage, and representativeness, to assess the value of questions posed by LLMs in the educational domain, thereby comprehensively evaluating the questioning capability of LLMs as teachers in education through evaluating their generated educational questions. Our experiments reveal that LLMs like GPT-4, Claude2, and GPT-3.5 demonstrate good questioning capability across domains as teachers in education through evaluating their generated educational questions. In summary, our contributions are threefolds: * We introduce the problem of evaluating questioning capability in education as a teacher for LLMs through evaluating their generated educational questions, building a framework based on educational theory that includes six cognitive levels and tasks from three different domains. * We establish four evaluation metrics to assess the questioning capability in education as a teacher of LLMs through evaluating their generated educational questions. * We conduct experimental evaluations of 11 LLMs, providing quantitative standards and subject orientations for each LLM's questioning capability as a teacher. § DATASETS AND TASK SETUPS We propose a benchmark named Dr.Academy, which has tasks from three domains. 
The first two request LLMs to generate questions in the general and monodisciplinary domain, respectively, based on the six levels of Anderson and Krathwohl's educational taxonomy <cit.>, including memory, understanding, application, analysis, evaluation and creation. The third one requests LLMs to generate questions that intersect multiple subjects. The overview of Dr.Academy is shown in Fig. <ref>. §.§ Context Construction Initially, we collect 10,000 contexts from the general domain and produce an additional 10,000 contexts specifically for the monodisciplinary domain. In the general domain, the contexts are sourced from the SQuAD dataset <cit.>, an extractive reading comprehension dataset derived from Wikipedia articles, and are utilized as the foundation for the LLMs to generate questions. In the monodisciplinary domain, we generate corresponding contexts for each of the multiple-choice questions in the MMLU dataset <cit.>, which covers a broad spectrum of subjects, with GPT4 [https://chat.openai.com/]. These contexts include essential information related to the question and all candidate choices. The prompt for generating contexts is shown in Table <ref>. We also conduct manual evaluations on the generated contexts for the MMLU questions. In this process, we engage three graduate students from different disciplines to perform the evaluations. For each discipline, we randomly select 1% of the questions to undergo manual assessment. If these entries do not achieve a manual evaluation score of 4 (on a scale of 1-5), we will regenerate the contexts. §.§ Task Setup We have designed three tasks and each task requires LLMs to generate questions catering for the corresponding domain. Finally, these generated questions will be used to evaluate the questioning capability in education as a teacher of LLMs. The prompt for generating questions is shown in Table <ref> (row “Generation”). General domain tasks request an LLM to generate questions with the collected contexts from SQuAD based on the six levels of Anderson and Krathwohl's educational taxonomy <cit.>, including memory, understanding, application, analysis, evaluation and creation. For instance, in Fig. <ref> (a), at the memory level, a question might ask specific details like “What are the religious features of a school building?”; at the understanding level, it could be about reasons such as “Why is a school considered to have Catholic characteristics?”; at the creating level, questions could be more open-ended, involving imagination and design, etc. This task is designed to evaluate “which LLM is more suitable to be a general course teacher”. Monodisciplinary domain tasks request an LLM to generate questions with the generated contexts from MMLU, focusing on either humanities (like history, geography) or sciences (like physics, chemistry), based on the same six educational levels. In science, for instance, a memory-level question might ask about element symbols and formulas, such as “What is the chemical formula for ammonium sulfate?”; an application-level question related to real-world phenomena, like “choose a substance that reacts with hydrochloric acid to produce carbon dioxide”. This task is designed to evaluate “which of the two LLMs is more suitable to act as a humanities teacher and a science teacher.” Interdisciplinary domain tasks request an LLM to generate questions that cross multiple subject areas, reflecting each subject's characteristics. For example, in Fig. 
<ref>(c), when merging literature and geography, a question might seek an explanation of the geographical phenomenon described in a poem's line. In combining art and geography, a question might ask about the geographical features represented in a song. A less successful example of an interdisciplinary question is one where the involved disciplines are unrelated, such as asking about Einstein's theory of relativity and then about the Cauchy inequality in mathematics. This question touches on physics and mathematics but lacks a meaningful connection between the two, making it not truly interdisciplinary. This task is designed to evaluate “which LLM is more suitable to act as a interdisciplinary teacher.”, qualifying if LLMs solving the problem requires understanding knowledge from both subjects. §.§ Evaluation metrics We adopt consistency, relevance, coverage, and representativeness to evaluate LLMs' generated questions in the general and monodisciplinary domains, respectively, while using relevance and representativeness to evaluate questions in the interdisciplinary domain. The difference of metrics selection is because questions in interdisciplinary domain lack a comprehensive contextual framework. For instance, in reality, there is no distinct academic discipline like “historical geography.” This absence of a well-defined, unified context means that metrics such as coverage and consistency do not apply. To validate the effectiveness of these metrics, we consult ten experts in education to rate the effectiveness of these metrics within the field of education on a scale of 1 to 5. They consistently award these metrics scores of 4 and above, which leads us to believe that these metrics are meaningful for evaluating questions in education. We also align these metrics with manual evaluations in Figure 6, indicating that our metrics are indeed significant within the field of education. Additionally, experiments are conducted to compare these metrics with human scoring in order to corroborate the validity and reasonableness of them (see Fig. <ref>). The prompt for evaluating questions is shown in Table <ref> (see row “Evaluation”). Specifically, consistency is to assess whether the question accurately corresponds to a pre-defined level of the educational taxonomy, relevance is to assess whether the question is related to the provided text content or theme, coverage is assessed by determining if all generated questions based on a context encompasses a major portion (over 50%) of this given context, representativeness evaluates whether a question captures the main content or core ideas of the text. Metrics are rated on a binary scale, with 1 for criteria met and 0 for not met, as shown in Fig. <ref> and Table <ref>. We adopt GPT-4 to score each question three times. A question that scores 1 in two out of three instances meets the metric's requirement <cit.>. § EXPERIMENTS In this section, we conduct extensive experiments to evaluate different LLMs' questioning capability through evaluating their generated educational questions in the proposed Dr.Academy. §.§ Experimental Setups Our experiments are conducted on 8 Nvidia A100 GPUs, each with 80GB of memory, and we use PyTorch [https://pytorch.org/] in Python [https://www.python.org/]. We set the maximum sequence length for both input and output sequences to maximum 1000 tokens. 
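Returning to the scoring protocol described in the evaluation-metrics subsection, the sketch below implements the two-out-of-three voting rule; the wording of the judge prompt and the ask_judge callable are placeholders (the actual prompt is the one in Table <ref>), and only the majority-vote logic is taken from the text.

```python
from typing import Callable

METRICS = ["consistency", "relevance", "coverage", "representativeness"]

def judge_metric(ask_judge: Callable[[str], int], question: str, context: str,
                 metric: str, n_votes: int = 3) -> int:
    """Binary verdict for one metric: 1 if at least two of the three GPT-4 votes are 1."""
    prompt = (f"Context: {context}\nQuestion: {question}\n"
              f"Does this question satisfy the '{metric}' criterion? Answer 1 or 0.")  # placeholder wording
    votes = [ask_judge(prompt) for _ in range(n_votes)]
    return int(sum(votes) >= n_votes // 2 + 1)

# Example with a stubbed judge that always answers 1 (a real run would call GPT-4 here).
verdict = judge_metric(lambda p: 1, "Why is the school considered Catholic?", "...context...", "relevance")
```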
§.§ Datasets, Baselines and Metrics The baseline LLMs for this evaluation are BLOOM-7B <cit.> BLOOM-176B <cit.>, Claude2 <cit.>, Falcon-7B <cit.>, Falcon-180B <cit.>, GPT3.5 <cit.>, GPT4 <cit.>, LLaMA2-7B <cit.>, LLaMA2-70B <cit.>, Vicuna-7B <cit.>, and Vicuna-33B <cit.>. For the task of manual evaluation, we enlist the help of three graduate students specializing in education. We begin by informing the annotators about the purpose of each task and the scoring guidelines. Following this, we request them to score the responses. During the evaluation phase, we randomly select 1000 questions generated by each LLM for each task and ask three volunteers to manually evaluate the generated responses using the same criteria applied to GPT-4. To ensure the reliability and validity of the human ratings, we calculate the Inter-rater Agreement using Krippendorff’s Alpha (IRA). For ratings that exhibit low agreement (< 0.7), we remove the particular statement from consideration and replace it with a new one. This method guarantees the precision and consistency of our human assessment. §.§ Main results Question 1: Which LLM is more suitable to be a general course teacher? Answer 1: GPT4! The term “suitable” is used to assess the effectiveness of each LLM in different academic subjects. This evaluation helps in identifying which LLMs perform best in specific educational disciplines. In the general domain tasks, various LLMs demonstrate diverse performances as shown in Table <ref> and Table <ref>. Specifically, GPT4 achieves a perfect score of 100% in both consistency and relevance, indicating its strong capability in understanding task requirements and generating relevant questions. However, its coverage score of 54.5% suggests there is room for improvement in generating questions that encompass more content. In representativeness, with a score of 80.1%, GPT-4 shows a good grasp of the context's core content and viewpoints, crafting questions with depth and breadth. BLOOM-176B and Claude2 also score perfectly in relevance, reflecting their excellent performance in linking questions to the context's themes and content. However, their lower scores in coverage and representativeness indicate potential for improvement in capturing the full extent and core insights of the texts. Moreover, “Aver” in Table <ref>, Table <ref> and Table <ref> represent the average result of the corresponding dimensions under each domain, which are obtained using in-context learning (i.e. ICL). ICL is to introduce a human-written sample into the prompt which typically improves LLMs' performance across all metrics, while most LLMs show a decline in the 0-shot setting, demonstrating the critical role of ICL in enhancing the quality of question generation. Question 2: Which of the two LLMs is more suitable to act as a humanities teacher and a science teacher? Answer 2: Both are GPT4! In the monodisciplinary domain tasks, LLMs are compared based on their performance in humanities and sciences as illustrated in Table <ref> and Table <ref>. The results reveal that the majority of the LLMs perform marginally better in the scientific disciplines compared to the general domain. Specifically, GPT4 excels across all metrics, particularly in the science disciplines, where it scores higher than in the humanities, indicating great capability in handling science content. 
Following closely is Claude2, which nearly matches or equals GPT4 in Relevance and representativeness in the humanities, demonstrating a deep understanding and effective processing of humanities content. Claude2 also maintains a high performance in the science disciplines. GPT3.5 shows competitive strength across the four metrics, especially in relevance and representativeness within the science subjects, approaching the leading performance of GPT4. BLOOM-176B scores significantly higher in consistency within science compared to the humanities, and also demonstrates good capability in coverage and representativeness, suggesting its strengths in processing logical and scientific data. Question 3: Which LLM is more suitable to be a interdisciplinary teacher? Answer 3: Claude2! Results of LLMs in the interdisciplinary domain tasks are shown in Table <ref> and Table <ref>. It shows that Claude2 outperforms other LLMs with scores of 89.1% in relevance and 93.3% in representativeness. Following closely is GPT4, with scores of 87.8% in relevance and 91.2% in representativeness, also indicating strong performance. GPT3.5 and LLaMA2-70B also show high scores, particularly in representativeness, suggesting their capability in understanding key textual content and generating in-depth questions. On the other hand, BLOOM-7B, Falcon-7B, and Vicuna-7B perform relatively poorly on both metrics. Specifically, BLOOM-7B scores below 40% in both relevance and representativeness, which may suggest a need for further enhancement in understanding interdisciplinary content and generating high-quality questions. Question 4: Which LLM is more suitable to be a all-around teacher? Answer 4: GPT4! We also comprehensively compare the performance of LLMs in three tasks as shown in Table <ref> and Fig. <ref>. Specifically, in the general domain tasks, GPT4 scores the highest and ranks first. In the monodisciplinary domain tasks, including humanities and science, GPT4 also has the best performance. In the interdisciplinary domain tasks, Claude2 occupies the first rank. Finally, for the comprehensive rating, GPT4 ranks first again with the highest score. Overall, GPT4 shows the best performance in most tasks, while Claude2 also demonstrates strong capability in certain tasks. Overall, GPT4, Claude2, and GPT3.5 perform well in these assessments, demonstrating their versatility and adaptability as high-performance models. On the other hand, BLOOM-7B and Falcon-7B tend to perform weaker in most fields, which may make them more suitable for specific application scenarios. Question 5: What's the relationship among metrics for various LLMs? Answer 5: Pairwise positive correlations! We also analyze the relationship among four metrics in three tasks as shown in Fig. <ref>. Fig. <ref> (a) and Fig. <ref> (b) represent the relationship between four metrics of question quality generated by different LLMs in the general and monodisciplinary domains, respectively. The size of the circle indicates coverage, with larger circles covering more content of the text. The darker the color of the circle, the higher the relevance of the questions to the text. The “Average_1” in the first graph represents a group of LLMs, which are zoomed in on the second graph, and the “Average_2” in the second graph represents a subset of these LLMs, which are further examined in the third graph. In Fig. <ref> (a), analyzing the general domain tasks, we see a positive correlation between relevance and consistency across all three graphs. 
Representativeness also shows a positive correlation with relevance and consistency, but the correlation is weaker. As relevance and consistency increase, the LLMs have darker colors and larger circles, indicating higher relevance and coverage. Although larger LLMs like BLOOM-176B show good coverage, not all models with large coverage have high relevance. For example, Falcon-180B does not perform as well as BLOOM-176B in relevance, suggesting a need for balance between the breadth of text coverage and the accuracy of question generation. In the third graph, LLMs like GPT4 maintain high relevance while also achieving good coverage. In Fig. <ref> (b), for the monodisciplinary domain tasks, the correlations between the four metrics are not as pronounced as in the general domain. The third graph shows little color variation, indicating that representativeness does not change much with increased relevance and consistency. However, there are LLMs like GPT4 that stand out in all metrics, shown in the top right corner with a dark color and large size. But GPT3.5, while showing good representativeness and relevance, has only average consistency. In Fig. <ref> (c), analyzing the interdisciplinary domain tasks, generally shows a positive correlation between relevance and representativeness, although it's not as clear in the second graph. Overall, LLMs like GPT4, Claude2, BLOOM-176B, and GPT3.5 perform well across all four metrics, while the 7B series models tend to perform less well. The metrics also tend to show positive correlations with each other. Question 6: Is the automatic scores generated by GPT4 agree with human perspectives? Answer 6: Yes, the Pearson correlation coefficient reaches 0.947 and Spearman rank correlation reaches 0.87! We adopt Pearson correlation coefficient that normalized to a 1-100 scale to investigate the difference between automatic and human scores for different LLMs as shown in Fig. <ref>. The human scores for each metric and the corresponding agreement of human annotators on these metrics are listed in Table <ref>. We find that automatic and human evaluations for LLMs generally agree, showing a high positive correlation with the Pearson correlation coefficient reaching 0.947 and Spearman rank correlation reaching 0.870. Specifically, GPT4 performs excellently in both automatic and human scoring, with minimal difference, indicating widespread recognition of its capability. Similarly, Claude2 has close scores in both evaluations, indicating balanced performance in assessment tasks. It's important to clarify that the process of generating questions and scoring them is separate. During the scoring phase, there is no knowledge of which LLM generates which question. We believe that even if other LLMs are used to evaluate GPT-4's performance against a comparatively weaker 7b model, the results would still favor GPT-4, a conclusion also supported by human evaluations. The findings suggests that automatic scoring has the potential to partially replace human scoring for evaluating questioning capability in education as a teacher of LLMs through evaluating their generated educational questions. §.§ Case study We present a set of general domain questions generated by GPT4, identified as the leading LLM in this area, in Fig.<ref>. The top-performing LLMs in monodisciplinary (Humanities), monodisciplinary (Science), and interdisciplinary domains have their questions displayed in Fig.<ref>, Fig.<ref>, and Fig.<ref>, respectively. 
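For the agreement analysis in Question 6, the correlation between automatic and human scores can be computed as in the sketch below; the score values are made-up placeholders, not the paper's data.

```python
from scipy.stats import pearsonr, spearmanr

# One aggregate score per LLM, automatic (GPT-4 judge) vs. human, both on a 1-100 scale.
auto_scores  = [92.1, 88.4, 85.0, 71.3, 64.9, 55.2]   # placeholder values
human_scores = [90.5, 89.0, 83.2, 70.8, 66.1, 57.4]   # placeholder values

r, _ = pearsonr(auto_scores, human_scores)
rho, _ = spearmanr(auto_scores, human_scores)
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```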
Additionally, further examples from baseline LLMs are shown in figures ranging from Fig.<ref> to Fig.<ref>. In Fig.<ref>, the first question tests memory by asking about the true ruler of the Twilight Realm. The second requires understanding the reasons behind the Mirror of Twilight's durability. The third applies Link's transformation ability to his quest. The fourth analyzes the contrasting rules of Midna and Zant. The fifth evaluates the use of the Mirror as a punishment tool. Finally, the sixth encourages creating an alternative ending for the saga. These questions showcase GPT-4's range from simple recall to creative thinking, and the generated questions can be adopted to make students quickly grasp the key content, indicating its strong questioning capability in education as a teacher through evaluating their generated educational questions. We also show a good case in the interdisciplinary tasks generated by Claude2, which shows outstanding comprehensive capability that can effectively combining different subject knowledge to pose accurate and challenging questions. In chemistry and physics combination questions, it understands the basic principles of chemical reactions and the physical properties of bubble motion in liquids, demonstrating analysis and synthesis ability in interdisciplinary questions. Claude2 shows high adaptability and understanding in complex tasks involving multiple disciplines. § RELATED WORK §.§ Question Generation The potential of LLMs in generating questions for educational purposes garners significant academic attention. <cit.> have developed a reinforcement learning approach specifically for generating natural questions. <cit.> evaluate the educational utility of questions produced by LLMs, organizing them according to their levels of difficulty. <cit.> investigate the methods LLMs use to create questions, focusing on the tracking of dialogue states. <cit.> introduce a method involving double hints for the generation of questions about visuals. <cit.> emphasize the importance of sub-questions in improving the effectiveness of primary visual questions. <cit.> examine various prompting techniques for LLMs and analyze the differences in model responses. <cit.> utilize the capabilities of GPT-3 to foster curiosity-driven questioning among children. Collectively, these studies underline the evolving role of LLMs in reshaping question generation instead of searching valuable questioning points. §.§ Test-based Benchmark There has been an increasing focus on evaluating the capability of LLMs in the context of standardized exams and academic benchmarks. 
For example,  <cit.> introduce GAOKAO Benchmark to evaluate the intuitive benchmark of Chinese college entrance examination questions;  <cit.> propose the first comprehensive Chinese evaluation package C-EVAL;  <cit.> present a human-centric benchmark AGIEval designed for evaluating foundation models;  <cit.> introduce CMMLU, a comprehensive Chinese benchmark covering multiple disciplines;  <cit.> introduce SciBench to systematically investigate the reasoning ability required to solve complex scientific problems;;  <cit.> propose M3KE, a large-scale multi-layer and multi-disciplinary knowledge assessment benchmark;  <cit.> propose a method to evaluate the multi-task accuracy of large Chinese language models across various domains;  <cit.> introduce FinEval, a benchmark designed for financial knowledge evaluation in LLMs;  <cit.> focus on the Chinese Elementary School Math Word Problems dataset to evaluate reasoning capability;  <cit.> explore ChatGPT's potential to complete the Vietnam National High School Graduation Exam; <cit.> propose performance criteria to assess the generated multiple-choice questions; <cit.> present LearningQ, an educational question generation dataset containing over 230K document-question pairs. <cit.> introduce a novel game named BrainKing for evaluating LLM capabilities under incomplete information scenarios. <cit.> presents a novel framework named EmotionQueen for evaluating the emotional intelligence of LLMs. However, these tests treat LLMs as students, assessing their abilities by how they answer questions instead of seeing them as teachers. § CONCLUSIONS AND FUTURE WORK In conclusion, our study presents a pioneering investigation into the questioning capability in education as a teacher of large language models (LLMs) through evaluating their generated educational questions, shifting the traditional role of LLMs from learners to educators. We have developed a comprehensive benchmark, named Dr.Academy, based on educational taxonomies that assesses LLMs' abilities to generate questions across various domains with four evaluation metrics. Our findings indicate that models like GPT4, Claude2, and GPT3.5 demonstrate promising teaching potential. Looking ahead, the future directions of this research include refining the evaluation metrics for even more nuanced assessments of teaching effectiveness and expanding the range of subjects and domains covered. § LIMITATIONS One limitation of our study is that it primarily focuses on the ability of large language models (LLMs) to generate questions, which is just one aspect of teaching. Actual teaching involves more complex interactions, including providing feedback, adapting to students' needs, and fostering critical thinking, areas not fully captured by our current benchmark. Additionally, our approach relies heavily on the textual content, which may not comprehensively represent the nuances of human teaching methods that include non-verbal cues and personalized interactions. Therefore, while our findings offer valuable insights into the potential of LLMs as teaching aids, they should be viewed as a starting point for more in-depth research into the multifaceted nature of teaching and learning processes. § ACKNOWLEDGEMENTS This work is supported by Science and Technology Commission of Shanghai Municipality Grant (No. 
22511105902), Shanghai Municipal Science and Technology Major Project (No.2021SHZDZX0103), the National Natural Science Foundation of China (No.62072323), Shanghai Science and Technology Innovation Action Plan (No. 22511104700), and the Zhejiang Lab Open Research Project (NO. K2022NB0AB04).
http://arxiv.org/abs/2408.12498v1
20240822154552
Smart Fleet Solutions: Simulating Electric AGV Performance in Industrial Settings
[ "Tommaso Martone", "Pietro Iob", "Mauro Schiavo", "Angelo Cenedese" ]
cs.RO
[ "cs.RO" ]
Smart Fleet Solutions: Simulating Electric AGV Performance in Industrial Settings Tommaso Martone1, Pietro Iob12, Mauro Schiavo2, Angelo Cenedese1 1Department of Information Engineering, University of Padova, Padua, Italy tommaso.martone@studenti.unipd.it, pietro.iob@phd.unipd.it, angelo.cenedese@unipd.it 2 Techmo Car S.p.a., Padua, Italy pietro.iob@techmo.it, mauro.schiavo@techmo.it August 26, 2024 ================================================================================================================================ § ABSTRACT This paper explores the potential benefits and challenges of integrating Electric Vehicles (EVs) and Autonomous Ground Vehicles (AGVs) in industrial settings to improve sustainability and operational efficiency. While EVs offer environmental advantages, barriers like high costs and limited range hinder their widespread use. Similarly, AGVs, despite their autonomous capabilities, face challenges in technology integration and reliability. To address these issues, the paper develops a fleet management tool tailored for coordinating electric AGVs in industrial environments. The study focuses on simulating electric AGV performance in a primary aluminum plant to provide insights into their effectiveness and offer recommendations for optimizing fleet performance. § INTRODUCTION The surge in Electric Vehicle (EV) deployment reflects a shift towards sustainable transport, particularly noticeable in industrial contexts <cit.>. However, challenges like battery technology limitations hinder widespread adoption <cit.>. EVs offer environmental benefits but face hurdles like high costs and limited range in industrial use. Autonomous Ground Vehicles (AGVs) offer logistical solutions, but also face adoption challenges akin to those of EVs <cit.>. This paper aims to tackle these issues by developing a plant simulator tool for AGV coordination. It aims to showcase electric AGVs' capabilities in industrial settings and to serve as a performance evaluation tool for fleet deployment <cit.>. Specifically, without sacrificing applicability, this study targets the creation of a simulation environment capable of assessing the performance of such a fleet within the confines of a primary aluminum plant. Section <ref> describes the problem domain and the proposed solution for realizing the simulation environment, while Section <ref> focuses on the selected case study. Finally, Section <ref> summarizes the principal findings of the simulation campaign. § PROBLEM DESCRIPTION AND PROPOSED SOLUTION This work can be used as a tool to assess multiple performance levels. However, as a representative example, this paper focuses on determining the minimum size a fleet must have to fulfill all plant requests without letting them accumulate. The optimal fleet size is influenced by plant priorities and operational efficiency, with vehicle routing significantly impacting task completion times and overall efficiency. The study introduces the Fleet Management Simulator (FMS), a versatile and modular tool adaptable to various contexts, providing insights into the operational dynamics of plants operated by electric AGVs.
The FMS is structured as a Finite State Machine (FSM) <cit.>, with three main states: Idle, Charging, and Routine. The Routine state includes essential vehicle tasks for plant functioning, the Charging state accounts for recharging needs, and the Idle state designates availability for secondary tasks. Task allocation is managed by the Plant Manager (PM), which selects optimal vehicles for plant requests, and a Decentralized Tasks Manager (DTM), which assigns temporary tasks to available vehicles when there are no plant requests. This dual management approach ensures comprehensive consideration of plant and vehicle needs. This work focuses on three vehicle types: Fluoride Feeder Vehicle (FFV), Anode Pallet Transport Vehicle (APTV), and Metal Transport Vehicle (MTV). These vehicles are essential in aluminum smelting: the FFV distributes Aluminum Fluoride (AlF_3), the APTV transports carbon anodes, and the MTV moves molten aluminum to the furnaces. Fig. <ref> illustrates the FSM implementation, showing system inputs, vehicle quantities, and weights for the Plant Manager and Decentralized Tasks Manager. Colors indicate different vehicle classes: blue for FFV, red for APTV, and yellow for MTV, with grey states accessible by any vehicle. Idle states represent temporary tasks while vehicles are available for the PM. PM-reachable states are marked with a dotted line. The transition between states is determined by a symbol alphabet defined as follows: Σ: {PM, DTM, BC, VE/L, TC, G} * PM: Plant Manager request. * DTM: Decentralized Task Manager selection. * BC: Battery Charge under 20%. * VE/L: The Vehicle is Empty or Load for FFVs. * TC: The task has been completed. * G: Specific symbol for a PM task called Garbage task. The combination of these symbols with the vehicle's class will define the different state transitions of the AGVs. As previously mentioned, the three primary states can be further explored, unveiling the entire system. In Idle states, vehicles perform temporary tasks while available for selection by the PM. The LookForEvents state allows vehicles to determine their next state based on battery charge and commands from the PM and DTM. In Surveillance, vehicles scout less visited areas to monitor and gather data using stereo cameras or Lidar <cit.> <cit.>. In Idle_charging, vehicles recharge briefly at a charging station. The AlF_3_refill state sends FFVs to refill at the AlF_3 storage. The Charge states include Charge_Brain and Charge_AGV. Vehicles below 20% SOC select a charging station in Charge_Brain and charge fully in Charge_AGV. The Routine states differ by vehicle class, involving specific tasks such as pot refilling for FFVs, anode replacement and waste removal for APTVs, and molten aluminum collection for MTVs. For the design of the Plant Manager, a cost function model has been employed. This approach offers significant flexibility; by adjusting the weights it becomes possible to enhance the relevance of one behavior over another. For APTV and MTV the employed cost function is reported in eq.(<ref>): f_V = W_r· (R - d_req) + W_d/d_task For FFV the employed cost function is reported in eq.(<ref>): f_FFV = f_V + W_l· m_l Where R [m] is the estimated distance range the vehicle can cover with the current charge. d_req [m] is the total distance required to perform the task and navigate to the closest charging station. d_task [m] is the exact distance from the vehicle's current position to the assigned destination. m_l [kg] is the current mass of AlF_3 on the FFV. 
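To make the selection mechanism concrete, the following is a minimal sketch of how the Plant Manager could rank candidate vehicles with the cost functions above. All function names and data structures are illustrative assumptions rather than the authors' implementation, the weights W_r, W_d and W_l are the ones defined in the next paragraph, and since the text does not state whether the best vehicle corresponds to the largest or smallest value of the cost function, the comparison direction used here (larger is better) is also an assumption.

```python
# Illustrative sketch only: hypothetical data structures, not the authors' code.

def f_v(R, d_req, d_task, W_r, W_d):
    """Cost function for APTV/MTV: f_V = W_r * (R - d_req) + W_d / d_task (d_task > 0)."""
    return W_r * (R - d_req) + W_d / d_task

def f_ffv(R, d_req, d_task, m_l, W_r, W_d, W_l):
    """Cost function for FFV: f_FFV = f_V + W_l * m_l."""
    return f_v(R, d_req, d_task, W_r, W_d) + W_l * m_l

def plant_manager_select(candidates, W_r, W_d, W_l):
    """candidates: list of dicts, one per available vehicle, e.g.
    {"class": "MTV", "R": 1800.0, "d_task": 250.0, "d_cs": 90.0, "m_l": 0.0},
    where R is the estimated residual range, d_task the distance to the task,
    d_cs the distance from the task to the closest charging station and m_l
    the AlF3 load (only meaningful for FFVs).

    Assumption: a larger value of the cost function marks a better candidate."""
    best, best_score = None, float("-inf")
    for v in candidates:
        d_req = v["d_task"] + v["d_cs"]  # perform the task, then reach a charger
        if v["class"] == "FFV":
            score = f_ffv(v["R"], d_req, v["d_task"], v["m_l"], W_r, W_d, W_l)
        else:
            score = f_v(v["R"], d_req, v["d_task"], W_r, W_d)
        if score > best_score:
            best, best_score = v, score
    return best
```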
W_r, W_d, and W_l represent the weights associated with vehicle autonomy, goal distance, and vehicle load, respectively. The estimation of R is conducted through the following computations: R = SOC × d_avg d_avg = d_tot/E_c- E_r Where d_avg [m/Wh] is the average distance a vehicle can cover per unit of energy, while d_tot [m] is the total distance covered and E_c [Wh] is the total energy consumed. Finally, E_r [Wh] is the total energy regenerated through regenerative brakes and SOC [Wh] is the state of charge of the vehicle. The accuracy of d_avg improves throughout the simulation as parameters in eq.(<ref>) are calculated at each step, increasing accuracy with more data. The DTM determines the best task for each vehicle based on their charge, position, and for the FFV, the amount of AlF_3 in its tank, unlike the PM, which compares all vehicles. Each vehicle uses a decentralized cost function tailored to specific tasks. By comparing these tailored functions, each vehicle identifies the task with the minimum cost. The employed decentralized cost functions are reported in the following eq.(<ref>), (<ref>), (<ref>): f_charge = W_SOC· SOC + W_dist· (d_CS + V_path) f_surv = W_surv· V_min + W_dist· (d_E + V_path) f_refill = W_Load· m_l + W_dist· (d_R + V_path) where V_path is the sum of visit values, d_CS is the distance to the charging station, d_E is the distance to the least visited area. Moreover, d_R [m] is the distance to the closest AlF_3 storage and V_min is the minimum visit value. Finally, W_SOC, W_d, W_surv, and W_Load are weights for vehicle SOC, goal distance, plant surveillance, and vehicle load. § SIMULATION SETUP The simulations utilized the open-source software SUMO (Simulation of Urban Mobility) <cit.> to accurately model the plant environment, vehicle behaviour, and fleet management system (FMS). By recreating the plant's environment and vehicle models in SUMO, the study was able to gather comprehensive data on both plant and vehicle states. For effective traffic simulation in SUMO, a detailed map was created, including all vehicle points of interest such as potlines, cast houses, charging stations, and additional buildings for storage and maintenance. Each building was represented as a node in the network, with potlines marked by intersections representing clusters of pots. Charging stations were centrally placed on the map to avoid key areas of interest. Vehicles followed simplified routines for task completion, halting near designated nodes with waiting times based on a Gaussian distribution proportional to task duration to enhance realism. Dijkstra's algorithm was chosen for vehicle routing due to its suitability for the network size and reliance on edge travel times, though this choice remains adaptable based on the specific system being simulated <cit.>. Dijkstra's algorithm assesses the traverse time of each edge from the original location to determine the shortest path to the destination <cit.>. By integrating the concept of visit values and a parameter (W_veh) indicating vehicle locations on the map, it becomes possible to identify a sub-optimal path that balances the shortest route with the least visited edges while avoiding other vehicles. Eq.(<ref>) illustrates how the traversal time (T_t) of each edge dynamically updates at each simulation step, incorporating W_veh if a vehicle occupies the edge, alongside the precise visit value calculated by the Forget Function (FF(Edge_i|_t)), weighted by Wvisit. 
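The next paragraph states the exact forget-function and traverse-time equations; as a preview, the snippet below sketches how such an edge re-weighting step could be applied before each Dijkstra query. All names and the edge data layout are assumptions made for illustration, and the formulas simply mirror the equations given next.

```python
import math

# Illustrative sketch only; edge attributes and function names are assumptions.

def forget_value(ff_prev, t, t_last, K, delta_t):
    """Visit value of an edge at time t; it grows when the edge is visited and
    is roughly halved once t - t_last exceeds delta_t."""
    return (1.0 + ff_prev) / (1.0 + math.exp(K * (t - t_last - delta_t)))

def reweight_edges(edges, t, W_visit, W_veh, K, delta_t):
    """edges: list of dicts such as
    {"base_time": 12.0, "ff_prev": 0.0, "t_last": 0, "occupied": False}."""
    for e in edges:
        ff_now = forget_value(e["ff_prev"], t, e["t_last"], K, delta_t)
        penalty = W_veh if e["occupied"] else 0.0
        e["traverse_time"] = e["base_time"] * (1.0 + W_visit * ff_now + penalty)
```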
These visit values are determined using the forget function described in eq.(<ref>). T'_t = T_t · (1 + W_visit· FF(Edge_i|_t) + W_veh) FF(Edge_i|_t) = (1+ FF(Edge_i|_t_last))/(1 + e^(K ·(t - t_last - Δ t))) Here, T'_t denotes the updated traverse time of the edge, while T_t represents its original traverse time. Additionally, W_visit serves as a weight determining the relevance of visit values for routing decisions, while W_veh accounts for the presence of vehicles on the edge. The function FF(Edge_i|_t) computes the exact visit value on the i-th edge at the current time, with FF(Edge_i|_t_last) representing the previous visit value, recorded the last time the edge was visited. Moreover, K indicates the speed rate at which the visit value decreases over time, t stands for the current time step, t_last signifies the last time step the edge was visited, and Δ t represents the time needed to halve the visit value. Table <ref> reports the parameters employed during the simulation campaign. § RESULTS Testing various vehicle combinations, a suitable set has been identified that effectively prevents the accumulation of plant requests. Fig. <ref> depicts the selected fleet of 2 FFV, 4 APTV and 4 MTV, demonstrating its ability to fulfill all plant requests. The plots compare the active requests from the plant against the vehicles charging their batteries throughout a one-week time frame. The red line in each subplot represents the accumulation of the requests. The second subplot, associated with the APTV fleet, also provides the accumulation of the Garbage requests in blue, while the green line indicates the vehicles that are undergoing a charging cycle. The request fluctuations remain bounded throughout the entire simulation duration, preventing request saturation. Only in rare instances do they reach their maximum value, typically due to an unfavorable convergence of demands from distant cells. By collecting the amount of time each vehicle spends in the implemented states, it has been possible to gather further insight into their behavior. Fig. <ref> visualizes the percentage of time the vehicles spend in each state, averaged over the number of vehicles in the fleet. It is interesting to note that the majority of vehicles spend over 60% of their time charging their batteries. However, this trend does not apply to the FFV fleet. This divergence can be attributed to the FFV vehicles allocating more time to Idle states, which also include the Idle_Charging state. For this reason, a different scenario was investigated in which the charging stations are equipped for battery swapping, drastically cutting recharging times. Fig. <ref> illustrates the mean, per vehicle class, of the percentage of time spent in each state when replacing the batteries. Investing in charging stations equipped with supplementary battery packs for fast replacement enables a significant reduction in vehicle downtime, ensuring quicker availability. As a result, the overall operational efficiency of the plant improves, potentially necessitating fewer vehicles to operate the facility effectively. To validate the efficiency of replacing batteries, Fig. <ref> shows the comparison of plant requests with vehicles replacing batteries. In this simulation, an undersized fleet was employed, i.e., a fleet with one vehicle fewer per class with respect to the simulation depicted in Fig. <ref>.
With this approach, it becomes evident how vehicles can perfectly satisfy plant requirements, completing all tasks in a very short time and preventing requests accumulation even better than the minimum fleet estimated without battery swap. By integrating the concept of "Visit Values" into vehicle routing using Dijkstra's algorithm, the study achieved a more balanced distribution of these values across the map, leading to increased street usage for navigation. Adjusting the weights W_surv in the decentralized cost functions and W_visit in the adapting traverse time allows for prioritizing either scouting under-visited streets or briefly charging the vehicle, and determining the significance of visit values in computing optimal paths, respectively. Two simulations were conducted to illustrate the distribution of visit values. In the first simulation, routing based on visit values was disabled (W_visit = 0), and the surveillance task was deprioritized (W_surv = 20). In the second simulation, empirically determined values were used. Table <ref> compares the percentage of edges with visit values below the threshold, where a lower percentage indicates better map coverage due to more edges being visited. § CONCLUSION This study introduces a versatile modular simulation tool designed for various contexts, providing comprehensive performance analysis from both vehicle and plant perspectives. Using an aluminum smelter with electric AGVs as a test case, initial trials identified the smallest fleet size required to prevent the accumulation of plant requests. However, a significant amount of time was spent by vehicles on charging. To address this, battery-swapping was implemented, significantly reducing recharging time and enhancing efficiency. This improvement allowed for a smaller fleet size without sacrificing plant performance. Additionally, a method based on Dijkstra's algorithm was developed to improve map coverage, optimizing routes by balancing the shortest paths with the least-visited streets and avoiding other vehicles. This method markedly increased street coverage, with only 1.2% of streets visited fewer than 10 times after 14 hours, compared to 14.1% without the method. The study also highlights the tool's potential for further analyses, such as evaluating fleet robustness under vehicle failure scenarios. The use of cost functions within this architecture allows for task prioritization and tailored vehicle behavior by adjusting weight sets for different vehicle classes. While this flexibility enables extensive exploration of fleet and weight combinations, it also introduces complexity in finding optimal solutions, necessitating experience and time. § ACKNOWLEDGMENT This research is supported by the collaboration of Techmo Car S.p.a. (Padova, Italy) with the University of Padova, Italy. ieeetr
http://arxiv.org/abs/2408.11174v1
20240820201319
Combining Objective and Subjective Perspectives for Political News Understanding
[ "Evan Dufraisse", "Adrian Popescu", "Julien Tourille", "Armelle Brun", "Olivier Hamon" ]
cs.CL
[ "cs.CL", "cs.SI" ]
§ ABSTRACT Researchers and practitioners interested in computational politics rely on automatic content analysis tools to make sense of the large amount of political texts available on the Web. Such tools should provide objective and subjective aspects at different granularity levels to make the analyses useful in practice. Existing methods produce interesting insights for objective aspects, but are limited for subjective ones, are often limited to national contexts, and have limited explainability. We introduce a text analysis framework which integrates both perspectives and provides a fine-grained processing of subjective aspects. Information retrieval techniques and knowledge bases complement powerful natural language processing components to allow a flexible aggregation of results at different granularity levels. Importantly, the proposed bottom-up approach facilitates the explainability of the obtained results. We illustrate its functioning with insights on news outlets, political orientations, topics, individual entities, and demographic segments. The approach is instantiated on a large corpus of French news, but is designed to work seamlessly for other languages and countries. § INTRODUCTION Political news provides essential information, the interpretation of which shapes our understanding of political actions and events. Analyzing the vast amount of political news available on the web is only possible by automating the process. In order to maximize its impact for stakeholders in computational politics, such analysis should be comprehensive and flexible. Most news analysis tools focus on objective metrics, or include only very coarse subjective metrics. Textmap <cit.> detects entities and their relationships, and grounds them in space and time. <cit.> extend Textmap's purely objective analysis using lexicon-based sentiment analysis at the article level, which does not allow for precise sentiment analysis towards article entities. A citation graph analysis allows the clustering of news outlets based on their political affinity <cit.>. An aggregation of TV and radio broadcast metadata is used to estimate political biases <cit.>. More recently, the quantification of subjective aspects has also been approached with a focus on classifying the political leanings of entire news articles using large language models <cit.>. These approaches have important shortcomings, notably in terms of granularity, explainability, and the need to compile a new dataset when the political context changes. We tackle these limitations by introducing a news analysis framework which provides comprehensive objective and subjective insights. These insights can be aggregated in a flexible manner to provide insights about news outlets, political topics, individual entities and demographic segments. To obtain this result, the framework integrates recent NLP components, information retrieval techniques, news outlet metadata, temporal metadata and external knowledge bases. In particular, we deploy target-dependent sentiment classification (TSC) to obtain a fine-grained subjective representation of articles. The framework is designed to work seamlessly for multiple countries since all NLP components are multilingual and the knowledge bases are not country specific. It is instantiated for political news, using a French political news corpus.
The main findings are: * mainstream political orientations are presented in a rather balanced way in the major news outlets, but the radical left and right are positively and negatively biased, respectively; * the mentions and sentiments associated with political orientations vary across impactful political topics; * sentiment scores are generally negative, with important variation between news outlets; * the most positive and negative sentiment scores for politicians are correlated with the public perception of their actions; * mentions are biased toward male politicians, but sentiment scores of female politicians are higher; * there is an age bias toward older politicians; * the French semi-presidential system is reflected in the news, with a dominance of presidents or presidential candidates. These findings can be used by multiple stakeholders. News outlets can analyze their positioning and make it more transparent to the general public. Social scientists can gain new insights into the online representation of the political landscape. Political organizations can monitor the online reporting of their political actions. Citizens can be informed about potential political, topical and demographic biases of news outlets. § RELATED WORK Media analysis studies are based on tools and insights from different disciplines, including political science <cit.> economics <cit.>, and NLP <cit.>. Research efforts that reveal political, thematic or demographic biases are of particular interest. Gender representation has been shown to be biased in multiple countries. <cit.> show that male politicians are more present in news reporting for countries with proportional representation. <cit.> analyze news abstracts and conclude to an underrepresentation of women in English news. <cit.> and <cit.> investigate gender bias in Dutch and French news, respectively. Online gender bias tracking tools were also developed, such as Gender Gap Tracker <cit.> for Canada and Gendered News <cit.> for France. Aside from gender, topic-related bias in local newspapers is investigated in <cit.>, with focus on the European Union. <cit.> study the bias of words in German news. <cit.> show that political orientations are unevenly represented across broadcast outlets by aggregating mentions of individual political and non-political figures. <cit.> analyze the French media ecosystem by considering the online interaction graph between them. These studies are interesting for quantifying mentions but do not address the measurement of subjectivity. Framing <cit.>, the action of emphasizing certain facts over others, has also been studied. <cit.> introduce the Gun Violence Frame Corpus and implement an approach for news framing detection. Their results show that framing is dynamic and follows closely the gun-related violent crimes. <cit.> investigate how immigration is framed in US newspapers over the last decade, and show an increase in negative messages about immigrants. <cit.> studies framing through sentiment classification and word choice. The author implements a method for highlighting media outlet position over a specific topic. <cit.> very recently tested large language models for frames detection in news headlines. Stance detection <cit.>, i.e. the subjective positioning of the author towards a target, is also actively investigated. <cit.> build a corpus of news focused on company merging stance detection. <cit.> address the detection of misleading news headlines through stance comparison of the body and the text. 
Other approaches treat stance detection as a text-level sentiment analysis task <cit.>, but as <cit.> points out, this level is too coarse to bring satisfactory insights about stance. More recently, some approaches <cit.> have used stance-labeled article databases such as AllSides to train language models for stance-classification in an end-to-end fashion. As briefly stated in introduction those approaches have several shortcomings: (1) stance is characterized at a coarse level of granularity, and it is impossible to provide detailed insights about politicians, political organizations, or demographic segments; (2) the explanatory power of the results obtained is limited, since only one class label per article is provided; and (3) the task is country and time-dependent and relies on supervised annotated datasets, thus requiring the creation of new datasets for each different political context. Although they provide useful analytical tools for capturing media bias, most studies use lexical, mention-count based, or metadata approaches that fail to account for the sentiment expressed toward key entities in the text. This is mainly an effect of the relative lack of TSC resources for the political domain. Target-dependent Sentiment Classification (TSC) predicts the opinion towards a precise entity and can be aggregated to better understand the positioning of a news outlet toward that entity <cit.>. TSC is often evaluated on short texts such as tweets <cit.>, reviews <cit.> or comments <cit.>. However, news texts are more challenging because sentiments are expressed implicitly or indirectly <cit.>, often include multiple targets in a single sentence <cit.>, and both negative and positive arguments about a target entity are combined due to the fact that journalists are supposed to be objective <cit.>. <cit.> introduced a new dataset of news sentences extracted from US newspapers annotated with polarity at the entity level. <cit.> address multilingual target sentiment analysis in news by creating an aligned dataset in several European languages. TSC resources and methods such as the ones described <cit.> are central to our work, and we integrate them into our framework. § NEWS PROCESSING FRAMEWORK This section presents the main technical components of the proposed news analysis framework. Corpus collection. The corpus was built by collecting RSS streams from more than 280 politically diverse French news outlets between January 2016 to December 2022. Texts of the articles were extracted from their HTML using Trafilatura <cit.>. As RSS were not completely mutually exclusive, and because some articles may be available from multiple links due to revision versions, a near deduplication using MinHashLsh <cit.> was performed to only keep one version per domain name. Parameters are provided in Appendix <ref>. Only news including 200 characters or more were retained, totaling over 457,000 articles. Corpus statistics are provided in Appendix <ref>. Entity detection. Articles were split into sentences using the French “fr_web_core_sm” Spacy model. We then used the French version of Flair <cit.> for entity detection. There are 2.27M mentions of persons in the corpus. Entity linking. To solve ambiguities, entities were linked to Wikidata <cit.> using mGenre <cit.>, a recent multilingual linking model. Linking was considered successful if the log-likelihood of the best candidate was greater than -0.2. 
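As a small illustration of this acceptance rule, the snippet below filters linking candidates by their log-likelihood; the way candidates and scores are produced by mGenre is abstracted away, and the function is a sketch of ours, not part of the mGenre or Flair APIs.

```python
LOG_LIKELIHOOD_THRESHOLD = -0.2  # acceptance threshold described above

def accept_link(candidates):
    """candidates: list of (wikidata_id, log_likelihood) pairs for one mention.
    Returns the best candidate's Wikidata id, or None if linking is rejected."""
    if not candidates:
        return None
    best_id, best_score = max(candidates, key=lambda c: c[1])
    return best_id if best_score > LOG_LIKELIHOOD_THRESHOLD else None
```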
The threshold was determined by labeling 200 predictions per 0.1 range between 0 and 0.5, and it provides over 95% accuracy. There are 1.27M linked persons left. Use of knowledge bases. Wikidata <cit.> entries of linked persons were used to select politicians and extract their gender, birth date, country, and political affiliation(s). ParlGov <cit.> includes information about political parties in different countries. It uses a scale from 0 to 10 (left to right) to encode political orientation. There are 520K mentions of politicians (89.9K female), out of which 345K are French and 333K can be linked to political orientations from ParlGov. To facilitate interpretability, we reduce this to a 5 points scale by merging pairs of neighboring orientations. The resulting scale used in experiments includes the following orientations: RL - radical left (30.5K mentions); CL - center left (56.4K); C - center (113.3K); CR - center right (87.3K); RR - radical right (45.8K). We finally match French political party names from Wikidata and ParlGov to attribute an orientation to politicians. Target-dependent sentiment classification aims to determine the sentiment that is expressed towards a given entity in a given context. TSC resources for the political domain were only recently made available for English <cit.> and for multiple languages <cit.>. We use the French version of this last dataset to train an SPC-Bert model <cit.>, which is based on the classical Sentence Pair Classification task of Bert <cit.> and provides strong TSC performances <cit.>. The underlying transformer architecture is CamemBert <cit.>, the French equivalent of RoBERTa <cit.>. We follow the training procedure described in <cit.> and produce inferences for negative, neutral, and positive classes. The training procedure uses a standard train/dev/test split, with each split representing 70%/10%/20% of the entries respectively. The model achieves an F1_macro score of 70.8. It is important to assess the impact of errors made by the TSC component. We run a preliminary experiment with two samples. A first, for which we keep all sentiment predictions, and a second, using a threshold, for which we only keep the 50% most confident ones for each class. For both samples, we compute the average sentiment scores and the number of mentions for the 1,000 most cited politicians from the corpus. We then compute the Pearson correlation between the sampled distributions, and obtain values of 0.993 and 0.966 for the number of mentions and the sentiments distributions, respectively. These results show that, while individual predictions are imperfect, their aggregation leads to very stable results, and all sentiment predictions are usable. Topic analysis. We use an information retrieval approach to select relevant subsets of documents for political topics. This type of approach is interesting because it enables an easy and flexible definition of topics. The news corpus was indexed using BM25 <cit.>, a probabilistic text representation method which provides strong performance when compared to more recent neural-based approaches <cit.>, <cit.>. We chose a set of 10 impactful political topics associated with queries for retrieval. The set includes both diversified topics and similar ones. 
The list of topics includes: (1) climate change, (2) corruption in the political domain, (3) the economic consequences of the Covid-19 crisis, (4) the health-related issues of Covid-19, (5) the yellow vests protest which took place in France, (6) immigration, (7) purchasing power, (8) the war in Syria, (9) the war in Ukraine and (10) the economic consequences of the war in Ukraine. These topics are used in the analysis of the Subsection <ref>. Pertinence thresholds are needed to select relevant subsets of documents for each topic. These were determined empirically by examining the titles and snippets of BM25 returned results. There are between 2.1K and 8K mentions per topic in the corpus. § POLITICAL NEWS ANALYSIS We propose a comprehensive analysis of political representation in the news by combining objective and subjective perspectives. We analyze news outlets, political topics, politicians, and demographic segments (gender, age). §.§ Outlet-oriented Analysis Source Sentiment. Figure <ref> presents the average sentiment for mentions of politicians for the most frequent news outlets in the corpus. A short description of each source mentioned below is proposed in Appendix <ref>. The political leanings of sources are those established in <cit.>. The average sentiment across all outlets is negative with an average of -0.083, but with significant variability between sources, which can be explained by various factors such as their nature, geographic focus and/or political leanings. The lowest scores are obtained for les-crises, causeur and mediapart, which is consistent with their characterization by experts as right-wing anti-establishment; right-wing and left-wing outlets, respectively. High scores are obtained for regional or local news outlets such as la depeche, nicematin or letelegramme, as well as for economic media such as lesechos or latribune. As for the major outlets, their scores are close to the average, with lefigaro being the least negative, followed by lemonde and liberation. Mapping of Political Orientations. We analyze the positioning of news outlets with respect to major political orientations by aggregating mentions and sentiment scores. The mention-oriented analysis is similar to existing ones <cit.>, while the sentiment-oriented analysis adds a subjective perspective. We use the five-points political orientation mapping described in Section <ref>. Figure <ref> summarizes the mappings of mentions and of sentiment scores vs. political orientations. There is no correlation between mentions and sentiment since the Pearson correlation coefficient between the AVG columns in the two figures is 0.06. This result shows that the two types of analyses are complementary and provide additional insights into political news. Right-leaning orientations are more cited than their left-leaning counterparts but the sentiment associated with right-leaning orientations is more negative. This contrast is even clearer for RR and RL, the two radical orientations that are represented in the two figures. A complementary analysis per news outlet is presented in Appendix <ref>. Figure <ref> shows that the center, including the French governing party, is the most mentioned orientation. The right-wing tendencies are more represented than their left-wing counterparts. Outlet-wise, the center is overrepresented in economic newspapers (latribune and lesechos), but also in les-crises, a right-wing anti-establishment outlet. 
RL is strongly present in humanite, the newspaper of the French Communist Party, while the extreme right is often cited by sources such as  closermag, a tabloid, or causeur, a right-wing outlet. Figure <ref> illustrates the sentiment-oriented positioning of news outlets. It is quantified by the difference between the sentiment score per orientation and the average sentiment score of the source. The positioning of sources toward political orientations varies, and this is a positive finding since diversified opinions are required for a healthy democratic debate. The representation of mainstream orientations (CL, C, CR) is rather balanced in the major sources (left of the figure) and more variable in other source (right of the figure). We note that, overall, there is a positive bias toward radical left (RL) politicians and a negative bias toward radical right (RR) ones. Notable exceptions for RL include valeursactuelles, closermag and lindependant. The only two sources which have a slightly positive positioning toward RR relative to their average positioning are valeursactuelles and causeur, but the average sentiment scores of RR remain overall negative even for these two sources. Intuitively, the center, which includes the current ruling party, is most criticized by news outlets which are left-wing (humanite and mediapart) and right-wing (valeursactuelles and causeur). A complementary analysis of the temporal variation of mentions of political orientations and of sentiment associated with them is presented in Appendix <ref>. We also note that the average sentiment of sources differs between Figures <ref> and <ref>. The scores obtained for French-only politicians are higher than when including all politicians. This may be explained by a more critical stance of French media on international issues than on national ones. §.§ Topic Oriented Analysis The results for mentions and sentiments associated with political topics are presented in Figures <ref> and <ref>. The mentions of centrist orientation, which includes the governing party, dominate most topics. This is particularly the case for the health effects of Covid-19 and for the war in Ukraine, two topics for which the government's communication is prevalent. Corruption is one notable exception, with the center-right being most cited because several of its politicians, such as the three with the lowest average scores from Table <ref> were involved in major corruption scandals. There are some differences even for closely related topics, such as the pairs related to Covid-19 and Ukraine. There, the mentions of the center are more dominant for Covid Health and for Ukraine War. The deviation of sentiment associated with political orientations from the average of the topic, illustrated in Figure <ref>, can be interpreted as a proxy for the credibility of political orientations for the respective topic. For instance, corruption has a particularly negative representation, with an average sentiment of -0.5. While still in the negative range, RL, CL and C orientations are perceived better than CR and RR on this crucial political topic. The radical right has the lowest scores on average, with particularly negative positioning for climate change and the war in Ukraine. This finding is probably explained by the relatively small importance given to environment in RR platforms and by the favorable positioning with respect to Russia in the past for the war in Ukraine. 
The radical left is also perceived negatively on the two Ukraine related topics for a similar reason. However, the sentiment toward RL is more positive than that of other orientations for most other topics. §.§ Politician Oriented Analysis Frequently-mentioned Politicians. Table <ref> provides insights about the top-10 most frequently mentioned politicians: overall mentions, average sentiment in the corpus, sentiment variation over time (i.e. the standard deviation of sentiment per year between 2016 and 2022). The table shows that political discussion in France is focused on presidency since all top-10 names are of French or foreign presidents or presidential candidates. To verify the potential bias due to presidential elections, we ran the analysis separately for years without election and results are similar. Emmanuel Macron is by far the most mentioned politician, with Russian and American presidents also being mentioned frequently. The other French politicians from Table <ref> are mainly 2022 presidential candidates. There is no correlation between their mentions in the corpus and their electoral results. Eric Zemmour and Valérie Pécresse scored lower in the elections than Marine Le Pen and Jean-Luc Mélenchon, but are more frequently mentioned. The average sentiment scores from Table <ref> vary substantially. Donald Trump, Vladimir Putin and Nicolas Sarkozy have the lowest scores. This is probably due to the critical perception of their political actions for the first two, and by ongoing legal actions against the third. Interestingly, Valérie Pécresse has the highest sentiment score in Table <ref>, despite her poor electoral performance in 2022. We verified her mentions and many of them correspond to her unexpected victory in the primary election of her party. We note that sentiment is more negative for far-right candidates (Eric Zemmour and Marine Le Pen) compared to Jean-Luc Mélenchon, the main left-wing candidate. This finding confirms the aggregated results from Figure <ref>. Sentiment varies to a different extent during the examined period, and this is explained by different factors: the political comeback for Joe Biden, the emergence of Eric Zemmour as presidential candidate in 2022, the invasion of Ukraine for Vladimir Putin etc. Details about temporal dynamics of mentions and sentiment for politicians from Table <ref> are provided in Appendix <ref>. Polarized Sentiment Scores. Table <ref> presents the top-10 politicians with the highest and lowest sentiment scores. The first group includes local politicians whose action is well-appreciated (Carole Delga and David Lisnard), national politicians who are favorably viewed by the public (Roselyne Bachelot and Jean Lassalle), and international politicians with a centrist or center-left positioning who took office during the analyzed period (Olaf Scholtz and Kamala Harris). Valéry Giscard is an exception in that this former French president died in 2020 and was already retired from politics for a long time. These two factors could explain his favorable representation in the corpus. Strongly negative scores are mainly associated with politicians involved in prominent political scandals (Claude Guéant, Penelope Fillon, Patrick Balkany) and to authoritarian leaders (Alexander Lukashenko) or populist leaders (Donald Trump, Jair Bolsonaro, Viktor Orban). We note that Penelope Fillon has a much more negative score (-0.61) than François Fillon (-0.33), her husband, who provided her a no-show job for decades. 
This is explained by the fact his representation in the corpus includes the beginning of the 2017 presidential campaign, when François Fillon won his party's primary elections. §.§ Gender Representation Existing studies which quantify the representation of genders in politics <cit.> show that men are much more present than women. We deepen this analysis by adding a subjective component to understand how female and male politicians are represented in the news [The corpus includes only 8 mentions of other gender.], with an illustration of mentions (Figure <ref>) and mean sentiment (Figure  <ref>). The results from Figure <ref> show that the news from the analyzed corpus contain significantly more mentions of male than of female politicians. The percentage of female mentions varies from 16.7% (lemonde) to 22.3% (leparisien). Considering that the overall proportion of women in French politics is higher (38% in Parliament[<https://data.ipu.org/content/france>] and 50% in the Government), the results from Figure <ref> show a clear underrepresentation of women. This bias is higher than the one observed for French audio-visual media, where female politicians amount for 30% of the total mentions <cit.>. The difference is probably explained by the stronger regulatory pressure on audio-visual media compared to written media. A temporal analysis of gender representation between 2016 and 2022 (detailed in Appendix <ref>) shows that gender bias is relatively stable. This finding indicates that the debate about the representation of gender in society have only a limited effect on the quantitative representation of women in politics. The gender-focused analysis of sentiment from Figure <ref> shows that the mentions of female politicians are more favorable than those of their male counterparts. This trend is respected for individual outlets, with some variation of the gap between genders. This favorable coverage is a positive signal for reducing gender bias in politics, but more efforts are needed to reduce the mention gap. §.§ Age Representation Figure <ref> illustrates the average age of politicians mentioned in the corpus. The global average is 57.4 years, with individual averages varying between 54.6 for leparisien and 59.4 for sudouest. The average age of members of the French Parliament is 49.1, while the corresponding figure is even lower for the Government. This age-related bias confirms the qualitative results presented in Table <ref>, which shows that the majority of frequently mentioned politicians are older than the average. This is particularly the case for Joe Biden, Donald Trump, Vladimir Putin and Nicolas Sarkozy. The main exception is Emmanuel Macron, who is much younger than the average age reported in Figure <ref>. § LIMITATIONS We discuss below a series of limitations of the proposed analysis framework, both in terms of data and processing. Scope of the dataset. The collected dataset includes a diversity of French media which are available online, but is focused on classical media. It does not include political texts published via social networks, which could further increase diversity. Such an enrichment of the dataset is challenging due to the content access policies of these platforms. The dataset content might induce biases regarding the results of the analysis. However, we carefully designed experiments so as to have sufficiently large samples in each case. Dataset imbalance. 
The sources are represented by a variable number of news and of mentions of entities, as detailed in Appendix <ref>. This is an effect of the collection strategy, which was more or less intensive for different time intervals, of the availability of RSS streams, and of the existence of paywalls. In the latter case, only the first sentences of the articles are available, but this block of text still provides interesting information. The outlet-related imbalance is mitigated by the fact that the analyses are performed on aggregated samples of data with sufficient data point for each sample. Types of entities used for analysis. The entity detection and linking components handle persons, organizations, and locations. The first two types of entities are most interesting for our work, but the sentiment classification dataset only includes persons. The extension to organizations, political parties in particular, would be useful. However, it would require an important labeling effort and is left for future work. The absence of organizations is mitigated by the fact that politicians can be mapped to political parties using Wikidata <cit.>, which can be themselves assigned to political orientations via ParlGov <cit.>. Sentiment classification robustness. In Section <ref>, we show that sentiment classification leads to stable aggregated results, despite imperfections for individual cases. Another potential limitation is related to the robustness of results when inferring sentiment for a set of entities that are not in the training set. This is the case for the news corpus used here since it is independent of the sentiment classification dataset used for training this component. We perform a train/validation/test split of the target-dependent classification dataset in which the validation and test subsets contained 50% of entities that appear in the training set and 50% which do not appear in it. After inference, we compute the F1 scores separately for the two test subsets and the difference in performance is under 1%. We also manually annotate a set of 500 randomly-selected sentiment predictions from our corpus and again obtained F1 scores which are similar to those obtained for the internal test set from <cit.>. This indicates that sentiment classification is transferable to new entities and can be used here. Completeness of analysis. The dimensions of the news representations obtained with the our framework can be combined in a flexible way. A series of results were presented in Section <ref>, and they are complemented by those discussed in the Appendices. Other combinations of dimensions of news representations could be used to perform different analyses, for instance by aggregating results at the political party level, but cannot be presented due to lack of space. We will release the code and data needed to facilitate further experiments by experts from different disciplines which take an interest in political news analysis. § CONCLUSIONS We introduced a news analysis framework which is useful for different computational politics stakeholders. The proposed framework combines recent NLP components, information retrieval and structured information to provide rich insights about the representation of politics in the news. The NLP components are multilingual and the resources ensure international coverage. 
The approach could thus be easily transferred to other countries and other languages in order to understand the similarities and differences of political representations in different democratic countries. We provide a comprehensive analysis focused on sources, politicians, and the representation of demographic segments. The obtained results are coherent with past findings <cit.>, but are significantly different, notably due to the important role given to the subjective perspective enabled by the use of target-dependent sentiment classification. At source level, we quantify the sentiment expressed of outlets and unveil their positioning with respect to the main political orientations. At topic level, we show that there are important differences between the distributions of mentions of political orientations and that the sentiment expressed about them varies significantly. At an individual level, we confirm that the French political representation in the media is dominated by the presidential function, and show that the automatic analysis is well-correlated with the public perception of politicians. Regarding demographic biases, we note that, unfortunately, the representation of politics in the French news is still a “country for old men”, since young and female politicians are still strongly underrepresented. These findings are only examples of the insights which can be obtained with a rich and flexible representation of news, and can be used by multiple stakeholders. Newspaper editors and journalists can obtain feedback about their outlet's positioning and that of competitors, and push toward a more diversified coverage of political news. Social scientists can use this type of insights to deepen the analysis of the presentation of politicians and politics in the media. Political organizations can deploy the pipeline to monitor their political action and improve it. Regulatory bodies could use framework to quantify biases and push toward a more diversified representation of demographic segments. The proposed analyses are also a way to improve transparency about the political topics and positioning of news outlets. As such, they can improve the understanding of the political news landscape by the general public. Code and data used here will be released to enable reproducibility. The links of news included in the corpus will also be provided. ACM-Reference-Format § PAPER CHECKLIST * For most authors... * Would answering this research question advance science without violating social contracts, such as violating privacy norms, perpetuating unfair profiling, exacerbating the socio-economic divide, or implying disrespect to societies or cultures? Yes, the question of improving the understanding of the media political landscape shouldn't imply any of these. * Do your main claims in the abstract and introduction accurately reflect the paper's contributions and scope? Yes * Do you clarify how the proposed methodological approach is appropriate for the claims made? Yes * Do you clarify what are possible artifacts in the data used, given population-specific distributions? NA * Did you describe the limitations of your work? Yes, see <ref> * Did you discuss any potential negative societal impacts of your work? No, because we don't think there is * Did you discuss any potential misuse of your work? 
NA * Did you describe steps taken to prevent or mitigate potential negative outcomes of the research, such as data and model documentation, data anonymization, responsible release, access control, and the reproducibility of findings? NA * Have you read the ethics review guidelines and ensured that your paper conforms to them? Yes * Additionally, if your study involves hypotheses testing... * Did you clearly state the assumptions underlying all theoretical results? NA * Have you provided justifications for all theoretical results? NA * Did you discuss competing hypotheses or theories that might challenge or complement your theoretical results? NA * Have you considered alternative mechanisms or explanations that might account for the same outcomes observed in your study? NA * Did you address potential biases or limitations in your theoretical framework? NA * Have you related your theoretical results to the existing literature in social science? NA * Did you discuss the implications of your theoretical results for policy, practice, or further research in the social science domain? NA * Additionally, if you are including theoretical proofs... * Did you state the full set of assumptions of all theoretical results? NA * Did you include complete proofs of all theoretical results? NA * Additionally, if you ran machine learning experiments... * Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? No, we will release the code upon paper acceptance. Also, we won't be able to release the dataset for copyright purposes, but we'll supply a list of urls pointing to the collected articles, as well as the preprocessing pipeline. * Did you specify all the training details (e.g., data splits, hyperparameters, how they were chosen)? Yes, in <ref> * Did you report error bars (e.g., with respect to the random seed after running experiments multiple times)? We're using model training procedures that are described in other papers * Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? Yes for the resource type and the energy consumed in <ref>. There's not much interest in specifying the number or the type of cluster. Each step is scalable from 1 to N gpus. * Do you justify how the proposed evaluation is sufficient and appropriate to the claims made? Yes in <ref> * Do you discuss what is “the cost“ of misclassification and fault (in)tolerance? No, we use aggregated metrics that cancel out the effects of misclassification in NER, Entity-Linking, and Sentiment Analysis. * Additionally, if you are using existing assets (e.g., code, data, models) or curating/releasing new assets, without compromising anonymity... * If your work uses existing assets, did you cite the creators? See <ref> * Did you mention the license of the assets? No, because they are under permissive licenses * Did you include any new assets in the supplemental material or as a URL? NA * Did you discuss whether and how consent was obtained from people whose data you're using/curating? No, because we juridically don't need to * Did you discuss whether the data you are using/curating contains personally identifiable information or offensive content? NA, news data * If you are curating or releasing new datasets, did you discuss how you intend to make your datasets FAIR (see <cit.>)? 
NA * If you are curating or releasing new datasets, did you create a Datasheet for the Dataset (see <cit.>)? NA * Additionally, if you used crowdsourcing or conducted research with human subjects, without compromising anonymity... * Did you include the full text of instructions given to participants and screenshots? NA * Did you describe any potential participant risks, with mentions of Institutional Review Board (IRB) approvals? NA * Did you include the estimated hourly wage paid to participants and the total amount spent on participant compensation? NA * Did you discuss how data is stored, shared, and deidentified? NA § ETHICS STATEMENT This work is part of a project which was reviewed and approved by our institution's ethical committee. The committee provided useful guidance concerning the selection of news sources, and the granularity of the analysis. The recommendations were integrated in the work done. Data were collected from publicly available news outlets, and content is not distributed directly to protect copyright. Only the intermediate data needed for the analysis will be provided to third parties, after minimizing them in accordance to the data minimization principle of (GDPR Article 5). News articles were collected from diversified newspapers, which lean toward different political orientations. This and the focus on aggregated results reduce the risk of mischaracterizing any of the mentioned entities or political orientations. Analyses such as the one proposed here contribute to making the public debate healthier and are protected by the right to freedom of expression in the European Union (EHCR Article 10). Moreover, all authors did their best to take an objective stance when analyzing the obtained results, and to base the claims made on reputable and objective sources. § PIPELINE IMPLEMENTATION DETAILS The amount of compute is shared as follow: § DATASET STATISTICS Due to the different availability of content from news outlets, the distribution of the number of articles among them is uneven. Figure <ref> is the distribution of articles over the 20 most frequent news outlets. The collection of news articles was also limited by the availability of historical articles on the media platforms. Figure <ref> shows how uneven is the distribution. One solution in the future could be to expand the number of articles for past years using CommonCrawl snapshots. § DESCRIPTION OF NEWS SOURCES Table <ref> provides a brief description of top-10 outlets and the other outlets mentioned in the text. These descriptions are based on existing works such as <cit.> and <cit.> § REPRESENTATIVES OF THE FRENCH POLITICAL ORIENTATIONS We complement the analysis of sentiment per political orientation from Subsection <ref> in Table <ref>. It lists the sentiment associated with the five most cited politicians belonging to each political orientation. § REPRESENTATION OF MENTIONS AND SENTIMENT FOR NEWS OUTLETS We build vectorial representations of news outlets to capture their objective and subjective perspectives on the political spectrum. The mentions vector stores the number of occurrences of each politician, while the sentiment vector encodes the average sentiment about each politician. We use the 1,000 most frequently mentioned politicians as support. To obtain a reliable representation, only politicians which are mentioned at least 10 times are kept in the sentiment vectors. Average representations of the corpus are also built and used as reference. 
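A minimal sketch of how such outlet representations could be assembled is given below; the per-mention sentiment encoding (-1, 0 or +1), the data layout and the function names are assumptions made for illustration, not the exact implementation. The cosine similarity helper anticipates the comparison with the corpus-level reference vectors described next.

```python
import numpy as np

def outlet_vectors(mention_records, support, min_mentions=10):
    """mention_records: iterable of (politician_id, sentiment) pairs for one outlet,
    with sentiment assumed to be encoded as -1, 0 or +1 per mention.
    support: the 1,000 most frequently mentioned politicians, in a fixed order."""
    index = {p: i for i, p in enumerate(support)}
    mentions = np.zeros(len(support))
    sentiment_sum = np.zeros(len(support))
    for politician, sentiment in mention_records:
        if politician in index:
            mentions[index[politician]] += 1
            sentiment_sum[index[politician]] += sentiment
    # Keep sentiment only for politicians mentioned at least `min_mentions` times.
    avg_sentiment = np.where(mentions >= min_mentions,
                             sentiment_sum / np.maximum(mentions, 1), 0.0)
    return mentions, avg_sentiment

def cosine_similarity(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))
```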
The individual representations are compared with average ones using cosine similarity, and ranked based on this measure. Figure <ref> presents the mentions and sentiment ranks for the top 10 sources. The lower the rank is, the more similar the source is to the average. For instance, lefigaro, one of the major French daily newspapers, is ranked second for both representations, this means that the distribution of entities and of their associated sentiment is very close to that of the entire corpus. A large gap between the two representations is observed for lemonde. This source conveys a sentiment which is close to the average, but the distribution of politician mentions is rather far from the average one. This is also the case for liberation and francetvinfo, which are rather consensual in terms of sentiment conveyed, but deviate from the average for mentions of politicians. We note that lemonde, lefigaro and liberation, three of the main French newspapers are ranked first, second and third in terms of sentiment representation. This occurs despite their different political leanings, which are center left, center right and left, respectively <cit.>. § TEMPORAL DYNAMICS OF POLITICIANS We present the temporal evolution of mentions of the ten most frequently mentioned politicians in Figures <ref> and <ref>. The detailed view of mentions from Figure <ref> illustrates the strong representation of Emmanuel Macron in the corpus. We also note the surge of mentions of Vladimir Putin in 2022, and that for Eric Zemmour in 2021. Sentiment changes in variable proportions in Figure <ref>. Stronger variations are explained by different factors. Joe Biden was initially absent from the corpus, then was presented negatively, but sentiment became more positive when his 2020 candidacy gained traction and he won the US presidential elections; Eric Zemmour was portrayed very negatively prior to 2021, but his image improved for some news outlets when his 2022 presidential run gained momentum; Vladimir Putin has an overall negative representation that has worsened since the invasion of Ukraine. Weaker variations are observed for Emmanuel Macron whose sentiment score is close to the average from Figure <ref>, except 2018, the year of the Yellow Vests movement, when his image was degraded. Nicolas Sarkozy has a constantly negative score which is probably explained by the array of legal actions opened against him. § TEMPORAL DYNAMICS OF POLITICAL ORIENTATIONS Figures <ref> and <ref> illustrate the temporal evolution of mentions and average sentiment associated with political orientations. This analysis complements the outlet-oriented one from Subsection <ref>. The distribution of mentions changes strongly between 2016 and 2017. 2016 is dominated by the center-left, which included the governing party at the time, and the center right. The central orientation gains momentum in 2017, when its party became majoritary, and is dominant between 2018 and 2020. The center right is well represented in 2021, when its primary elections for the 2022 presidential election took place and were frequently discussed in the media, with the center also being frequently mentioned. Then, the center again becomes dominant in 2022. The representation of the radical right became stronger in 2021 and 2022, when Marine Le Pen and Eric Zemmour played important roles in the campaign for the presidentioal election. 
A similar trend is observed for the radical left, whose main representative, Jean-Luc Mélenchon, was also well placed in the presidential race. The evolution of sentiment is presented in Figure <ref>. It confirms that left-wing orientations receive media coverage that is overall more favorable than that of their right-wing counterparts. The contrast is most marked for the radical orientations, with RL sentiment being positive during most years, and particularly in 2016 and 2022. Conversely, the sentiment toward the radical right is more negative in 2017, when Marine Le Pen lost the presidential election, and in 2021. The sentiment toward the center was most positive in 2017, the year when Emmanuel Macron won his first presidential run. However, this sentiment became negative in 2018, the year when the Yellow Vests movement erupted. § TEMPORAL DYNAMICS OF GENDER REPRESENTATION The gender-oriented analysis presented in Subsection <ref> is enriched with a presentation in Figure <ref> of the evolution of gender mentions over time. There is overall little variation of the gender imbalance over the studied period. A small reduction of this bias is noted between 2017 and 2020, but the proportion of female mentions decreases in 2020.
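The yearly gender balance behind Figure <ref> can be illustrated with a short sketch; the per-mention year and gender columns and the "F"/"M" coding are assumptions made for illustration, not the project's actual schema.

```python
import pandas as pd

def female_mention_share(mentions: pd.DataFrame) -> pd.Series:
    """mentions: one row per politician mention, with 'year' and 'gender' columns.
    Returns, for each year, the proportion of mentions referring to female politicians."""
    counts = mentions.groupby(["year", "gender"]).size().unstack(fill_value=0)
    return counts.get("F", 0) / counts.sum(axis=1)
```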
http://arxiv.org/abs/2408.12201v1
20240822082207
Prescribing positive curvature with conical singularities on $\mathbb S^2$
[ "Jingyi Chen", "Yuxiang Li", "Yunqing Wu" ]
math.DG
[ "math.DG", "math.AP" ]
Ultra-broadband non-degenerate guided-wave bi-photon source in the near and mid-infrared F. Roeder,^1,2,* A. Gnanavel,^1,2 R. Pollmann,^1,2 O. Brecht,^1,2 M. Stefszky,^1,2 L. Padberg,^1,2 C. Eigner,^2 C. Silberhorn,^1,2 B. Brecht^1,2 August 26, 2024 ===================================================================================================================================================== § ABSTRACT For conformal metrics with conical singularities and positive curvature on 𝕊^2, we prove a convergence theorem and apply it to obtain a criterion for nonexistence in an open region of the prescribing data. The core of our study is a fine analysis of the bubble trees and an area identity in the convergence process. § INTRODUCTION A real divisor 𝔇 on a compact surface Σ is a formal sum 𝔇=∑_i=1^mβ_i p_i, where β:=(β_1,⋯,β_m)∈^m and p_i∈Σ are distinct. The Euler characteristic of the pair (Σ,β) is defined to be χ(Σ,β):=χ(Σ)+∑_i=1^mβ_i. Let g_0 be a Riemannian metric on Σ. A conformal metric g on (Σ,g_0) is said to represent the divisor 𝔇 if g is a smooth metric away from p_1,⋯,p_m such that around each p_i there is an isothermal coordinate neighbourhood U_i w.r.t. g_0 with a coordinate z_i such that z_i(p_i)=0 and g is in the form g=e^2v |z_i|^2 β_ig_, where v ∈ C^0(U_i) ∩ C^2(U_i\{p_i}). The point p_i is called a conical singularity of angle θ_i=2π (β_i+1) if β_i>-1 and a cusp if β_i=-1. Finding a conformal metric g=e^2ug_0 representing 𝔇 with a prescribed function K on Σ is equivalent to solving -Δ_g_0 u= K e^2u -K_g_0-2 π∑_i=1^mβ_iδ_p_i in the sense of distribution, where δ_p_i is the Dirac measure at p_i. When χ(Σ,β)>0 and 𝔇 consists of conical singularities, existence of a conformal metric representing 𝔇 is known in a few cases. For instance, it is shown that (<ref>) admits a solution if 0<χ(Σ,β)<2min_i{1,β_i+1} (which is called the Trudinger constant), β_1,...,β_m>-1 and K is positive somewhere <cit.> and if χ(Σ,β)>2, the genus of Σ is at least 1, β_1,...,β_m>0 and β not in certain lattice, K is a positive Lipschitz continuous function <cit.>. In <cit.>, F. Luo and G. Tian established the uniqueness of Troyanov's solution to Liouville's equation on the punctured complex plane. For Liouville type equations, uniqueness and symmetry of solutions are studied in <cit.>, etc. and existence via a topological degree theory in <cit.>; we refer the reader to the reference therein for many other works. For the spherical metric of constant curvature on 𝕊^2, important classifications of conformal metrics with conical singularities has been achieved. It is shown that a solution cannot have exactly one singularity by W.X. Chen and C.M. Li <cit.>, a solution with exactly two conical singularities must satisfy a strong rigidity by M. Troyanov <cit.>, and A. Eremenko <cit.> treated three conical singularities. We will use the first two cases in our proof of results stated below. When the singular points {p_1,...,p_m} are not fixed on 𝕊^2, the question of finding a spherical metric with prescribed angles at m conical singularities has been studied, in particular, under a constrain on the holonomy representation: π_1(𝕊^2\{p_1,...,p_m}) → SO(3), in <cit.>. Inspired by these works, although the points in 𝔇 are fixed in our consideration, we introduce a function to describe the type of divisors in our interest: First, let 𝔽: ℝ^m →𝒫({1,⋯,m}) be an index function for certain half spaces of ℝ^m where 𝒫 stands for the power set, given by 𝔽( (v_1, ⋯,v_m) )={i:v_i≤1/2∑^m_k=1v_k}. 
Then, we define 𝒜_m={β=(β_1,⋯,β_m) ∈ℝ^m:-1<β_1,⋯,β_m≠ 0, 1/2χ(𝕊^2,β) ∈ (0,+∞) \ℕ, and for ∀ J⊂𝔽(β), 1/2χ(𝕊^2,β)-∑_j∈ Jβ_j≠ |J|, |J|+1, ⋯, |J|+[ 1/2χ( 𝕊^2 ) ]}. 𝒜_m={β=(β_1,⋯,β_m): β_i>-1,β̅>-2,β_i≠ 0, 1/2β̅-b≠j,j+1,⋯,j+ṽ for all j=0,1,⋯, #𝔽_j(β) and b∈Γ_j(β)}. For example, when 0<χ(S^2,β)<2, i.e. -2<β̅<0, we have β̃=0,𝔽(β)={β_i:β_i<0}. Note that #𝔽(β)≤ m-1. Since β̅/2≠ 0, β∈𝒜_m if 1/2β̅-b≠ j for any j=1, 2, ⋯, #𝔽(β). The definition above is very clear since by Cor 3.8, we know that we should eliminate all the following: 𝒢(representing the number of smooth bubbles) + ℬ(representing the number of singular bubbles) =1+1/2 Sum(β)-η where η∈Γ_ℬ(β). Moreover, we have the estimates: 𝒢≤ 1+1/2Sum(β). The following are the original definitions: 𝒜_m={ (a_1,⋯,a_m)∈ℝ^m:a_i>-1,1/2∑_i=1^m a_i≠ (s-1)+∑_j=1^sa_i_j, for any 1 ≤ i_1<i_2<⋯<i_s≤ m with 0<s<m and a_i_j<0 }, ℬ_m={ (a_1,⋯,a_m)∈ℝ^m:a_i>-1, 1/2∑_i=1^m a_i≠ t+∑_j=1^sa_i_j, for any 1 ≤ i_1<i_2<⋯<i_s≤ m with 0≤ s ≤ m and any t ∈ℕ}. This is equivalent to say 𝒜_m (origininal) = ⋃_ℬ=1^m-1𝒜_m(0, ℬ ), 𝒜_m (origininal) = ⋃_0 ≤𝒢≤ 1+1/2 Sum(β), 0 ≤ℬ≤ m, 𝒢+ℬ≥ 1𝒜_m(𝒢 , ℬ ). There are divisors on 𝕊^2 with β in 𝒜_m that are not representable by the spherical metric (cf. <cit.>); in other words, (<ref>) with such β and K=K_g_0=1 admits no solutions. The nonexistence persists in an open neighbourhood of (β,1) as a special case of our results in this paper. Let K lie in the set C^+(𝕊^2) of positive continuous functions on 𝕊^2. Suppose that β∈𝒜_m and one of the following assumptions holds: (𝒪_1) 0<χ(𝕊^2,β)<2; (𝒪_2) β_i≤ 1 for any i ∈{1,⋯,m}. If the equation -Δ_𝕊^2 u=Ke^2u-2π∑_i=1^mβ_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2,g_𝕊^2), then there exists a neighbourhood 𝒰 of (K,β) in C^+(𝕊^2) × (-1,∞)^× m such that for any ( K̃,β̃ ) ∈𝒰, -Δ_𝕊^2 u=K̃ e^2u-2π∑_i=1^mβ̃_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2, g_𝕊^2). Assume β =(β_1,⋯,β_m) ∈𝒜_m and K is in the set C^+(𝕊^2) of positive continuous functions on 𝕊^2. If the equation -Δ_𝕊^2 u=Ke^2u-2π∑_i=1^mβ_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2,g_𝕊^2), then there exists a neighbourhood 𝒰 of (K,β) in C^+(𝕊^2) × (-1,∞)^× m such that for any ( K̃,β̃ ) ∈𝒰, -Δ_𝕊^2 u=K̃ e^2u-2π∑_i=1^mβ̃_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2, g_𝕊^2). In fact, stronger regularity on K allows us to drop the assumption (𝒪_1) and (𝒪_2) in Theorem <ref>: Let K lie in the set C^1,+(𝕊^2) of positive C^1-smooth functions on 𝕊^2. Suppose β =(β_1,⋯,β_m) ∈𝒜_m. If the equation -Δ_𝕊^2 u=Ke^2u-2π∑_i=1^mβ_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2,g_𝕊^2), then there exists a neighbourhood 𝒰 of (K,β) in C^1(𝕊^2) × (-1,∞)^× m such that for any ( K̃,β̃ ) ∈𝒰, -Δ_𝕊^2 u=K̃ e^2u-2π∑_i=1^mβ̃_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2, g_𝕊^2). A key ingredient in proving Theorem <ref> is the compactness statement in Theorem <ref> below. Fix m distinct points p_1,...,p_m on 𝕊^2. Let β^k=(β^k_1,...,β^k_m)∈ℝ^m with β^k→β∈ℝ^m as k→∞ and 𝔇^k=β_1^kp_1+⋯+β_m^kp_m. For a sequence of conformal metrics g_k=e^2u_kg_𝕊^2 representing 𝔇^k with curvature K_k→ K>0, the theorem says that (up to passing to subsequences) either the sequence converges to a conformal metric with curvature K representing a divisor 𝔇=∑^m_i=1(β_i -2n_i)p_i where n_i∈ℕ∪{0}, or the sequence collapses to 0-measure but after normalization it converges to a conformal metric representing a nontrivial divisor (possibly different from 𝔇) on 𝕊^2 with curvature 0. Suppose that the curvature measures _g_k of g_k (or a subsequence under consideration) converge weakly. 
The function Θ(x):𝕊^2→ℝ defined by Θ(x )=lim_r → 0lim_k →∞𝕂_g_k( B_r^g_𝕊^2(x) ). reveals the curvature concentration at x and we can show that {x: Θ(x) ≠ 0 } has only finite elements, hence the sum ∑_x ∈𝕊^2Θ(x) δ_x is well-defined. The value of Θ(x) can be calculated precisely by analyzing the bubble tree structure carefully (see Proposition <ref>, Proposition <ref> and Proposition <ref>). Let p_1, ... , p_m be distinct points on 𝕊^2. Suppose that g_k=e^2u_kg_𝕊^2 and u_k satisfies -Δ_𝕊^2 u_k=K_k e^2u_k-1-2 π∑_i=1^mβ_i^kδ_p_i, in the sense of distribution, where (A1) K_k∈ C^0(𝕊^2) and K_k→ K>0 in C^0(𝕊^2), (A2) β_i^k→β_i>-1, (A3) χ(𝕊^2,β)=2+∑_i=1^mβ_i>0. Then after passing to a subsequence one of the following holds: (a) If u_k→ u weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2), then u solves -Δ_𝕊^2 u=K e^2u+∑_x ∈𝕊^2Θ(x) δ_x-1. {u_k} can only has bubbles at p_i with β_i>1, and the bubble tree at each such point has only one level. Furthermore, the total number of bubbles, saying s, has an upper bound: s ≤1/2χ(S^2,β). u_k converges weakly to some u in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2), and there exist nonnegative integers n_i, i=1,⋯,m such that -Δ_𝕊^2 u= K e^2u +4 π∑_i=1^m n_iδ_p_i-2π∑_i=1^mβ_i δ_p_i-1. (b) If u_k-c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞, then -Δ_𝕊^2 v=∑_x ∈𝕊^2Θ(x) δ_x-1. Each bubble tree has at most two levels, and s_1-s_2=1/2χ(S^2,β)-∑_{ i: {u_k} has a singular bubble at p_i }β_i, where s_1 and s_2 denote the total number of bubbles of {u_k} at 1-level and 2-level, respectively. u_k-c_k converges weakly to some v in ∩_p∈ [1,2)W^1,p(𝕊^2,g_𝕊^2), where c_k is the average of u_k and c_k→ -∞, and there are m',m”,m”'∈ℕ∪{0} such that -Δ_𝕊^2 v = 4 π∑_j=1^m' ( β_i_j+n_j' ) δ_p_i_j+ 4 π∑_j=1^m” n_j”δ_ p_i_j+m' +4 π∑_j=1^m”' n_j”' δ_q_j -2 π∑_j=1^mβ_jδ_p_j-1, where 1-β_i_j/2<n'_j∈ℤ, n_j”, n_j”' ∈ℕ∪{0}, q_j∈𝕊^2\{p_1, ⋯,p_m}. We will show that (a) in the above theorem does not happen and the bubble trees have only one level, provided one of the following holds: * χ(𝕊^2,β)<2 * β_i≤ 1 for any i ∈{1,⋯,m}. For these two cases, the number of bubbles is bounded by the divisor: #{bubbles}≤ 1+1/2∑_i=1^m |β_i|. Convergence of solutions without divisors has been studied extensively, for example, in <cit.>, etc. For conformal metrics representing divisors, it is shown in <cit.> that if {u_k} has at least one bubble and K_k→ K in C^1(𝕊^2) (compare to C^0(𝕊^2) in Theorem <ref>), then (b) in Theorem <ref> holds and v solves -Δ_𝕊^2 v=∑_j=1^s_1 ( 4 π +4 πβ_i_j) δ_p_i_j+∑_j=1^s_2 4 πδ_q_j-2 π∑_i=1^mβ_iδ_p_i-1. Using the techniques developed in this paper, we can give a detailed description on the bubble development under the stronger assumption K_k→ K in C^1(𝕊^2): the sequence {u_k} can only develop bubbles at 1-level, u_k→-∞ almost everywhere, and the bubble tree structure can be described in a clear way (see the detailed argument in Proposition <ref>): * {u_k} has s_i smooth bubbles at 1-level at some p_i with s_i=β_i+1; thus, if β_i∉ℕ, then {u_k} has no smooth bubbles at p_i; * {u_k} has one bubble with two singularities at 1-level at some p_i; * {u_k} has one smooth bubble at 1-level at some q_j∉{p_1, ⋯,p_m}. Finally, we would like to mention that Mazzeo-Zhu have developed a theory for moduli of metrics with divisors in <cit.> (see the reference therein). Acknowledgement. The first author would like to thank Professor Gang Tian for arranging the visits to BICMR in the summers of 2023 and 2024 in the course of this work. 
All authors are grateful for the wonderful research environment provided by Tsinghua University during their collaboration. § PRELIMINARY In this section, we collect a few definitions and results from <cit.> and <cit.> that will be used in this paper. Let Σ be a closed surface with a Riemannian metric g_0, the Gauss curvature K_g_0 and the area element dV_g_0. Let (Σ,g_0) be the set of Radon measures g_u=e^2ug_0 so that there is a signed Radon measure μ(g_u) for u∈ L^1(Σ,g_0) satisfying ∫_Σφ dμ(g_u) =∫_Σ(φ K_g_0 - u Δ_g_0φ)dV_g_0,∀φ∈ C^∞_0(Σ). We write dV_g_u= e^2u dV_g_0 and call the signed Radon measure _g_u= μ(g_u) the curvature measure for g_u. In an isothermal coordinate chart (x,y) for the smooth metric g_0, we can write g_0= e^2u_0g_ for some local function u_0. So any g∈(Σ,g_0) is locally expressible as g=e^2vg_, where v∈ L^1_(Σ) and -Δ v dxdy =_g_v as distributions where Δ=∂^2/∂ x^2+∂^2/∂ y^2. Let (Σ,g_0) be a surface and 𝔇=∑_i=1^m β_i p_i be a divisor on Σ. A Radon measure g=e^2ug_0∈ℳ(Σ,g_0) represents 𝔇 with curvature function K if _g=K e^2u dV_g_0 -2π∑_i=1^mβ_iδ_p_i, and K e^2u∈ L^1(Σ,g_0). In other words, in the sense of distributions, -Δ_g_0u=K e^2u-K_g_0-2π∑_i=1^mβ_iδ_p_i. By (<ref>), for a closed surface Σ we have 1/2π∫_ΣK e^2u dV_g_0=χ(Σ)+∑_i=1^mβ_i=χ(Σ,β). For simplicity, we use the notations: ℳ(Ω)=ℳ( Ω,g_ ) where Ω⊂ℝ^2 is a domain and ℳ(𝕊^2)=ℳ(𝕊^2, g_𝕊^2) where g_𝕊^2 is the metric on 𝕊^2 of curvature 1. <cit.> Let μ be a signed Radon measure on D. Suppose that -Δ u=μ holds weakly and u_L^1(D)<γ. Then (1) u ∈ W^1,q( D_1/2 ) for any q ∈ [1,2). Moreover, r^1-2/q∇ u_L^q(D_r(x))≤ C(q) (u_L^1(D)+|μ|(D)),∀ D_r(x)⊂ D_1/2; (2) if |μ|(D)<τ<2π, then for any p < 4 π/τ, there is β=β(τ,γ,p) such that ∫_ D_1/2 e^p|u|≤β. <cit.> Let {μ_k } be a sequence of signed Radon measures on D with |μ_k|(D)<ϵ_0< π. Suppose that -Δ u_k=μ_k holds weakly with ∇ u_k_L^1(D)<Λ and (D,g_k)<Λ'. Then after passing to a subsequence, one of the following holds: (1) u_k→ u weakly in W^1,p(D_1/2) for any p ∈ [1,2) and e^2 u_k→ e^2 u in L^q(D_1/2) for some q>1; (2) u_k→ -∞ for a.e. x and e^2u_k→ 0 in L^q(D_1/2) for some q>1. Let c_k be the mean value of u_k on D_1/2. By the Poincaré inequality and Sobolev inequality, for any q ∈ [1,2), u_k-c_k_ L^q(D_1/2 ) ≤ C. Combining with Lemma <ref>, we know that u_k-x_k is a bounded sequence in ∩_p ∈ [1,2) W^1,p(D_1/2). After passing to a subsequence, we may assume u_k-c_k converges weakly to some function u_∞ in ∩_p ∈ [1,2) W^1,p(D_1/2). We choose p_0∈ (2,4 π/ϵ_0), then by Lemma <ref>, ∫_D_1/2 e^p_0 |u_k-c_k| ≤β(ϵ_0,Λ,p_0), then for q ∈ [1,p_0/2), e^2(u_k-c_k) converges to e^2u_∞ in L^q(D_1/2). Since (D,g_k)<Λ', then by Jensen's inequality, after passing to a subsequence, either c_k→ c_∞>-∞ or c_k→ -∞. If c_k converges to c_∞, we set u=u_∞+c_∞, then e^2u_k converges to e^2u in L^q(D_1/2 ); if c_k→ -∞, then e^2u_k converges to 0 in L^q(D_1/2 ). <cit.> Let μ be a signed Radon measure defined on a closed surface (Σ,g_0) and u ∈ L^1(Σ,g_0) solves -Δ_g_0 u= μ weakly. Then, for any r>0 and q ∈ [1,2), there exists C=C(q,r,g_0) such that r^q-2∫_ B_r^g_0(x) |∇_g_0 u |^q≤ C (|μ|(Σ) )^q. §.§ Gauss-Bonnet formula Let g = e^2ug_∈ℳ(D). There is a set E⊂ (0,1) such that the 1-dimensional Lebesgue measure of (0,1)\ E is zero (<cit.>) and ∫_∂ D_t∂ u/∂ r is well-defined for all t ∈ E. Furthermore, _g(D_t\ D_s) = -∫_∂ D_t∂ u/∂ r + ∫_∂ D_s∂ u/∂ r, _g(D_t) = -∫_∂ D_t∂ u/∂ r for all t, s ∈ E. The latter equation notably implies the convergence: lim_t → 0, t ∈ E∫_∂ D_t∂ u/∂ r = -_g({0}). 
The set {t : _g(∂ D_t) ≠ 0} is countable, allowing to pick E with _g(∂ D_t) = 0, for all t ∈ E. Let g_k = e^2u_kg_∈ℳ(D) and assume that u_k→ u weakly. Set g = e^2ug_. We select a subset 𝒜⊂ (0,1) with the following properties: * (0,1) \𝒜 has zero (1 dimensional Lebesgue) measure; * _g_k(∂ D_t) = _g(∂ D_t) = 0 for all t ∈𝒜 and for all k; * (<ref>) holds for any t ∈𝒜. From <cit.>, _g_k(D_t) →_g(D_t) for all t ∈𝒜, in turn ∫_∂ D_t∂ u_k/∂ r→∫_∂ D_t∂ u/∂ r, for all t ∈𝒜. Now, we assume _g(∂ D_t) = 0 for any t. As t→ s, we observe _g(D_t\ D_s) → 0. Let Φ_u(t) = ∫_∂ D_t∂ u/∂ r, t ∈𝒜. For any t_k ∈𝒜 with t_k → t_0, Φ(t_k) is a Cauchy sequence. Consequently, we can extend the domain of Φ from 𝒜 to (0,1) by defining: Φ_u(t) = lim_s ∈𝒜, s → tΦ_u(s). This extension ensures Φ_u is a continuous function on (0,1), and we establish the relationships: _g(D_t\ D_s) = -Φ_u(t) + Φ_u(s), _g(D_t) = -Φ_u(t), for all s, t ∈ (0,1). We now discuss terminologies needed to describe the bubble-tree analysis. Let g_k=e^2u_kg_∈ℳ(D) with |_g_k|(D)<Λ. A sequence { (x_k,r_k) } is a blowup sequence of {u_k} at 0 if x_k→ 0, r_k→ 0 and u_k'=u_k(x_k+r_kx)+log r_k converges weakly to a function u in ∩_p∈[1,2)W^1,p_(^2), and u is called a bubble. The sequence {u_k} has no bubble at 0 if no subsequence of {u_k} has a blowup sequence at 0. We say u_k' converges to a ghost bubble if there exists c_k→ -∞ such that u_k'-c_k converges weakly to some function v in ∩_p∈[1,2)W^1,p_(^2). The blowup is designated for invariance of area: (D_R, e^2u'_k g_)=(D_Rr_k(x_k),e^2u_k g_). The ghost bubble has the following property: lim_r → 0lim_k →∞∫_ D_1/r∖∪_z ∈𝒮 D_r(z) e^ 2u_k' =0, where 𝒮 is the set consisting of measure-concentration points: = {y: μ(y) ≥ϵ_0/2}. where μ is the weak limit of |𝕂_ e^ 2 (u_k'-c_k) g_| and ϵ_0 is chosen as in Theorem <ref>. At a point, the sequence {u_k} may have more than one blowup sequences, we distinguish them according to the following definitions. Two blowup sequences {(x_k,r_k)} and {(x_k',r_k')} of {u_k} at 0 are essentially different if one of the following happens r_k/r_k'→∞, r_k'/r_k→∞, |x_k-x_k'|/r_k+r_k'→∞. Otherwise, they are essentially same. We say the sequence {u_k} has m bubbles if {u_k} has m essentially different blowup sequences and no subsequence of {u_k} has more than m essentially different blowup sequences. For two essentially different blowup sequences {(x_k,r_k)}, {(x_k',r_k')}, we say { (x_k',r_k') } is on the top of { (x_k,r_k) } and write { (x_k',r_k')}<{ (x_k,r_k)}, if r_k'/r_k→ 0 and x_k'-x_k/r_k converges as k→∞. (1) If { (x_k',r_k') }<{ (x_k,r_k) }, we have l_k:=r_k'/r_k→ 0 and y_k:=x_k'-x_k/r_k→ y. If we set v_k=u_k(x_k+r_kx)+log r_k and v_k'=u_k(x'_k+r_k'x)+log r_k', we can verify v_k'(x)=v_k (l_kx+y_k)+log l_k. Then { (y_k,l_k) } is a blowup sequence of { v_k } at y, and the limit of v_k' can be considered as a bubble of { v_k }. (2) If two essentially different blowup sequences { (x_k,r_k)}, { (x_k',r_k') } are not on the top of each other, then we must have |x_k-x_k'|/r_k+r_k'→∞ and separation of domains: for any t>0, when k is sufficiently large it holds D_tr_k(x_k)∩ D_tr_k'(x_k')=∅. { (x_k',r_k') } is right on the top of { (x_k,r_k) }, if { (x_k',r_k') }<{ (x_k,r_k) } and there is no { (x_k”,r_k”) } with { (x_k',r_k') }<{ (x_k”,r_k”) }<{ (x_k,r_k) }. We define the level of a blowup sequence and its corresponding bubble as follows: * A blowup sequence { (x_k,r_k) } of { u_k} at the point 0 is at 1-level if there is no { (x_k',r_k') } with { (x_k,r_k) } < { (x_k',r_k')}. 
For m ≥ 2, a blowup sequence { (x_k,r_k) } is at m-level if there exists a blowup sequence { (x_k',r_k') } which is at (m-1)-level such that { (x_k,r_k) } <{ (x_k',r_k') }. * For m ≥ 1, a bubble is at m-level if it is induced by a blowup sequence at m-level. Two essentially different blowup sequences at m-level may be right on the top of two essentially different blowup sequences at (m-1)-level. By Remark <ref>, if { (x_k^1,r_k^1) },{ (x_k^2,r_k^2) } are at the same level, then for any t>0, when k is large D_tr_k^1(x_k^1)∩ D_tr_k^2(x_k^2) =∅ Let g=e^2ug_∈ℳ( D \{0}) with |𝕂_g|( D \{0} ) <ϵ_1<π. for some ϵ_1. Extend _g to a signed Radon measure μ by taking μ(A)=𝕂_g(A ∩ (D \{0})), ∀ A ⊂ℝ^2 and write μ=_g⌊ (D\{0}). By <cit.>, we have the following decomposition: u(z)=I_μ(z)+ λlog|z|+w(z), where I_μ(z)=-1/2 π∫_ℝ^2log |z-y| d μ(y), λ=lim_ a.e r→ 01/2 π∫_∂ D_r∂ u/∂ r, and w is a harmonic function on D\{0} with ∫_∂ D_t∂ w/∂ t=0. Hence, we can find a holomorphic function F on D\{0} with (F)=w, which means (u-I_μ)(z)=(F(z))+λlog |z|, z ∈ D \{0}. <cit.> (1) If the area of D is finite, namely, ∫_D e^2u<∞, then w is smooth on D, and λ≥ -1. Moreover, g∈ℳ(D) with _g=μ-2πλδ_0. (2) If we further assume 𝕂_g≥ 0 on D \{0}, then λ>-1. (1) For g=e^2ug_∈ℳ(D), the residue of g (or u) at 0 is (g,0)=(u,0)=-1/2π_g({0}). (2) For g=e^2ug_∈ℳ(D \{0}) satisfying |𝕂_g|(D \{0})+∫_D e^2u<∞, the residue of g (or u) at 0 is (g,0)=(u,0)=-1/2π_g({0}). (3) For g=e^2ug_∈ℳ(^2\ D) satisfying |_g|(^2\ D)+∫_^2\ D e^2u<∞, the residue of g (or u) at ∞ is (g,∞)=(u,∞)=-1/2π_g'({0}) where g'=e^2u'g_∈ℳ(D\{0}) with u'(x')=u(x'/|x'|^2)-2log|x'|, x'=x/|x|^2 is extended in ℳ(D). Since lim_a.e. r→ 0∫_∂ D_r∂ u'/∂ r=2π(g',0)=2π(u,∞), then by direct calculations, lim_a.e. r→ 0∫_∂ D_1/r∂ u/∂ r=-2π(2+(u,∞)). <cit.> Let g_k=e^2u_kg_∈ℳ(D) with |_g_k|(D)+(D,g_k)≤Λ. Assume that u_k converges to u weakly in W^1,p(D) for some p∈[1,2) and {u_k} has a blowup sequence {(x_k,r_k)} at 0 with the corresponding bubble u'. Then there exists t_i→ 0, such that lim_i→∞lim_k→∞_g_k(D_t_i(0)\ D_r_k/t_i (x_k))=-2 π(2+(u,0)+(u',∞)). The objective of this section is to analyze the area in various regions arose in a blowup procedure. The main observation is the conservation of area in the limiting process when curvatures are uniformly bounded away from zero. Let c_k as the mean value of u_k on D_1/2, then by the Poincaré and Sobolev inequalities and (P3), after passing to a subsequence, u_k-c_k converges weakly in ∩_p ∈ [1,2) W^1,p(D_1/2). By (P2) and Jensen's inequality, c_k<+∞. If c_k is bounded below, we may assume u_k converges weakly in ∩_p ∈ [1,2) W^1,p(D_1/2) to some u and define g_∞=e^2u g_. If c_k→ -∞, we define (D_1/2,g_∞)=0. If no bubble occurs then there is no area concentration point: <cit.> Let g_k=e^2u_k g_∈ℳ(D) which satisfies (P1)-(P3). Assume that {u_k} has no bubble. Then lim_r → 0lim_k →∞( D_r, g_k)=0. If the curvature does not change sign then convergence of solutions occurs: <cit.> Let g_k=e^2u_kg_∈ℳ(D) with _g_k=f_ke^2u_kdx-2πβ_kδ_0. Assume (1) (D,g_k)≤Λ_1; (2) r ∇ u_k_L^1(D_r(x))≤Λ_2 for all D_r(x) ⊂ D; (3) -Λ_3 ≤ f_k≤ -1 or 1≤ f_k≤Λ_3, and f_k converges to f for a.e. x∈ D; (4) β_k→β. Assume { u_k } has no bubble. Then, after passing to a subsequence, one of the following holds: (a) u_k→ u weakly in ∩_p ∈ [1,2) W^1,p(D_1/2) and e^2u_k→ e^2u in L^1(D_1/2), and -Δ u=fe^2u-2πβδ_0; (b) u_k→ -∞ a.e. and e^2u_k→ 0 in L^1(D_1/2). 
The following area identity plays a key role in this paper: <cit.> Let g_k =e^2u_k g_∈ℳ(D) satisfy (P1) _g_k = f_k dV_g_k + λ_k δ_0 (where λ_k might be 0) with f_k ≥ 1 or f_k ≤ -1; (P2) |_g_k|(D) + (D,g_k) ≤Λ_1; (P3) r^-1∇ u_k_L^1(D_r(x))≤Λ_2 for all D_r(x) ⊂ D. We assume {u_k} has finitely many bubbles v_1, ⋯, v_m, induced by blowup sequences {(x_k^1,r_k^1) }, ⋯, { (x_k^m,r_k^m)} at the point 0, respectively. and we further assume that {u_k} has no bubbles at any point x ∈ D \{0}. Then after passing to a subsequence, lim_k →∞( D_1/2,g_k)=(D_1/2,g_∞)+∑_i=1^m (ℝ^2,e^2v_ig_). We allow m=0, which means {u_k} has no bubbles at 0, and the sum term in (<ref>) vanishes for this case. § POSITIVE CURVATURES ON 𝕊^2 WITH DIVISORS In this section, we investigate the bubble tree convergence of a sequence of conformal metrics with conical singularities on a sphere. §.§ Bubbles at 1-level In this subsection, we establish a result concerning a property of bubbles at 1-level on a disk, which will play a crucial role in the subsequent discussions. Let g_k=e^2u_k g_∈ℳ(D) and let u_k∈⋂_p∈[1,2)W^1,p(D) solve -Δ u_k = K_k(x)e^2u_k - 2πλ_kδ_y_k, under the following assumptions: (a) There exists Λ_1 >0 such that (D,g_k) ≤Λ_1; (b) There exists Λ_2 >0 such that r^-1∇ u_k_L^1(D_r(x))≤Λ_2 for all D_r(x) ⊂ D; (c) {u_k} converges to u weakly in ⋂_p∈[1,2) W^1,p(D); (d) K_k converges to K in C(D), where K>0; (e) The sequences {y_k} and {λ_k} converge to y and λ∈(-1,∞), respectively. Then, we have: (1) {u_k} converges weakly in W^2,p_loc(D\{y}) for any p>1; (2) If { u_k } has a blowup sequence { (x_k,r_k) }, then x_k→ y, y_k∈ D_t(y)\ D_r_k/t(x_k) for any fixed t>0 and large k, and the corresponding bubble is smooth; (3) If λ≤ 1, then { u_k } has no bubble in D. Step 1: We show that u_k converges weakly in W^2,p_loc(D\{y}) for any p>1. We assume that |_g_k| converges weakly to a Radon measure μ and define ={x:μ({x})>ϵ_0/2}, where ϵ_0 is chosen as in Theorem <ref>. By Theorem <ref>, e^2u_k is bounded in L^q_1(Ω) for some q_1>1 and any Ω⊂⊂ D\. By the L^p-estimate <cit.>, u_k→ u weakly in W^2,p_loc(D\) for any p ≥ 1. Thus, to see u_k→ u in W^2,p_loc(D\{y}), it suffices to show \{y}=∅. We assume that there exists ỹ∈\{y}, and select δ such that D_2δ(ỹ)∩(∪{y})={ỹ}. For large k, the equation satisfied by u_k on D_2δ(ỹ ) is -Δ u_k=K_ke^2u_k. If u_k is bounded from above in D_2δ(ỹ ), then u_k is bounded in W^2,p(D_δ(ỹ )) for any p. This implies that K_ke^2u_k converges in C((D_δ(ỹ )), which contradicts the choice that ỹ is in . Thus, we must have max_D_δ(ỹ ) u_k→ +∞. Let c_k=max_D_δ(ỹ )u_k=u_k(z_k),ρ_k=e^-c_k, and define ũ_k(x)=u_k(z_k+ρ_kx)+logρ_k. Recalling that u_k→ u weakly in W^2,p_loc(D_2δ\{ỹ}) for any p, we conclude z_k→ỹ (otherwise we will obtain a contradiction to the choice of z_k). Since ũ_k(0)=max_D_Rũ_k, applying <cit.> and <cit.>, we conclude ũ_k_W^2,p(D_R)<C(R) for any R. Then, ũ_k converges weakly in W_loc^2,p(ℝ^2) to a function ũ which satisfies -Δũ=K(ỹ)e^2ũ,ũ(0)=max_ℝ^2ũ=0,∫_^2e^2ũ<∞. By <cit.>, we have ũ(x)=-log(1+K(ỹ )/4|x|^2),and∫_^2K(ỹ )e^2ũ=4π. Thus, e^2ũg_ is the metric with constant curvature K(ỹ ) on 𝕊^2\{∞}. So ∞ is not a singularity hence (ũ,∞)=0. By Proposition <ref>, we obtain lim inf_r→ 0lim_k→∞_g_k(D_r(ỹ )\ D_r_k/r(z_k)) ≤ -2 π(2+(u,ỹ)+(ũ,∞))<0. However, since _g_k=K_ke^2 u_kdx on D_r(ỹ )\ D_r_k/r(z_k) and K_k>0, we arrive at a contradiction. Step 2: Assume that {u_k} has a blowup sequence {(x_k,r_k)} at some point x_0∈ D. 
We claim that for any fixed r>0, y_k lies in the neck region D_r(x_0) \ D_r_k/r(x_k). Suppose u_k'=u_k(x_k+r_kx)+log r_k→ u' weakly in ∩_p ∈ [1,2) W_^1,p(ℝ^n). If y_k∉ D_r(x_0)\ D_r_k/r(x_k), then u_k has no singularity in D_r(x_0)\ D_r_k/r(x_k). By Proposition <ref>, 0≤lim inf_r→ 0lim inf_k→∞_g_k(D_r(x_0)\ D_r_k/r(x_k))≤-2π(2+(u,x_0)+(u',∞)), which leads to a contradiction since (u,x_0)>-1 and (u',∞)>-1. Consequently, x_0=y, |-y_k-x_k/r_k | →∞. In turn, for any r>0, u_k' solves the following equation on D_r when k is sufficiently large: -Δ u_k'=K_k( x_k+r_k x ) e^2u_k'. Applying similar argument as in Step 1, u_k'→ u' weakly in W^2,p_loc(ℝ^2) for any p>1, where u' satisfies -Δ u'=K(x_0)e^2u',∫_^2e^2u'<∞. Applying the result in <cit.> again, we deduce that (u',∞)=0. By Proposition <ref>, -2πλ ≤lim inf_r→ 0lim inf_k→∞_g_k(D_r(y)\ D_r_k/r(x_k)) ≤-2π(2+(u,y)+(u',∞))<-2 π which implies that λ>1. Therefore, if λ≤ 1, {u_k} has no bubble in D. Synge's theorem tells us that a Riemannian surface with positive Gauss curvature is simply-connected and has no closed minimizing geodesics. Heuristically, Lemma <ref> (2) can be viewed as a version of Synge's theorem on surfaces with positive curvature measure: if (2) does not hold, the neck which joins the bubble part and the base part has no singularities so it will eventually disappear in the blowup process, and it looks like the middle part of a dumbbell, which is “thin". Hence a “closed minimizing geodesic" will occur, see Figure I. .6 [domain=30:330, samples=300] plot (4+2*cos(), 1.6*sin()); (5.73,0.8) arc[start angle=-129.77399,end angle=-39.18002, radius=1.46280]; (12,0) arc[start angle=16.8106,end angle=136.40540, radius=2.49779]; (7.8,-1) arc[start angle=-136.40540,end angle=-16.8106, radius=2.49783]; (7.8,-1) arc[start angle=39.18002,end angle=129.77399, radius=1.46305]; [dashed] (7,-0.5) arc[start angle=-43.59587,end angle=43.59587, radius=0.72511]; (7,0.5) arc[start angle=136.3876,end angle=223.6124, radius=0.72489]; (12,0) circle (2pt) node[right] P_i; Figure I. There is a closed minimizing geodesic. 0.6 [domain=30:330, samples=300] plot (4+2*cos(), 1.6*sin()); (5.73,0.8) arc[start angle=-129.77399,end angle=-39.18002, radius=1.46280]; (11,0) arc[start angle=2.0217,end angle=143.26971, radius=1.77692]; (7.8,-1) arc[start angle=-143.26971,end angle=-2.0217, radius=1.77704]; (7.8,-1) arc[start angle=39.18002,end angle=129.77399, radius=1.46305]; (7,0.5) arc[start angle=122.82726,end angle=159.71268, radius=1.01248]; (6.6,0) arc[start angle=-159.75049,end angle=-122.93556, radius=1.01392]; [dashed] (7,-0.5) arc[start angle=-43.59587,end angle=43.59587, radius=0.72511]; (6.6,0) circle (2pt) node[left] P_i; Figure II. The singular point is in the neck region, shortest loops may not be smooth. §.§ The structure of bubble tree Let p_1, ⋯,p_m be distinct points on 𝕊^2. From now on, we set g_k=e^2u_kg_𝕊^2∈ℳ(𝕊^2) with _g_k=K_kdV_g_k-2π∑_i=1^mβ_i^kδ_p_i, in other words, u_k∈∩_p ∈[1,2)W^1,p(𝕊^2,g_𝕊^2) solves the equation -Δ_𝕊^2 u_k=K_ke^2u_k-2π∑_i=1^mβ_i^kδ_p_i-1 in the sense of distributions. Assume _g_k converge as measures. The curvature concentration function of {u_k} is Θ(x )=lim_r → 0lim_k →∞𝕂_g_k( B_r^g_𝕊^2(x) ). For the remainder of this section, we make the following assumptions: (A1) K_k is continuous and K_k→ K in C(𝕊^2), with K>0. (A2) β_i>-1, and β_i^k→β_i. (A3) χ(𝕊^2,β)=2+∑_i=1^mβ_i>0. Then, there exist constants 0<a<b depending on K and β_i such that a<(𝕊^2,g_k)<b. Let c_k be the mean value of u_k over (𝕊^2,g_𝕊^2). 
By Lemma <ref>, after passing to a subsequence, we assume u_k-c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2). By Jensen's inequality, c_k is bounded from above. If c_k is bounded, we may further assume u_k→ u weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2). We will use the curvature concentration function Θ to investigate the equations that u and v satisfy. Note. From now on, the phrase “for any x_0∈𝕊^2, we choose an appropriate isothermal coordinate system with x_0=0" means: the domain of u_k under this coordinate system is D ⊂ℝ^2, {u_k} has no bubbles on D \{x_0}, and if x_0∈{p_1, ⋯,p_m} then D ∩{ p_1⋯,p_m}={x_0}; if x_0∉{p_1, ⋯,p_m} then D ∩{ p_1, ⋯,p_m}=∅. We divide the proof of Theorem <ref> into following propositions. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are the solutions of ((<ref>)) satisfying (A1), (A2), (A3). If {u_k} has no bubble at x_0, then Θ(x_0)= {[ -2 πβ_i if x_0 =p_i∈{p_1,⋯,p_m},; 0 if x_0∉{p_1, ⋯, p_m}. ]. By choosing an appropriate isothermal coordinate system with x_0=0, u_k solves the following equation locally: -Δ u_k= {[ K_ke^2u_k-2 πβ_i^kδ_0 when x_0∈{p_1,⋯,p_m},; K_ke^2u_k when x_0∉{p_1, ⋯, p_m} . ]. Applying Proposition <ref>, if x_0∉{p_1,⋯,p_m}, Θ(x_0) =lim_r → 0lim_k →∞∫_D_r K_k e^2u_k≤ 2 K_C^0lim_r → 0lim_k →∞∫_D_r e^2u_k =0; if x_0 =p_i∈{p_1,⋯,p_m}, Θ(x_0) =lim_r → 0lim_k →∞( ∫_D_r K_k e^2u_k -2 πβ_i^kδ_0)=-2 πβ_i. Denote 𝒞={ x ∈𝕊^2: {u_k} has at least a bubble at x }∪{p_1, ⋯,p_m}. We know that 𝒞 consists of finite points, then by Proposition <ref>, the following sum is well-defined: ∑_x ∈𝕊^2Θ(x) δ_x=∑_x ∈𝒞Θ(x) δ_x. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are the solutions of (<ref>) satisfying (A1), (A2), (A3). Then (1) If u_k→ u weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2), then u solves -Δ_𝕊^2 u=K e^2u+∑_x ∈𝕊^2Θ(x) δ_x-1. (2) If u_k-c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞, then v solves -Δ_𝕊^2 v=∑_x ∈𝕊^2Θ(x) δ_x-1. Since the proofs of (1) and (2) are almost the same, we only present the proof for (1). At any fixed x_0∈𝕊^2, we choose an appropriate isothermal chart with x_0=0. Then for any φ∈𝒟(D), ∫_D ∇ u ∇φ =lim_k →∞∫_D∇ u_k∇φ =lim_k →∞∫_Dφ d_g_k =lim_r → 0lim_k →∞( ∫_D∖ D_r+∫_D_r) φ d_g_k = ∫_D φ Ke^2u+φ(0)lim_r → 0lim_k →∞_g_k(D_r) =∫_Dφ K e^2u+Θ(x_0) φ(0), which yields that locally u solves -Δ u=K e^2u+ Θ(x_0) δ_0=K e^2u+∑_x ∈ DΘ(x) δ_X. Therefore, u solves -Δ_𝕊^2 u=K e^2u+∑_x ∈𝕊^2Θ(x) δ_x-1. To obtain the explicit equations that u and v satisfy, we need to compute Θ(x_0) when {u_k} has at least one bubble at x_0. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are the solutions of (<ref>) satisfying (A1), (A2), (A3). Suppose u_k→ u weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2). If {u_k} has at least one bubble at some x_0∈𝕊^2, then x_0=p_i for some i ∈{1,⋯,m} satisfying β_i>1, all bubbles of {u_k} at x_0 are smooth and at 1-level. Moreover, if s is the number of bubbles at x_0, then Θ(x_0)=4 π s-2 πβ_i. If {u_k} has at least one bubble at some x_0∈𝕊^2, we choose an appropriate isothermal chart with x_0=0. Then the following equation holds on D: -Δ u_k= {[ K_ke^2u_k-2 πβ_i^kδ_0 when x_0∈{p_1,⋯,p_m},; K_ke^2u_k when x_0∉{p_1, ⋯, p_m} . ]. If x_0∉{p_1,⋯,p_m}, then {u_k} has no bubbles at 0 by Lemma <ref>(3), a contradiction. So x_0∈{p_1,⋯,p_m}. WLOG, we assume x_0=p_1. Then u_k satisfies -Δ u_k=K_k e^2u_k-2 πβ_1^kδ_0 on D. By Lemma <ref> (3), β_1>1. By Lemma <ref> (2), all bubbles at 1-level are smooth. We claim that there is no bubble at 2-level. 
Assume { (x_k,r_k) } is a blowup sequence at 1-level, then by Lemma <ref> (2), for any t>0, 0 ∈ D_t\ D_r_k/t(x_k) for sufficiently large k. Then for any r>0, u_k^'(x)=u_k(r_kx+x_k) +log r_k solves the following equation on D_r when k is sufficiently large: -Δ u_k^'=K_k(r_kx+x_k) e^2u_k^'. By Lemma <ref> (3), { u_k^'} has no bubble, which means that there does not exist bubbles at 2-level. Since each bubble is smooth with total curvature 4π, there exist only finitely many bubbles, saying { (x_k^1,r_k^1)}, ⋯, { (x_k^s,r_k^s) }. Then by Theorem <ref>, Θ(x_0) =lim_r→ 0lim_k→∞_g_k(D_r ) = lim_r→ 0lim_k→∞( ∫_D_r K_k e^2u_k-2 πβ_1^kδ_0) = K(x_0)lim_r→ 0lim_k→∞(D_r, g_k)-2πβ_1 =4π s-2πβ_1. 0.7 [line width=1.3pt] (2.1,1) circle (1pt); [line width=0.7pt] (2,1.1) circle (2); [line width=0.5pt] (1.3,1.5) circle (0.3); [line width=0.5pt] (2.3,0.3) circle (0.4); (2.4,1) node 0; (1.3,2.1) node D_Rr_k^1(x_k^1); (2.3,-.3) node D_Rr_k^1(x_k^2); (12,-1) arc(40:140:4); [line width=1.3pt] (8.9,0.42) circle (1pt); [rotate around=30:(8,2.25),line width=0.7pt] (8,2.25) ellipse (1 and 2); [rotate around=-27:(9.5,1.9),line width=1pt,opacity=0.3] (9.5,1.9) ellipse (1 and 1.6); (8.9,0) node 0; (7.5,3) node (S^2,g_v^1); (10,2.5) node (S^2,g_v^2); (10,-1) node (D,g_v); Local figure of Non-Collapsing Case: All bubbles of {u_k} at p_i with β_i >1 are smooth and at 1-level. 0.7 [rotate=-270] [line width=1.3pt]plot[smooth]coordinates(6,2)(5,0)(6,-1)(8,0)(7,1.2)(6,2); [shift=(-1,-0.3)] [rotate around=19:(6.8,3.4),line width=0.6pt] (6.8,3.4) ellipse (0.7 and 1.2); [rotate around=-27:(7.5,3.1),line width=1pt,opacity=0.2] (7.5,3.1) ellipse (0.6 and 0.9); [shift=(4.9, 1.9), rotate around=-40:(1,1),scale=0.5] [rotate around=-27:(9.5,1.9),line width=1pt,opacity=0.4] (9.5,1.9) ellipse (1 and 1.6); Global figure of Non-collapsing Case: All bubbles are smooth, and their south poles are attached to a singularity. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are solutions of (<ref>) satisfying (A1), (A2), (A3). Suppose u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞. If {u_k} has at least one bubble at some x_0∈𝕊^2, then one of the following holds: (1) x_0∉{p_1, ⋯,p_m} and {u_k} has s bubbles at 1-level at x_0. Each bubble is smooth and Θ(x_0)=4π s. (2) x_0= some p_i and {u_k} has s bubbles at 1-level at x_0. Each bubble is smooth and Θ(x_0)=4π s-2πβ_i. (3) x_0= some p_i and {u_k} has s bubbles at x_0 while only one of them is singular and the others are all smooth with Θ(x_0)=4 π s+2 πβ_i. (4) x_0= some p_i and { u_k^i} has s bubbles at 1-level and s^' bubbles at 2-level at x_0. Exactly one of the bubbles at 1-level is singular, say v', while the other bubbles at 1-level are smooth. All bubbles at 2-level are right on the top of v' and smooth with Θ(x_0)=4 π (s-s')+2 πβ_i. If {u_k} has at least a bubble at some x_0∈𝕊^2, we choose an appropriate isothermal coordinate system with x_0=0. Firstly, we show that there are only finitely many bubbles at 1-level. We assume { (x_k^1,r_k^1) }, ⋯, { (x_k^s,r_k^s)} are arbitrary s blowup sequences at 1-level (here we do not say there are only s blowup sequences at 1-level), then for any fixed R>0, D_Rr_k^i(x_k^i)∩ D_Rr_k^j(x_k^j)=∅, i≠ j, when k is sufficiently large. Then we may assume for any i ∈{2,⋯,s}, for any fixed R>0, 0 ∉ D_Rr_k^i(x_k^i) when k is sufficiently large. 
We set u_k^i=u_k(x_k^i+r_k^ix)+log r_k^i, which converges to a bubble v^i, then for any i ∈{2,⋯,s}, for any fixed R>0, u_k^i satisfies the equation -Δ u_k^i(x)=K_k(x_k^i+r_k^ix)e^2u_k^i(x), x∈ D_R, when k is sufficiently large. Applying Lemma <ref> (3) to u_k^i, { u^i_k } has no bubble, we conclude that u_k^i has no concentration, hence v^i is smooth. Therefore, there is at most one nonsmooth bubble at 1-level, so there are only finitely many bubbles at 1-level. Now we may assume that { (x_k^1,r_k^1)}, ⋯, { (x_k^s,r_k^s)} are exactly all blowup sequences at 1-level. We divide the argument into the following different cases. Case 1: x_0∉{p_1,⋯,p_m}. For this case, u_k solves -Δ u_k=K_k e^2u_k. Then for any i ∈{1,⋯,s} and any fixed R, 0 ∉ D_Rr_k^i(x_k^i) when k is large. So each v^i is smooth and there are no bubbles at 2-level. By Theorem <ref>, Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) =K(x_0)lim_r→ 0lim_k→+∞(D_r) =K(x_0)∑_i=1^s(^2,g_v^i) = 4 π s. Case 2: x_0∈{p_1,⋯,p_m}. WLOG, we assume x_0=p_1. Then -Δ u_k=K_k e^2u_k-2 πβ_1^kδ_0. Case 2.1: For any i ∈{1,⋯,s} and any fixed R>0, 0 ∉ D_Rr_k^i(x_k^i) when k is large. Then each v^i is smooth and no bubbles are at 2-level. By Theorem <ref>, Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) = K(x_0)lim_r→ 0lim_k→+∞(D_r)-2πβ_1 = K(x_0)∑_i=1^s(^2,g_v^i)-2πβ_1 =4π s-2πβ_1. 0.7 [line width=1.3pt] (1.9,0) circle (1pt); [line width=0.7pt] (2,0) circle (2); [line width=0.5pt] (1.3,.5) circle (0.3); [line width=0.5pt] (2.3,-.65) circle (0.4); (2.2,0) node 0; (1.3,1.1) node D_Rr_k^1(x_k^1); (2.3,-1.3) node D_Rr_k^2(x_k^2); 2; [line width=1.3pt] (8.9,0.42-) circle (1pt); [rotate around=30:(8,2.25-),line width=0.7pt] (8,2.25-) ellipse (1 and 2); [rotate around=-27:(9.5,1.9-),line width=1pt,opacity=0.3] (9.5,1.9-) ellipse (1 and 1.6); (8.9,0-) node 0; (7.5,3-) node (S^2,g_v^1); (9.9,2.5-) node (S^2,g_v^2); Collapsing Case 1 and Case 2.1: All bubbles are smooth and at 1-level Case 2.2: There exists i_0 such that 0∈ D_Rr_k^i_0(x_k^i_0) for fixed R. WLOG, we assume i_0=1. Then for any i ∈{2,⋯,s}, for any fixed R>0, we have 0 ∉ D_Rr_k^i(x_k^i) when k is large. Hence, v^i is smooth for any i ∈{2,⋯,s}. Now we consider the bubble v^1. Set y_k^1=-x_k^1/r_k^1 and assume y_k^1→ y_∞. Then -Δ u_k^1=K_k(x_k^1+r_k^1x)e^2 u_k^1-2πβ_1^kδ_y_k. By arguments similar to those for Proposition <ref>, there exists τ such that -Δ v^1=K(x_0)e^2v^1+τδ_y_∞. We further divide Case 2.2 into two cases. Case 2.2.1: {u_k^1} has no bubble, i.e. no bubbles at 2-level. Then -Δ v^1=K(x_0)e^2v^1-2πβ_1δ_y_∞. By <cit.> and <cit.>, as a metric over 𝕊^2, g_v^1 has exactly 2 singularities and ∫_ℝ^2 K(x_0) e^ 2v^1 =4 π +4 πβ_1. Then by Theorem <ref>, Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) = K(x_0)lim_r→ 0lim_k→+∞(D_r, g_k)-2πβ_1 = K(x_0)∑_i=1^s(^2,g_v^i)-2πβ_1 =(4 π +4 πβ_1) +4π(s-1)-2πβ_1=4π s +2 πβ_1. 0.7 (1.9,0) circle (1.5pt); [line width=0.7pt] (2,0) circle (2.3); [line width=0.5pt] (1.1,.6) circle (0.3); [line width=0.5pt] (2,-.2) circle (0.5); (2.2,0) node 0; (1.3,1.2) node D_Rr_k^2(x_k^2); (2.3,-1.1) node D_Rr_k^1(x_k^1); 2; [line width=1.3pt] (8.9,0.42-) circle (1pt); [rotate around=-27:(9.5,1.9-),line width=1pt,opacity=0.3] (9.5,1.9-) ellipse (1 and 1.6); [line width=0.5pt] (8.9,0.42-) arc(-20:70:3); [line width=0.5pt] (8.9,0.42-) arc(70:-20:-3); (8.9,0-) node 0; (7.8,3-) node (S^2,g_v^1); (9.8,2.5-) node (S^2,g_v^2); Collapsing Case 2.2.1: All bubbles are at 1-level; only one of them is singular Case 2.2.2: { u_k^1 } has bubbles. 
By Lemma <ref> (2), {u_k^1} can only have bubbles at 1-level (which are the bubbles of {u_k} at 2-level) and all bubbles of {u_k^1} are smooth. By Lemma <ref> (3), β_1 > 1. Similar to the arguments in Proposition <ref>, τ=4 π s'-2 πβ_1 , where s' is the number of the bubbles of { u_k^1 }. By <cit.> and <cit.> again, ∫_ℝ^2 K(x_0) e^ 2v^1 =4 π -2 τ. Let { (x_k^s+1,r_k^s+1) }, ⋯, { (x_k^s+s',r_k^s+s') } be all of the blowup sequences right on the top of { (x_k^1,r_k^1)}. Then for any i ∈{ s+1, ⋯, s+s^'}, { (x_k^i,r_k^i) } converges to a smooth bubble and { (x_k^i,r_k^i)} has no bubbles. By Theorem <ref>, we have Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) = K(x_0)lim_r→ 0lim_k→+∞(D_r,g_k)-2πβ_1 = K(x_0)∑_i=1^s+s^'(^2,g_v^i)-2πβ_1 =4 π (s-1)+(4 π -2 τ)+4π s^'-2πβ_1 =4π(s-s')+2 πβ_1. 0.7 [line width=1.3pt] (2.1,0.2) circle (1pt); [line width=0.7pt] (2,0.3) circle (2.1); [line width=0.5pt] (1.7,0.8) circle (1.1); [line width=0.5pt] (3,-0.8) circle (0.4); [line width=0.5pt] (2,0.9) circle (0.3); [line width=0.5pt] (1.6,0.2) circle (0.2); [line width=0.5pt] (1,1.2) circle (0.2); (2.35,0.2) node 0; (1,1.2) node 1; (3,-0.8) node 2; (2,0.9) node 3; (1.6,0.2) node 4; (-2,1) node 1: D_Rr_k^1(x_k^1); (-2, 0.5) node 2: D_Rr_k^2(x_k^2); (-2,0) node 3: D_Rr_k^3(x_k^3); (-2,-.5) node 4: D_Rr_k^4(x_k^4); 2; [line width=1.3pt] (7.13,4.25-) circle (1pt); [rotate around=-27:(9.5,1.9-),line width=1pt,opacity=0.4] (9.5,1.9-) ellipse (1 and 1.6); [line width=0.8pt] (8.9,0.42-) arc(-20:70:3); [line width=0.8pt] (8.9,0.42-) arc(70:-20:-3); [rotate around=19:(6.8,3.4),line width=0.6pt] (6.8,3.4) ellipse (0.7 and 1.2); [rotate around=-27:(7.5,3.1),line width=1pt,opacity=0.2] (7.5,3.1) ellipse (0.6 and 0.9); (7.2,4.5-) node 0; (7.8,3-) node (S^2,g_v^1); (9.8,2.5-) node (S^2,g_v^2); (5.2,4) node (S^2,g_v^3); (9,3.1) node (S^2,g_v^4); Collapsing Case 2.2.2: There are two levels; the only singular bubble is at 1-level; all bubbles at 2-level are right on the top of the singular bubble. 0.7 [rotate=-90] [shift=(-1,-0.3)] [rotate around=19:(6.8,3.4),line width=0.6pt] (6.8,3.4) ellipse (0.7 and 1.2); [rotate around=-27:(7.5,3.1),line width=1pt,opacity=0.2] (7.5,3.1) ellipse (0.6 and 0.9); [shift=(10.5, 1.15), rotate around=-170:(1,1),scale=0.7] [rotate around=-27:(9.5,1.9),line width=1pt,opacity=0.4] (9.5,1.9) ellipse (1 and 1.6); [line width=0.8pt] (8.9,0.42) arc(-20:70:3); [line width=0.8pt] (8.9,0.42) arc(70:-20:-3); [rotate around=19:(6.8,5.4),line width=0.6pt] (6.8,5.4) ellipse (0.7 and 1.2); [rotate around=-27:(7.5,5.1),line width=1pt,opacity=0.2] (7.5,5.1) ellipse (0.6 and 0.9); (6.26,1.93) circle (2.3pt); Global figure of Collapsing Case: There are at most two levels. The first level consists of smooth spheres and a sphere with two singular points. The South Poles of the smooth spheres and one of the singular points of the nonsmooth sphere are attached to a point (the limit of g_k). The second level consists of smooth spheres with their South Poles attached to the other singular point of a nonsmooth sphere. Theorem <ref> now follows from combining Proposition <ref>, Proposition <ref>, Proposition <ref> and Proposition <ref>. Under the assumptions of Proposition <ref>, if we further assume {u_k} has only bubbles at 1-level, we will obtain a numerical relation between the number of the bubbles and a linear combination of β's components. Furthermore, both the number of singular bubbles and that of smooth bubbles can be controlled by χ(𝕊^2,β). 
Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are solutions of (<ref>) satisfying (A1), (A2), (A3). We assume that u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞ and {u_k} only have bubbles at 1-level. If {u_k} has singular bubbles at p_1, ⋯, p_j_0 and {u_k} has t smooth bubbles, then {u_k} has s=t+j_0= 1/2χ(𝕊^2, β)-∑_i=1^j_0β_i bubbles. Moreover, max_1≤ i≤ j_0β_i ≤1/2χ(𝕊^2,β)-1, t ≤1/2χ(𝕊^2,β). By our assumptions, Case 2.2.2 in the proof of Proposition <ref> will not occur. Then we may assume that { u_k } has t_i smooth bubbles and one singular bubble at p_i for i=1,⋯,j_0, t_i smooth bubbles at p_i for i=j_0+1,⋯,j_1 and t_i^' smooth bubbles at q_i for i=1, ⋯, j_2 where q_i∉{p_1, ⋯,p_m}. Then {u_k} has s=∑_i=1^ j_0(t_i+1 )+∑_i=j_0+1^j_1 t_i+∑_i=1^j_2 t_i^'=j_0+t bubbles. By Proposition <ref> (2) and Proposition <ref>, 4 π =∑_i=1^j_0( 4π +2 πβ_i+4 π t_i)+∑_i=j_0+1^j_1(4 π t_i -2 πβ_i)+∑_i=j_1+1^m( -2πβ_i )+∑_i=1^j_2 4 π t_i^' =∑_i=1^j_0 (4 π +4 πβ_i)+4 π t -2 π∑_i=1^mβ_i, which yields that 2 πχ(𝕊^2,β)=4 π + 2 π∑_i=1^mβ_i =4 π(j_0+t)+4 π∑_i=1^j_0β_i. Since β_i>-1, 4 π + 2 π∑_i=1^mβ_i = 4 π∑_i=1^j_0 (1+β_i )+4 π t ≥max{ 4 π + 4 πβ_1, ⋯, 4 π + 4 πβ_j_0, 4 π t }, which yields that max_1≤ i≤ j_0β_i ≤1/2∑_i=1^mβ_i=1/2χ(𝕊^2,β)-1, t ≤ 1+1/2∑_i=1^mβ_i=1/2χ(𝕊^2,β). With additional assumptions on β, the assumptions of Proposition <ref> are satisfied and the following corollaries are obtained. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are the solutions of (<ref>) satisfying (A1), (A2), (A3). Suppose that χ(𝕊^2,β)=2+∑_i=1^mβ_i∈ (0,2) and {u_k} has at least one bubble. Then (1) u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞. (2) There exist s<m and 1 ≤ i_1<i_2<⋯<i_s≤ m with β_i_j<0 such that {u_k} has exactly one singular bubble at p_i_j. Moreover, 4 π s+4 π∑_j=1^sβ_i_j=2 π∑_i=1^mβ_i+4 π . Since χ(𝕊^2,β) ∈ (0,2), then lim_k →∞∫_𝕊^2 K_k dV_g_k=lim_ k →∞ (4 π +2 π∑_i=1^mβ_i^k)=4 π +2 π∑_i=1^mβ_i<4 π, which implies that {u_k} cannot have smooth bubbles. Then by Proposition <ref>, u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞, and by Proposition <ref>, Case 1, Case 2.1 and Case 2.2.2 in the proof of Proposition <ref> cannot happen. Therefore, only Case 2.2.1 can happen with s=1 and β_i<0, which means that {u_k} has exactly one singular bubble at 1-level at some p_i with β_i<0. By Proposition <ref> (2), there exist s ≤ m and 1 ≤ i_1<i_2<⋯<i_s≤ m with β_i_j<0 such that -Δ_𝕊^2 v=∑_j=1^s ( 4 π+4 πβ_i_j ) δ_p_i_j-2 π∑_i=1^mβ_iδ_p_i-1, which yields 4 π s+4 π∑_j=1^sβ_i_j=2 π∑_i=1^mβ_i+4 π . What left is to show s<m. If s=m, 4 m π +4 π∑_i=1^mβ_i=2 π∑_i=1^mβ_i+4 π, in turn 2-2m=∑_i=1^mβ_i∈ (-2,0). This is impossible as m is an integer. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are the solution of (<ref>) satisfying (A1), (A2), (A3). Assume β_i≤ 1 for any i ∈{1,⋯,m} and {u_k} has at least one bubble. Then (1) u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞. (2) All bubbles of { u_k} are at 1-level. Further, there exists a set I ⊂{1,⋯,m} such that 4 π s+4 π∑_i ∈ Iβ_i =2 π∑_i=1^mβ_i+4 π, where s is the number of bubbles of {u_k}. Since {u_k} has at least one bubble and β_i≤ 1, then by Proposition <ref> u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞. By Proposition <ref>, Case 2.2.2 (in the proof) cannot happen, hence the assertion follows from Proposition <ref> immediately. 
hence the bubbles are divided into the following three types (after rearranging the index): Type 1 (corresponding to Proposition <ref> Case 1): {u_k} has a_i smooth bubbles at 1-level at q_i, 1 ≤ i ≤ t, where q_i∉{p_1, ⋯, p_m}; Type 2 (corresponding to Proposition <ref> Case 2.1)): {u_k} has b_i smooth bubbles at 1-level at p_i for i=1, ⋯,m_1; Type 3 (corresponding to Proposition <ref> Case 2.2.1)): {u_k} has b_i bubbles, for which one of them is singular and the others are smooth, at 1-level at p_i for i=m_1+1, ⋯,m_2. Then by Proposition <ref>, Proposition <ref> and Proposition <ref>, 4 π =∑_x ∈𝕊^2Θ(x) =∑_i=1^t 4 π a_i+∑_i=1^m_1 (4 π b_i-2 πβ_i)+∑_i=m_1+1^m_2 ( 4 π b_i +2 πβ_i )+∑_i=m_2+1^m -2 πβ_i =∑_i=1^t 4 π a_i+∑_i=1^m_1 4 π b_i+∑_i=m_1+1^m_2 (4 π b_i +4 πβ_i )-∑_i=1^m 2 πβ_i. Since s=∑_i=1^t a_i+∑_i=1^m_2 b_i, then we obtain 4 π +∑_i=1^m 2 πβ_i=2 πχ(𝕊^2,β)=4 π s+4 π∑_i=m_1+1^m_2β_i. §.§ Simplification of bubble-tree convergence when K_k→ K in C^1 In this subsection, we will use the Pohozaev inequality to show that if { u_k} has at least one bubble and K_k converges in C^1 then c_k→-∞ and u_k has only bubbles at 1-level. [Pohozaev identity on an annulus] Assume K ∈ C^1(D\D_δ) and -Δ u = Ke^2u on D\D_δ. Define a function P(t) = t∫_∂ D_t(|∂ u/∂ r|^2-1/r^2|∂ u/∂θ|^2)+2 ∫_∂ D_t∂ u/∂ r+ t ∫_∂ D_t Ke^ 2u . Then for any δ<s<t<1, we have P(t) - P(s) = ∫_D_t\ D_s re^2u∂ K/∂ r. Under the assumptions of Lemma <ref>, if we further assume K_k converges to a positive function K in C^1(D), then { u_k} has no bubbles. We argue by contradiction. Assume {u_k} has at least one bubble. Then by Lemma <ref>, there exist s≥ 1 smooth bubbles at 1-level at y. By Theorem <ref>, lim_r → 0lim_k → +∞∫_ D_r(y) K_k e^2u_k=4 π s, which yields -Δ u= K e^2u-2π (λ-2s) δ_y. For convenience, we set λ^'=λ-2s. Define P_k(t) =t∫_∂ D_t(y_k)(|∂ u_k/∂ r|^2-1/r^2|∂ u_k/∂θ|^2)+2 ∫_∂ D_t(y_k) ∂ u_k/∂ r+ t ∫_∂ D_t(y_k) K_ke^ 2u_k , P(t) =t∫_∂ D_t(y)(|∂ u/∂ r|^2-1/r^2|∂ u/∂θ|^2)+2 ∫_∂ D_t(y) ∂ u/∂ r+ t ∫_∂ D_t(y) K e^ 2u . By Proposition <ref>, for any 0<s<t<1/2 and sufficiently large k, |P_k(t)-P_k(s) | =|∫_D_t(y_k) ∖ D_s(y_k) re^2u∂ K_k/∂ r| ≤ t K_k _C^1( D )(D,g_k), which yields lim_t→ 0lim_k→ +∞lim_s→ 0|P_k(t)-P_k(s)|=0. Let's calculate lim_k→ +∞lim_s→ 0P_k(s) and lim_t→ 0lim_k→ +∞P_k(t) step by step. Since -Δ u_k =K_ke^2u_k-2 πλ_k δ_y_k, we may write u_k=v_k+λ_klog r, where -Δ v_k=K_k e^2u_k. Since λ_k>-1, for large j we have K_ke^2u_k∈ L^p(D_ 2^-j(y_k)) for some p>1. Then v_k∈ W^2,p(D_ 2^-j(y_k)) ⊂ W^1,2(D_ 2^-j(y_k)) ∩ C^0(D_ 2^-j(y_k)). It follows ∫_ D_2^-j (y_k) ∖ D_2^-j-1(y_k) |∇ v_k|^2→ 0, as j→ +∞. Hence, there exists s_j∈ ( 2^-j-1,2^-j) such that s_j∫_∂ D_s_j(y_k)|∇ v_k|^2→ 0, as j → +∞. By direct calculations, lim_j →∞ s_j∫_∂ D_s_j|∂ u_k/∂ r|^2=lim_j →∞ s_j∫_∂ D_s_j|∂ v_k/∂ r+λ_k/r|^2=2 πλ_k^2, lim_j →∞ s_j∫_∂ D_s_j1/r^2|∂ u_k/∂θ|^2≤lim_j →∞ s_j∫_∂ D_s_j|∇ v_k|^2=0, lim_j →∞∫_∂ D_s_j(y_k)∂ u_k/∂ r=lim_j →∞( 2 πλ_k+∫_∂ D_s_j(y_k)∂ v_k/∂ r) =2 πλ_k-lim_j →∞∫_D_s_j (y_k) K_ke^2u_k =2 πλ_k, lim_j →∞ s_j∫_∂ D_s_j(y_k) K_k e^2u_k= lim_j →∞ s_j^2∫_0^2 π K_k(s_j,θ) e^2u_k(s_j,θ) ≤ Clim_j →∞ s_j^2+2λ_k=0. Thus, we obtain lim_k→∞lim_j →∞P_k(s_j)=2πλ^2+4πλ. Since u_k converges to u in C^2,α_(D\{y}), then by similar calculations, lim_t→ 0lim_k→∞P_k(t)=lim_t→ 0P(t) =2πλ^'^2+4πλ^'. Then (λ-λ^')(λ+λ^'+2)=0. Since λ^'=λ-2s<λ, we obtain λ^'=-λ -2<-1 which leads to a contradiction to Lemma <ref>. Let g_k=e^2u_k g_𝕊^2∈ℳ(𝕊^2) where u_k are the solutions of (<ref>). Assume (A2), (A3) hold and K_k→ K in C^1(𝕊^2) with K>0. 
If {u_k} has at least one bubble, then (a) u_k -c_k→ v weakly in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2) with c_k→ -∞. (b) All bubbles of {u_k} are at 1-level. More precisely, if {u_k} has at least one bubble at some x_0∈𝕊^2, then one of the following holds: * x_0= some p_i, {u_k} has s=β_i+1 smooth bubbles at 1-level at x_0, and Θ(x_0)=4 π +2 πβ_i. * x_0= some p_i, {u_k} has one singular bubble at x_0 at 1-level, and Θ(x_0)=4π +2πβ_i. * x_0∉{p_1, ⋯,p_m}, {u_k} has one smooth bubble at 1-level at x_0, and Θ(x_0)=4π. (c) Furthermore, there exists a set I ⊂{1,⋯,m} such that # { bubbles of {u_k}} = 1/2χ(𝕊^2,β)-∑_i ∈ Iβ_i,≤ 1+1/2∑_i=1^m |β_i|. (a) follows from Lemma <ref> immediately. Step 1: We prove (b) by investigating the structure of bubble trees. For any fixed point x_0∈𝕊^2, if {u_k} has at least one bubble at x_0, we choose an appropriate isothermal coordinate system with x_0=0. Via similar arguments as in the proof of Proposition <ref>, we may assume {u_k} has s blowup sequences {( x_k^1, r_k^1 )}, ⋯, { (x_k^s , r_k^s ) } at x_0 at 1-level, and let v^1, ⋯, v^s the corresponding bubbles. Set u_k^i(x)=u_k( x_k^i +r_k^ix )+log r_k^i. Then for any fixed R>0, when k is sufficiently large, D_R r_k^i(x_k^i ) ∩ D_R r_k^j(x_k^j ) =∅, i ≠ j. By Lemma <ref>, { u_k^i(x)} has no bubbles. Case 1: x_0∈{p_1, ⋯, p_m}. WLOG, we may assume x_0=p_1, then u_k solves -Δ u_k=K_k e^2u_k-2 πβ_1^kδ_0. Case 1.1: For any j ∈{1,⋯,s} and any fixed R>0, 0 ∉ D_Rr_k^i(x_k^i) for large k. Then each v^i is smooth and no bubbles are at 2-level. By Theorem <ref>, we have Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) =K(x_0)lim_r→ 0lim_k→+∞(D_r)-2πβ_1 =K(x_0)∑_i=1^s(^2,g_v^i)-2πβ_1 =4π s-2πβ_1. Case 1.2: There exists i_0 such that 0∈ D_Rr_k^i_0(x_k^i_0) for fixed R. WOLG, we assume i_0=1, then for any i ∈{2,⋯,s}, for any fixed R>0, 0 ∉ D_Rr_k^i(x_k^i) when k is sufficiently large. Hence for any i ∈{2,⋯,s}, v^i is smooth. We set y_k^1=-x_k^1/r_k^1 and assume y_k^1→ y_∞. Then u_k^1 satisfies the equation -Δ u_k^1=K_k(x_k^1+r_k^1x)e^2 u_k^1-2πβ_1^kδ_y_k. Since { u_k^1} has no bubbles, then -Δ v^1=K(x_0) e^ 2v^1-2 πβ_1δ_Y, ∫_ℝ^2 K(x_0) e^2v^1=4 π +4 πβ_1. Then by Theorem <ref>, Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) =K(x_0)lim_r→ 0lim_k→+∞(D_r, g_k)-2πβ_1 =K(x_0)∑_i=1^s(^2,g_v^i)-2πβ_1 =(4 π +4 πβ_1) +4π(s-1)-2πβ_1=4π s +2 πβ_1. Case 2: x_0∉{p_1,⋯,p_m}. For this case, u_k solves -Δ u_k=K_k e^2u_k. Then for any i ∈{1,⋯,s} and any fixed R>0, 0 ∉ D_Rr_k^i(x_k^i) when k is large. So each v^i is smooth and no bubbles are at 2-level. By Theorem <ref>, we have Θ(x_0) =lim_r→ 0lim_k→+∞_g_k(D_r) = K(x_0)lim_r→ 0lim_k→+∞Area(D_r) =K(x_0)∑_i=1^s(^2,g_v^i) =4π s. Therefore, v solves the following equation locally: -Δ v=Θ(x_0) δ_0, where Θ(x_0) =4 π s-2 πβ_1 (when Case 1.1 holds) or 4 π s+2 πβ_1 (when Case 1.2 holds) or 4 π s (when Case 2 holds). Now we calculate Θ(x_0) more precisely via Proposition <ref>. Define P_k(t)=t∫_∂ D_t(|∂ u_k/∂ r|^2-1/r^2|∂ u_k/∂θ|^2)+2 ∫_∂ D_t∂ u_k/∂ r+ t ∫_∂ D_t Ke^ 2u_k . By Proposition <ref>, we have lim_t→ 0lim_k→ +∞lim_s→ 0|P_k(t)-P_k(s)|≤lim_t→ 0lim_k→ +∞lim_s→ 0Ct(𝕊^2, g_k)=0. Since -Δ u_k= {[ K_ke^2u_k-2 πβ_1^kδ_0 when x_0=p_1,; K_ke^2u_k when x_0∉{p_1, ⋯, p_m} , ]. then as in the proof of Lemma <ref>, there exists s_j→ 0 such that lim_k→+∞lim_j →+∞P_k(s_j)= {[ 2π(β_1)^2+4πβ_1 when x_0=p_1,; 0 when x_0∉{p_1, ⋯, p_m} . ]. 
On the other hand, since c_k → -∞ and v=-Θ(x_0) /2 πlog r+V, where V is harmonic on D, then lim_t → 0lim_k→+∞∫_∂ D_t∂ u_k/∂ r=lim_t → 0∫_∂ D_t∂ v/∂ r=-Θ(x_0) lim_t → 0lim_k→+∞ t ∫_∂ D_t K_k e^ 2u_k ≤lim_t → 0lim_k→+∞ Ct ∫_∂ D_t e^2v e^2c_k=0, lim_t → 0lim_k → +∞ t ∫_∂ D_t|∂ u_k/∂ r|^2=lim_t → 0 t ∫_∂ D_t|∂ v/∂ r|^2=Θ(x_0)^2/2 π, lim_t → 0lim_k → +∞ t ∫_∂ D_t1/r^2|∂ u_k/∂θ|^2 =lim_t → 0 t ∫_∂ D_t1/r^2|∂ v/∂θ|^2=0, which yields lim_t → 0lim_k → +∞ P_k(t)=Θ(x_0)^2/2 π-2 Θ(x_0)=2 π (λ^')^2-4 πλ^', where we set λ^'=Θ(x_0)/2π. Therefore, when x_0=p_1, we obtain ( λ^' )^2 -2λ^'= (β_1)^2+2β_1. which is equivalent to ( λ^' +β_1 )( λ^' -β_1-2 )=0. If λ^'=2s-β_1 (when Case 1.1 holds), then 2s( 2s- 2 β_1-2 )=0, which yields that s=β_1+1, and Θ(x_0)=2πβ_1+4 π. If λ^'=2s+β_1 (when Case 1.2 holds), then (2s+ 2 β_1 )(2s-2)=0, which yields that s=1, and Θ(x_0)=2πβ_1+4 π. When x_0∉{p_1,⋯,p_m} (when Case 2 holds), λ^'=2s, then we obtain ( λ^' )^2 -2λ^'=0, which yields that s=1, and Θ(x_0)=4 π. Step 2: We prove (c). By (b), we know that {u_k} only has bubbles at 1-level, then by (a) and Proposition <ref>, there exists a set I ⊂{1,⋯,m} such that s = 1/2χ(𝕊^2,β)-∑_i ∈ Iβ_i, which yields an upper bound of s immediately: s=1+1/ 2∑_ i ∈{1,⋯,m}∖ I β_i-1/2∑_i ∈ Iβ_i≤ 1+1/2∑_i=1^m |β_i|. We prove (c). By (b), the bubbles of {u_k} can be divided into the following three types (after rearranging the index): Type 1 (corresponding to (b) Case 1.1): {u_k} has β_i+1 smooth bubbles at 1-level at p_i for i=1,⋯,m_1; Type 2 (corresponding to (b) Case 1.2): {u_k} has one singular bubble at 1-level at p_i for i=m_1+1, ⋯,m_2; Type 3 (corresponding to (b) Case 2): {u_k} has one smooth bubble at 1-level at q_i, 1 ≤ i ≤ t, where q_i∉{p_1, ⋯,p_m}. Then by Proposition <ref> and (b), 4 π =∑_x ∈𝕊^2Θ(x) =∑_i=1^m_1 (4 π +2 πβ_i )+∑_i=m_1+1^m_2 (4 π +2 πβ_i )+∑_i=m_2+1^m ( -2 πβ_i )+∑_i=1^t 4 π =∑_i=1^m_2 ( 4 π +4 πβ_i )-2 π∑_i=1^mβ_i+4 π t, and this leads to 4 π +∑_i=1^m 2 πβ_i=2 πχ(𝕊^2,β)=4 π (m_2+t )+4 π∑_i=1^m_2β_i. Since #{bubbles}=∑_i=1^m_1( β_i+1 )+m_2-m_1+t=m_2+t+∑_i=1^m_1β_i, it follows that 4 π#{bubbles}+4 π∑_i=m_1+1^m_2β_i= 4 π +∑_i=1^m 2 πβ_i. Therefore, we obtain the following upper bound of the total number of the bubbles: #{bubbles}≤ 1+1/2∑_i=1^m |β_i|. §.§ Nonexistence of solutions around certain prescribed date Define 𝒜_m={ (a_1,⋯,a_m):a_i>-1, 1/2∑_i=1^m a_i≠ (s-1)+∑_j=1^sa_i_j, for any 1 ≤ i_1<i_2<⋯<i_s≤ m with 0<s<m and a_i_j<0 }, ℬ_m={ (a_1,⋯,a_m):a_i>-1, 1/2∑_i=1^m a_i≠ t+∑_j=1^sa_i_j, for any 1 ≤ i_1<i_2<⋯<i_s≤ m with 0≤ s ≤ m, and any t ≥ 0 }. We shall point that when s=0, the term ∑_j=1^sa_i_j in ℬ_m vanishes. Let K lie in the set C^+(𝕊^2) of positive continuous functions on 𝕊^2. Assume one of the following assumptions holds: (A) β =(β_1,⋯,β_m) ∈𝒜_m, 0<χ(𝕊^2,β)<2; (B) β =(β_1,⋯,β_m) ∈ℬ_m, β_i≤ 1 for any i ∈{1,⋯,m}. Then if the equation -Δ_𝕊^2 u=Ke^2u-2π∑_i=1^mβ_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2,g_𝕊^2), then there exists a neighbourhood 𝒰 of (K,β) in C^+(𝕊^2) × (-1,∞)^× m such that for any ( K̃,β̃ ) ∈𝒰, -Δ_𝕊^2 u=K̃ e^2u-2π∑_i=1^mβ̃_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2, g_𝕊^2). We give the proof of Theorem <ref>. We first show the assertion holds under the assumption (𝒪_1). Assuming the contrary, there exist K_k∈ C^+(𝕊^2), β_k=( β_1^k, ⋯ , β_m^k ) ∈ (-1,∞)^m such that K_k→ K in C(𝕊^2 ), β_k→β, and u_k solves -Δ_𝕊^2 u_k=K_ke^2u-2π∑_i=1^mβ_i^kδ_p_i-1. By (𝒪_1), ∑_i=1^mβ_i∈ (-2,0). 
If {u_k} has at least one bubble, then by Corollary <ref>, there exist s<m and 1 ≤ i_1<i_2<⋯<i_s≤ m such that for any 1 ≤ j ≤ s, β_i_j<0 and 4 π s+ 4 π∑_j=1^sβ_i_j= 2π∑_i=1^mβ_i+4 π . which contradicts to the assumption: β∈𝒜_m. In conclusion, {u_k} has no bubble. Applying Proposition <ref>, u solves -Δ_𝕊^2 u=Ke^2u-2π∑_i=1^mβ_iδ_p_i-1, a contradiction to our assumptions. By applying similar arguments and Corollary <ref>, the assertion still holds under assumption (𝒪_2). Applying similar arguments as in the proof of Theorem <ref> and Proposition <ref>, we can prove Theorem <ref>. Assume β =(β_1,⋯,β_m) ∈ℬ_m and K is in the set C^1,+(𝕊^2) of positive C^1 functions on 𝕊^2. If the equation -Δ_𝕊^2 u=Ke^2u-2π∑_i=1^mβ_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2,g_𝕊^2), then there exists a neighbourhood 𝒰 of (K,β) in C^1,+(𝕊^2) × (-1,∞)^× m such that for any ( K̃,β̃ ) ∈𝒰, -Δ_𝕊^2 u=K̃ e^2u-2π∑_i=1^mβ̃_iδ_p_i-1 has no solutions in ∩_p∈ [1,2) W^1,p(𝕊^2, g_𝕊^2). Let g_k=e^2u_kg_𝕊^2∈ℳ(𝕊^2), u_k be the solution of (<ref>). After passing to a subsequence, one of the following holds: (a) u_k converges weakly to some u in ∩_p∈(1,2)W^1,p(𝕊^2,g_𝕊^2), and there exist nonnegative integers n_i, i=1,⋯,m such that -Δ_𝕊^2 u= K e^2u +4 π∑_i=1^m n_iδ_p_i-2π∑_i=1^mβ_i δ_p_i-1. (b) u_k-c_k converges weakly to some v in ∩_p∈ [1,2)W^1,p(𝕊^2,g_𝕊^2), c_k→ -∞, and there exist integers m',m”,m”' such that -Δ_𝕊^2 v = 4 π∑_j=1^m' ( β_i_j+n_j' ) δ_p_i_j+ 4 π∑_j=1^m” n_j”δ_ p_i_j+m' +4 π∑_j=1^m”' n_j”' δ_q_j -2 π∑_j=1^mβ_jδ_p_j-1, where, n_j' >1-β_i_j/2 are integers, n_j”, n_j”' are nonnegative integers, q_j∉{p_1, ⋯,p_m} is regular point. The result below follows from Theorem <ref> and <cit.>. Assume β=( β_1, ⋯, β_m ) ∈𝒜_m satisfying d_1(β,ℤ_o^m)=1, β_i -1 ∉ℤ. Then for any distinct p_1, ⋯, p_m, there is a neighbourhood U of β in (-1,∞)^m, such that for any β̃∈ U, the divisor ∑_i=1^mβ̃_i p_i cannot be represented by any metric in ℳ(𝕊^2) with constant curvature 1. Example. Let β=( β_1,β_2,β_3,β_4 )=(-3-2 α/10, 2k+α-1/10,α-1/10,2k+1/10), where k ∈ℕ∪{0} and α∈ (0,1/100). Then d_1(β,ℤ_0^4)=d_1(β,(-1,2k,0,2k))=1, 1/2χ(𝕊^2,β)=1+2k+α-1/5∈ (0,+∞) ∖ℕ, [ 1/2χ(𝕊^2,β)]=2k, 𝔽(β)={ i: β_i≤ 2k+α-1/5}. Case 1: when k=0, 𝔽(β)={1}, and 1/2χ(𝕊^2,β)-β_1=11/10≠ 1. Case 2: when k ≥ 1, 𝔽(β)={1,3}, and 1/2χ(𝕊^2,β)-β_1=2k+11/10≠ 1,⋯,1+2k, 1/2χ(𝕊^2,β)-β_3=1+2k+α-1/10≠ 1, ⋯,1+2k, 1/2χ(𝕊^2,β)-β_1-β_3=1+2k-α/10≠ 2,⋯,2+2k. In conculsion, β∈𝒜_4∩{d_1(β,ℤ_o^4)=1}. In other words, 𝒜_4∩{d_1(β,ℤ_o^4)=1}≠∅. All conditions in Corollary <ref> are fulfilled, therefore there is a neighbourhood U of β such that no divisor in U can be represented by a spherical metric. Example. Let β=(2k-1-α/5,2k+1+α/5,-1+1/5,2/5), where k∈ℕ and α∈ℝ\ℚ with 0<α<1/100. Then d_1(β,ℤ_o)=d_1(β,(2k,2k,-1,0)=1, ∑_i=1^4β_i =4k+2α-2/5, 1/2χ(𝕊^2,β)=1+2k+α-1/5∈ (0,∞) ∖ℕ. Since β_1=1/2∑_i=1^4β_i, {1}⊂𝔽(β), but 1/2χ(𝕊^2,β)-β_1=1, does not fullfill the definition of 𝒜_m???? Hence β∈𝒜_4∩⋯ with χ(𝕊^2,β)=2+4k+2α-2/5>4k+1. What are you trying to say here? In B_m and no solution? plain
http://arxiv.org/abs/2408.11241v1
20240820233926
CooPre: Cooperative Pretraining for V2X Cooperative Perception
[ "Seth Z. Zhao", "Hao Xiang", "Chenfeng Xu", "Xin Xia", "Bolei Zhou", "Jiaqi Ma" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Existing Vehicle-to-Everything (V2X) cooperative perception methods rely on accurate multi-agent 3D annotations. Nevertheless, it is time-consuming and expensive to collect and annotate real-world data, especially for V2X systems. In this paper, we present a self-supervised learning method for V2X cooperative perception, which utilizes the vast amount of unlabeled 3D V2X data to enhance the perception performance. Beyond simply extending the previous pre-training methods for point-cloud representation learning, we introduce a novel self-supervised Cooperative Pretraining framework (termed as CooPre) customized for a collaborative scenario. We point out that cooperative point-cloud sensing compensates for information loss among agents. This motivates us to design a novel proxy task for the 3D encoder to reconstruct LiDAR point clouds across different agents. Besides, we develop a V2X bird-eye-view (BEV) guided masking strategy which effectively allows the model to pay attention to 3D features across heterogeneous V2X agents (i.e., vehicles and infrastructure) in the BEV space. Noticeably, such a masking strategy effectively pretrains the 3D encoder and is compatible with mainstream cooperative perception backbones. Our approach, validated through extensive experiments on representative datasets (i.e., V2X-Real, V2V4Real, and OPV2V), leads to a performance boost across all V2X settings. Additionally, we demonstrate the framework's improvements in cross-domain transferability, data efficiency, and robustness under challenging scenarios. The code will be made publicly available. § INTRODUCTION Achieving autonomy in complex and open traffic environments poses significant challenges for single-vehicle vision systems. These systems often suffer from occlusions and a limited perception range due to each vehicle’s singular viewpoint, limiting the capacity of current deep learning approaches to develop a holistic 3D representation in the interacting environment. Vehicle-to-Everything (V2X) Cooperative Perception <cit.> emerges as a promising solution by providing each ego agent with a comprehensive understanding of the surrounding environment. Through collaboration among connected agents (vehicles or infrastructure), V2X facilitates the sharing of critical sensing information, thereby extending the perception range and mitigating occlusions. However, this paradigm introduces additional geometrical and topological information that the model must handle, necessitating the exploration of a robust representation that accounts for these elements.
Obtaining such a representation often requires a large amount of annotated data, but the scale of current real-world V2X datasets<cit.> is still limited compared to single-vehicle counterparts<cit.>. Therefore, investigating the representation learning problem within the constraints of current V2X dataset scales is crucial. In multi-agent perception systems, each ego agent must learn a representation that manages complex agent interaction information to achieve effective cooperation. Specifically, in V2X scenarios, the ego agent must handle different sensor configurations among various agents, which operate at different ranges and placement positions. Capturing this underlying distribution is challenging solely through the supervision of hand-labeled 3D bounding boxes. This complexity poses a challenge for the "train-from-scratch" paradigm, as its learned representation heavily depends on the random initialization of model parameters and the quality and quantity of annotated 3D bounding boxes, thereby affecting overall performance. For example, as discussed in<cit.>, such approach could be easily perturbed by synchronization or localization errors in real-world scenarios. DiscoNet<cit.> offers an alternative paradigm by employing a teacher model to guide the student model during training. However, a significant amount of annotations is still required to train an effective teacher model that can facilitate the learning of the student model. In this paper, based on the aforementioned observations, we propose a simple yet effective self-supervised multi-agent pretraining framework named CooPre, showing that Cooperative Pretraining enables the model to learn meaningful prior representation of the holistic 3D environment before the perception task, as illustrated in Fig.<ref>. Our framework leverages the benefits of unlabeled LiDAR point cloud data transmitted from different agents, allowing the model to reconstruct the point cloud location and learn essential prior knowledge of scenarios (e.g., intersections or corridors) and LiDAR sensor distributions (e.g., range, placement position, and sparsity) of each agent from a bird-eye-view (BEV) perspective. This design is compatible with current cooperative perception backbones. It makes the reconstruction task more challenging due to the enlarged perception field, which helps mitigate issues related to sparse feature points in far-range and occlusion scenarios. Compared to the other two training paradigms in Fig. <ref>, our pretraining framework is annotation-free, which is the main contribution to representation learning in V2X cooperative perception. Following the introduction of this straightforward approach, the remainder of the contribution of this paper focuses on explaining how, where, and why this approach works in V2X cooperative perception scenarios. Our analysis demonstrates: 1) why pretraining by showcasing our framework's strong generalizability to unseen distributions, which is beneficial for domain adaptation, especially when the target dataset is limited in scale; 2) why reconstruction pretraining by highlighting our framework's attention to the geometrical and topological features of rigid-body objects (e.g., cars and trucks); 3) why multi-agent pretraining by illustrating improvements in far-range perception and occlusion handling compared to train-from-scratch and single-agent pretraining frameworks<cit.>. 
Through extensive ablation studies and experiments on V2X-Real<cit.>, V2V4Real<cit.>, and OPV2V<cit.> datasets, we demonstrate the effectiveness of CooPre and shed light on how it works. § RELATED WORK §.§ Cooperative Perception Single-vehicle systems struggle with occlusions and long-distance perception in complex traffic environments due to the limitation of the perception range of LiDAR devices. Cooperative systems, on the other hand, enhance detection performance by sharing raw data (Early Fusion), detection outputs (Late Fusion), or intermediate bird-eye-view (BEV) representations (Intermediate Fusion) among connected agents <cit.>. With the recent advancement of a variety of simulation and real-world dataset curations<cit.>, many literature<cit.> have been discussing the algorithmic designs of collaborative modes from vehicle-to-vehicle (V2V) collaboration to vehicle-to-everything (V2X) collaboration in different traffic scenarios. Noticeably, the intermediate fusion strategy has been the primary direction since it achieves the best trade-off between accuracy and bandwidth requirements. After applying a 3D encoder<cit.> to the input LiDAR feature, the intermediate fusion strategy involves projecting 3D features to BEV features where agents will perform interactions before final detection results. For example, AttFuse<cit.> utilizes a simple agent-wise single-head attention to fuse all features, whereas V2X-ViT<cit.> presents a unified vision transformer for robust multi-agent multi-scale perception. Despite their targeted designs, these methods all follow "train-from-scratch" paradigm and thus exhibit unstable performance when faced with V2X collaboration challenges, particularly due to sensor data heterogeneity issues <cit.>. In this paper, we propose a generalizable pretraining framework to enhance the 3D encoders of these backbones. Our framework provides a more robust and versatile representation in the BEV space, directly facilitating the 3D detection task in cooperative perception scenarios from a BEV perspective. §.§ Lidar-based Self-supervised Learning Representation learning in autonomous driving scenarios <cit.> has been prevailingly investigated in single-vehicle systems. Stemming from the recent advancement of image reconstruction pretraining methods<cit.>, point cloud pretraining reconstruction methods<cit.> are also proven effective in improving backbone model's robustness and generalizability. Recently, this approach has been applied to 3D representation learning of outdoor point clouds. Occupancy-MAE<cit.> applies a voxel-wise masking strategy to reconstruct masked voxels and predict occupancy. GD-MAE<cit.> proposes a multi-level transformer architecture and a multi-scale masking strategy with a lightweight generative decoder to recover masked patches. GeoMAE<cit.> formulates the pretraining target to be geometric feature predictions, such as pyramid centroid, occupancy, surface normal, and curvature of point clouds. BEV-MAE<cit.> uses a BEV-guided masking strategy to learn BEV feature representations. Notably, while such pretraining methods have shown great potential in developing general feature representations, their improvement is limited due to the restricted perception field caused by perception range and occlusion in single-vehicle systems. 
Additionally, there is limited discussion on pretraining in multi-agent systems, with the most similar work being CORE<cit.>, which proposes a BEV map reconstruction as an auxiliary task alongside the original perception target. Our work proposes a multi-agent point cloud reconstruction pretraining framework that synergizes effective cooperative pretraining among vehicle and infrastructure agents in V2X cooperative scenarios. § METHOD §.§ Overview The CooPre pretraining pipeline, as shown in Fig.<ref>, involves an early fusion stage where the cooperative agents will transmit their corresponding LiDAR point clouds and metadata to the ego agent. Then we project the vast amount of unlabeled LiDAR point cloud data into a bird-eye-view (BEV) plane, where we perform random masking on the point clouds in predefined BEV grids at a fixed ratio. After that, the unmasked point clouds will be passed to the 3D encoder to obtain a BEV feature, which will be passed to a light decoder to reconstruct masked point clouds from both ego and cooperative agents. At the finetuning stage, the 3D encoder would be taken to perform the downstream cooperative perception task along with the designated fusion network<cit.>. §.§ V2X BEV-guided Masking Strategy Intuition. In a BEV perspective, directly applying a random masking strategy will often lead to the fact of masking out most of the ego agent's near-range point cloud without much attention to the far-range point cloud. This makes the pretraining task hard to learn the representation in far ranges. A BEV-guided masking strategy will mitigate this problem by masking the predefined BEV grid that evenly divides the BEV space. However, from a single-agent perspective, such a masking strategy often results in a large number of BEV grids with a sparse point cloud inside each grid, making the self-supervision hard and inaccurate. Multi-agent collaboration becomes powerful as it provides more point cloud occupation from other collaborative agents in that BEV grid with sparse point clouds. Therefore, we extend the BEV-guided masking strategy<cit.> to a V2X collaborative scenario. V2X LiDAR Point Clouds and Metadata Sharing. In the V2X cooperative scenario, suppose we have ego agent A_ego and a set of N cooperative agents A_i for i ∈{1 … N} within the communication range, where each agent could be either Connected Autonomous Vehicle (CAV) or infrastructure. During the pretraining stage, each cooperative agent A_i shares LiDAR point clouds and metadata information, such as poses, extrinsics, and agent type, to A_ego. We assume the transmission of LiDAR point clouds and metadata is well-synchronized. Consequently, after projecting the point clouds of each cooperative agent to the ego agent's coordinate, the perception point cloud field of the ego agent A_ego includes its own LiDAR point clouds P_ego as well as the LiDAR point clouds P_i from each cooperative agent A_i. We refer to the collection of projected LiDAR point clouds from cooperative agents as P_coop = ⋃_i P_i. Masking Strategy. For each ego agent A_ego, given the perception range and the resolution, we construct a BEV space of X × Y × C with a set of BEV grids g_i, j with a pre-defined voxel size. After that, for each LiDAR point p_k ∈ P_ego⋃ P_coop, we project it onto a corresponding BEV grid g_i,j based on its x, y coordinates. With the help of projected point clouds from cooperative agents, the number of empty BEV grids largely decreases. 
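As an illustration of the grid bookkeeping just described, the following minimal numpy sketch is ours and not the authors' implementation; the perception range, cell size and function names are placeholders, and only the masking ratio of 0.7 is taken from the paper. It assigns the fused ego and cooperative points to BEV grid cells, masks a random fraction of the non-empty cells, and evaluates a symmetric Chamfer-style reconstruction error of the kind used as the objective in the following subsection.

import numpy as np

def bev_mask(points_ego, points_coop, x_range=(-50.0, 50.0), y_range=(-50.0, 50.0),
             cell=2.0, mask_ratio=0.7, rng=None):
    # Assign fused points to BEV cells and mask a fraction of non-empty cells.
    rng = np.random.default_rng(rng)
    pts = np.concatenate([points_ego, points_coop], axis=0)  # (N, 3), ego frame
    ix = np.floor((pts[:, 0] - x_range[0]) / cell).astype(int)
    iy = np.floor((pts[:, 1] - y_range[0]) / cell).astype(int)
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    keep = (ix >= 0) & (ix < nx) & (iy >= 0) & (iy < ny)
    pts, ix, iy = pts[keep], ix[keep], iy[keep]
    cell_id = ix * ny + iy
    occupied = np.unique(cell_id)
    n_masked = int(mask_ratio * occupied.size)
    masked_cells = rng.choice(occupied, size=n_masked, replace=False)
    is_masked = np.isin(cell_id, masked_cells)
    return pts[~is_masked], pts[is_masked], masked_cells

def chamfer(pred, gt):
    # Symmetric Chamfer distance between two point sets (squared Euclidean).
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=-1) ** 2
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# toy usage with random points standing in for LiDAR sweeps
rng = np.random.default_rng(0)
ego = rng.uniform(-40, 40, size=(2000, 3))
coop = rng.uniform(-40, 40, size=(1500, 3))
visible, masked, cells = bev_mask(ego, coop, rng=1)
print(visible.shape, masked.shape, chamfer(visible[:256], masked[:256]))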
We then randomly apply a high masking ratio towards non-empty BEV grids, where the point clouds inside the grid will be masked and replaced by learnable point cloud tokens. These tokens will be used alongside with visible point clouds to perform reconstruction target with a light-weight decoder. §.§ Reconstruction Objective As the number of point clouds varies in each masked BEV grid, we designate the lightweight decoder to output a fixed number of point clouds for reconstruction and utilize the Chamfer distance loss as the learning objective. Specifically, suppose in the masked BEV grid g_i,j, the reconstruction loss is defined as the following equation between predicted point clouds P̂ and groundtruth point clouds P: .9!L_rec(P̂,P)=1/|P̂|∑_p̂_̂î∈P̂min_p_j∈ P p_i-p̂_̂ĵ_2^2+1/|P|∑_p_i∈ Pmin_p̂_̂ĵ∈P̂ p_i-p̂_̂ĵ_2^2 Note that the groundtruth point clouds in each masked grid contain a mixture of points from P_ego and P_coop. § EXPERIMENTS §.§ Experiment Setting Datasets. We evaluate our method on three datasets, namely V2X-Real<cit.>, V2V4Real<cit.>, and OPV2V<cit.>. V2X-Real is a large-scale, real-world V2X dataset that encompasses all V2X collaboration modes, including vehicle-centric (VC), infrastructure-centric (IC), vehicle-to-vehicle (V2V), and infra-to-infra (I2I). LiDAR sensors for this dataset were embodied with different sensor configurations and deployed in both intersection and corridor scenarios. Note that except for the V2V subset in the V2X-Real dataset, the rest of the subsets all exhibit such LiDAR sensor heterogeneity property. We also examine the effectiveness of our pretraining method on V2V4Real and OPV2V, two well-established benchmarks for real-world and simulated V2V cooperative perception. Evaluation Metrics. Following previous evaluation protocols <cit.>, we adopt the Average Precisions (AP) at the specified Intersection-over-Union (IoU) threshold as the metric to evaluate the detection performance. For V2X-Real <cit.> dataset, we evaluate the performance of different subclasses (i.e. vehicle, pedestrian, and truck) and a final mean average precision (mAP) at the threshold of 0.3 and 0.5. For V2V4Real <cit.> dataset, we evaluate the performance of the vehicle class at the threshold of 0.5 and 0.7 in both synchronized and asynchronized modes. For OPV2V <cit.> dataset, we evaluate the performance of the vehicle class at the threshold of 0.5 and 0.7 at their two separate test sets. Implementation Details. We evaluate our pretraining method using SECOND backbone<cit.>. Thus to have a fair comparison, we also train SECOND baseline method in V2X-Real<cit.> and V2V4Real<cit.>. All experiments are conducted in one Nvidia A6000 GPU. We employ AdamW<cit.> optimizer with a weight decay of 1×10^-2 to optimize our models. During the pretraining stage, we train the model with a batch size of 4 for 15 epochs using a learning rate of 0.002, and we decay the learning rate with a cosine annealing<cit.>. We use a masking ratio of 0.7 in our main experiments and a fixed predicted point cloud number of 20. During the fine-tuning stage, the optimization process is identical to the train-from-scratch baselines. We finetune the model for 40 epochs in OPV2V dataset, 60 epochs in V2V4Real dataset, and 20 epochs in V2X-Real dataset. We also add normal point cloud data augmentations for all experiments, including scaling, rotation, and flip<cit.>. §.§ Main Results In Table <ref>, we present the experimental results across all collaboration modes in V2X-Real<cit.> dataset. 
In the VC and V2V settings, CooPre substantially outperforms the train-from-scratch baselines across all classification categories. The improvements are particularly notable in the Car and Truck categories, with a 4.4 mAP0.5 increase in the Car category and a 6.3 mAP0.5 increase in the Truck category in the VC setting, and a 4.8 mAP0.5 increase in the Car category and a 5.3 mAP0.5 increase in the Truck category in the V2V setting. This aligns with our pretraining design, which enables the model to learn more 3D geometrical and topological information beforehand, resulting in more accurate bounding box detection results with higher thresholds. On the other hand, the pretraining shows an incremental effect on detecting pedestrians. We attribute this to two factors: 1) pedestrians are small-scale in nature and thus receive fewer LiDAR features than larger objects, and 2) pedestrians are non-rigid-body objects, making it more challenging for the model to learn their 3D features compared to rigid-body counterparts such as cars and trucks. On the infrastructure side, our pretraining method shows improvements for the Car and Truck categories, but we observe a subtle decrease in the Pedestrian category. We attribute this to the small-scale and non-rigid-body nature of pedestrians, which might be negatively affected by asynchronized results transmitted from connected vehicles, such as pose and localization errors<cit.>. This asynchronization can alter the perception accuracy of static infrastructure observers. We show the generalizability of our method in another real-world dataset V2V4Real<cit.> and a simulation dataset OPV2V<cit.>, as shown in Table <ref> and Table <ref>, respectively. For the V2V4Real dataset, after we pretrain and finetune the model, we test it under synchronized and asynchronized modes for a fair comparison. The improvement is substantial in both testing settings. For the OPV2V dataset, we outperform the baseline by a large margin across all test sets. These results demonstrate the generalizability of our pretraining method across different domains. §.§ Data Efficiency In this section, we investigate the benefits of our pretraining method in scenarios with limited labeled data. Specifically, we randomly sample 20%, 50%, and 80% of the training dataset and train the models with these annotated subsets. For our method, we pretrain the model on the entire training set and then finetune it on each sampled subset. As shown in Fig.<ref>, our method outperforms train-from-scratch baselines across all settings. Notably, the performance gain of CooPre increases as the percentage of data decreases. Additionally, CooPre provides crucial guidance when collaboration involves different sensor configurations. For instance, in the V2X-Real VC dataset, the baseline method struggles to learn a good representation with limited annotations and the performance drops dramatically due to data heterogeneity issues. With our pretraining method, the model learns meaningful prior knowledge of different sensor distributions, leading to substantial improvements even when finetuning with less labeled data. These findings demonstrate the effectiveness of our method in data scarcity scenarios. §.§ Cross-domain Transferability We evaluate the cross-dataset transferability of our pretrained 3D encoder by finetuning it on another dataset. 
To isolate the effects of data heterogeneity, we investigate the performance across two real-world V2V datasets (V2X-Real V2V and V2V4Real), which have similar sensor configurations. As shown in Table <ref>, pretraining on a different domain improves performance compared to the train-from-scratch baseline. However, it performs worse than pretraining on the source domain due to the domain gap issue. We also conduct an experiment where we combine the V2X-Real V2V and V2V4Real datasets to create a large pretraining corpus. While this approach improves performance over pretraining on the source domain, the gains are less significant. This difference could be attributed to the scenario differences between the V2X-Real V2V and V2V4Real datasets. The former primarily focuses on intersection scenarios, whereas the latter includes a broader range of corridor scenarios in its training set. §.§ Ablation Studies In this section, we conduct extensive experiments to explain how, where, and why our pretraining framework benefits current cooperative perception backbones. Effectiveness of cooperative agents during pretraining. To assess the impact of collaboration during pretraining, we compare the performance of our multi-agent pretraining method with the ego-agent pretraining method<cit.>. As shown in Table <ref>, multi-agent cooperative pretraining leads to further improvements in model performance. Extent of improvements in different perception ranges. We also explore the enhancements across various perception ranges. As shown in Table<ref> and Table<ref>, the most substantial improvements are observed in middle and long ranges compared to the baselines. Given that our multi-agent pretraining method achieves a significantly larger perception field compared to the ego-agent pretraining method, our approach exhibits greater robustness in handling long-range perception and occlusions, as illustrated in Fig.<ref>. Improvements with different cooperative fusion strategies. While our primary focus is on the intermediate fusion strategy, our method also applies to other fusion strategies, as depicted in Fig.<ref>. We observe that our pretraining strategy benefits other cooperative fusion strategies as well, with the intermediate fusion strategy consistently demonstrating the best performance. Robustness assessment. Following <cit.>, we evaluate our method's robustness towards localization error and time delay compared to the baseline method on the V2V4Real dataset, as illustrated in Fig.<ref>. Our method is less sensitive to localization errors and shows its strong robustness against time delay. Masking Ratio. Table <ref> demonstrates the effect of the masking ratio within the range of 0.6 to 0.8. With a masking ratio of 0.7, the pretraining framework achieves the best performance. § CONCLUSION We introduce a multi-agent pretraining framework that prompts the representation to learn a holistic prior knowledge of the 3D environment before performing the perception task. The framework explores the intrinsic geometrical and topological information of scenarios and sensor distributions. Extensive experiments on representative datasets demonstrate the efficacy of the method as it outperforms the previous state-of-the-art methods in all V2X settings. Furthermore, we demonstrate this framework's strong generalizability, cross-domain adaptability, and data efficiency in cooperative perception. 
Future work includes extending this self-supervised cooperative pretraining paradigm to joint cooperative perception and prediction tasks.
http://arxiv.org/abs/2408.11547v1
20240821115534
Asymptotic Normality of Chatterjee's Rank Correlation
[ "Marius Kroll" ]
math.PR
[ "math.PR", "math.ST", "stat.TH", "62H20, 60F05, 62G20, 62G30" ]
§ ABSTRACT We prove that Chatterjee's rank correlation based on i.i.d. copies of a random vector (X,Y) is asymptotically normal whenever Y is not almost surely constant. No further conditions on the joint distribution of X and Y are required. MSC2020 Classification: 62H20; 60F05; 62G20; 62G30. Keywords and phrases: Chatterjee's Rank Correlation; Limit Theorems. § INTRODUCTION Suppose that (X_1, Y_1), …, (X_n, Y_n) are i.i.d. copies of some random vector (X,Y) ∈ℝ^2. By reordering this sample according to the X-values, we obtain a new sample (X_1', Y_1'), …, (X_n', Y_n') such that X_k' = X_(k), which denotes the k-th order statistic of the X_1, …, X_n with ties broken at random. <cit.> introduced his rank correlation as ξ_n := 1 - (n ∑_i=1^n-1 |r_i+1 - r_i|) / (2 ∑_i=1^n l_i (n-l_i)), where r_i := ∑_j=1^n 1(Y_j' ≤ Y_i') is the rank of Y_i' among the Y_1, …, Y_n, and l_i := ∑_j=1^n 1(Y_j' ≥ Y_i'). It estimates the Dette-Siburg-Stoimenov measure of dependence <cit.>, which can be written as ξ = (∫Var(𝔼[1_[y, ∞)(Y) | X]) dℙ^Y(y)) / (∫Var(1_[y, ∞)(Y)) dℙ^Y(y)), where ℙ^Y denotes the distribution of Y. This measure of dependence is 0 if and only if X and Y are independent and 1 if and only if Y is a measurable function of X almost surely. The interest in Chatterjee's rank correlation is considerable, as is the body of literature on this simple yet elegant measure of dependence <cit.>. Despite these efforts, the limiting behaviour of Chatterjee's rank correlation is only partly understood. <cit.> in his original paper derives the limiting distribution under the assumption that X and Y are independent, and <cit.> obtain the weak limit for continuous random vectors (X,Y). Further results under weaker assumptions do not exist, yet would be extremely convenient; for instance, <cit.> show that an m out of n bootstrap can be used to construct confidence intervals for ξ, provided that asymptotic normality is established. In this paper, we do just that. More precisely, we show that Chatterjee's rank correlation is always asymptotically normal. The only restriction imposed on X and Y is that Y must not be almost surely constant, which is necessary for ξ_n to be well-defined. In particular, it does not matter whether X and Y are independent or not. § MAIN RESULT If Y is not almost surely constant, then √(n)(ξ_n - ξ) → 𝒩(0, σ^2). The limiting variance σ^2 is σ^2 = (2μ_2)^-2{σ_1^2 - 2 σ_1, 2μ_1/μ_2 + σ_2^2 μ_1^2/μ_2^2}, where μ_1 = ℙ(Y_1 ∧ Y_2 < Y_3 ≤ Y_1 ∨ Y_2 | X_1 = X_2), μ_2 = ℙ(Y_1 < Y_2 ≤ Y_3) and σ_i^2 = ∑_k=-1^1 𝔼[H_i(Y_1, Y_2) {(H_i(Y_1+k, Y_2+k) - H_i(Y_3, Y_4))} | X_0 = … = X_4] + Cov(H_i(Y_1, Y_2), H_i(Y_3, Y_4) | X_1 = … = X_4), i = 1,2, as well as σ_1, 2 = ∑_k=-1^1 𝔼[H_1(Y_1, Y_2) {(H_2(Y_1+k, Y_2+k) - H_2(Y_3, Y_4))} | X_0 = … = X_4] + Cov(H_1(Y_1, Y_2), H_2(Y_3, Y_4) | X_1 = … = X_4), for the functions H_1(y,y') = ℙ(y ∧ y' < Y ≤ y ∨ y') + ℙ(Y_1 ∧ Y_2 < y ≤ Y_1 ∨ Y_2 | X_1 = X_2), H_2(y) = ℙ(y < Y_1 ≤ Y_2 | X_1 = X_2) + ℙ(Y_1 < y ≤ Y_2 | X_1 = X_2) + ℙ(Y_1 < Y_2 ≤ y | X_1 = X_2). We also have the identity ξ = 1 - μ_1/(2μ_2). Finally, if the distribution of (Y_1, Y_2) conditional on the event {X_1 = X_2} is continuous, then H_2(y) = 1/2 for all y ∈ℝ, and consequently σ_1, 2 = σ_2^2 = 0.
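For readers who wish to experiment with the estimator, the following short Python sketch is ours; it simply transcribes the definition of ξ_n from the introduction, with ties in X broken at random, and the toy data at the end are only for illustration.

import numpy as np

def chatterjee_xi(x, y, rng=None):
    # Chatterjee's rank correlation xi_n for samples x, y (ties allowed).
    rng = np.random.default_rng(rng)
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    n = x.size
    # sort by x, breaking ties in x at random
    order = np.lexsort((rng.random(n), x))
    y_sorted = y[order]
    # r_i = #{j : Y'_j <= Y'_i},  l_i = #{j : Y'_j >= Y'_i}
    r = np.array([(y_sorted <= yi).sum() for yi in y_sorted])
    l = np.array([(y_sorted >= yi).sum() for yi in y_sorted])
    num = n * np.abs(np.diff(r)).sum()
    den = 2.0 * (l * (n - l)).sum()   # positive as soon as Y is not constant
    return 1.0 - num / den

# quick illustration on a noisy monotone relationship (toy data, ours)
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = x + 0.5 * rng.normal(size=5000)
print(round(chatterjee_xi(x, y), 3))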
Let us give an overview of the proof idea. Our main idea is to describe the ratio 1 - ξ_n in terms of V-statistics. Consider first the sum in the numerator. Recall that we obtain data Y_1', …, Y_n' by reordering the original observations (X_1, Y_1), …, (X_n, Y_n) according to the X-observations into a new sample (X_1', Y_1'), …, (X_n', Y_n'). Writing sgn(x) for the sign of a real number x, we have ∑_i=1^n-1 |r_i+1 - r_i| = ∑_i=1^n-1sgn(r_i+1 - r_i)(r_i+1 - r_i) = ∑_i=1^n-1sgn(Y'_i+1 - Y'_i)(r_i+1 - r_i) = ∑_i=1^n-1∑_j=1^nsgn(Y'_i+1 - Y'_i){1(Y_j' ≤ Y_i+1') - 1(Y_j' ≤ Y_i')} = ∑_i,j=1^n-1sgn(Y'_i+1 - Y'_i){1(Y_j' ≤ Y_i+1') - 1(Y_j' ≤ Y_i')} + 𝒪(n-1) The denominator can be similarly written as ∑_i=1^n l_i (n-l_i) = ∑_i,j,k = 1^n 1(Y_j' ≥ Y_i') 1(Y_k' < Y_i'). With the notation h_1((s_1, s_2), (t_1, t_2)) := sgn(s_2 - s_1){1(t_1 ≤ s_2) - 1(t_1 ≤ s_1)}, h_2(s,t,u) := 1(t ≥ s) 1(u < s), we can therefore write n ∑_i=1^n-1 |r_i+1 - r_i|/2 ∑_i=1^n l_i (n-l_i) = n^3 V_h_1 + 𝒪(n^2)/2 n^3 V_h_2 = V_h_1/2 V_h_2 + 𝒪(1/n), where V_h_1 denotes the V-statistic with kernel h_1 based on the data (Y_1', Y_2'), …, (Y_n-1', Y_n') and V_h_2 the V-statistic with kernel h_2 based on Y_1', …, Y_n'. If we can establish joint convergence of V_h_1 and V_h_2, we can use the Delta-method to derive the limiting distribution of Chatterjee's rank correlation. Proving joint convergence of two V-statistics is only marginally more difficult than proving convergence of a single V-statistic by virtue of the Cramér-Wold device. Our intermediate goal is now to establish weak convergence of V-statistics based on (Y_1', Y_2'), …, (Y_n-1', Y_n'), and this result is of independent interest. For some fixed r ∈ℕ let h : (ℝ^2)^r →ℝ be bounded and measurable and denote by V_h the V-statistic with kernel h based on the data (Y_1', Y_2'), …, (Y_n-1', Y_n'). Then √(n)(V_h - μ_h ) 𝒩(0, σ_h^2), where μ_h = 𝔼[h([ Y_1; Y_2 ], …, [ Y_2r-1; Y_2r ])  | ∀ i = 1, 3, …, 2r-1 : X_i = X_i+1] and σ_h^2 = ∑_k=-1^1 𝔼[H(Y_1, Y_2) {(H(Y_1+k, Y_2+k) - H(Y_3, Y_4))} |  X_0 = … = X_4] + Cov(H(Y_1, Y_2), H(Y_3, Y_4)  |  X_1 = … = X_4). The function H : ℝ^2 →ℝ is H = ∑_j=1^r h_j, and the h_j : ℝ^2 →ℝ are defined as h_j(y, y') = 𝔼[h([ Y_1; Y_2 ], …, [ y; y' ], …[ Y_2r-1; Y_2r ])  | ∀ i = 1, 3, …, 2r-1 : X_i = X_i+1], with the vector (y,y')^T appearing in the j-th argument of h. If X and Y are independent, then μ_h = 𝔼[h([ Y_1; Y_2 ], …, [ Y_2r-1; Y_2r ])]. If furthermore h is symmetric in its arguments, then h_j(y,y') = 𝔼[h([ y; y' ], [ Y_3; Y_4 ], …, [ Y_2r-1; Y_2r ])] for all j, i.e. each h_j is equal to the usual first-order term in the Hoeffding decomposition of V_h. This also means that σ_h^2 = r^2 ∑_k=-1, 0, 1Cov(h_1(Y_1, Y_2), h_1(Y_1+k, Y_2+k)), which is exactly the limiting variance that we expect from a 1-dependent sequence. How can we prove Theorem <ref>? After all, the data (Y_1', Y_2'), …, (Y_n-1', Y_n') are not very well behaved. First, they are not the initial segment of a sequence, but rather come from a triangular array, since they depend on the triangular array of the order statistics X_1', …, X_n'. Second, they are neither stationary nor independent, and it is not immediately obvious that their dependence falls into one of the well-established frameworks such as mixing. To answer this question, let us first consider the simpler special case where Y_i = g(X_i, U_i) for some measurable function g and a real-valued i.i.d. process (U_i)_i ∈ℕ that is independent of (X_i)_i ∈ℕ. 
For instance, the U_i might be standard normal errors and Y_i = X_i + σ U_i for some σ > 0, but of course much more complicated models also fall under this special case. Now instead of proving Theorem <ref> directly, we can instead try and establish an analogous result for the data (X_i', U_i', X_i+1', U_i+1'), i = 1, …, n-1, where the U_1', …, U_n' arise from the U_1, …, U_n according to the same permutation that transforms X_1, …, X_n into X_1', …, X_n'. For if we have a result for V-statistics of these data, we can simply consider the kernel function f : (ℝ^4)^r →ℝ, (x_1, u_1, …, x_2r, u_2r) ↦ h([ g(x_1, u_1); g(x_2, u_2) ], …, [ g(x_2r-1, u_2r-1); g(x_2r, u_2r) ]), which will be bounded and measurable if the same is true for the original kernel h. The way to obtain such a result is via empirical process theory, and the main difficulty is the convergence of the four-dimensional distribution function of (X_i', U_i', X_i+1', U_i+1'), i = 1, …, n-1, which we denote by H_n. We can, however, use the fact that the X_1', …, X_n' are not arbitrary random variables, but the order statistic of X_1, …, X_n. More precisely, X_i' = X_(i) = F_n^-1(i/n) for any i = 1, …, n, where F_n denotes the empirical distribution function of X_1, …, X_n and G^-1 is the generalised inverse of a distribution function G. Since F_n^-1(i/n) ≤ s if and only if i/n ≤ F_n(s) we have 1(X_i' ≤ s, X_i+1' ≤ t, U_i' ≤ s', U_i+1' ≤ t') = 1(i/n≤ F_n(s), i+1/n≤ F_n(t), U_i' ≤ s', U_i+1' ≤ t') = 1(i ≤min{n F_n(s), n F_n(t) - 1}, U_i' ≤ s', U_i+1' ≤ t') for all (s, s', t, t') ∈ℝ^4. This implies that H_n(s, s', t, t') ≈1/n-1∑_i=1^⌊ n F_n(s t) ⌋1(U_i' ≤ s', U_i+1' ≤ t'). Because the process (U_i)_i ∈ℕ is independent of (X_i)_i ∈ℕ and i.i.d., the reordered segment U_1', …, U_n' is equal in distribution to the initial segment U_1, …, U_n. An appropriately centred version of H_n is therefore approximately equal in distribution to the process (n^-1 V(s', t', n F_n(s t)))_s,s',t,t' ∈ℝ, where V(s', t', λ) = ∑_i=1^⌊λ⌋{1(U_i ≤ s', U_i+1≤ t') - G(s') G(t')}, and G is the distribution function of U_1. Because the process (U_i)_i ∈ℕ is i.i.d., the observations (U_i, U_i+1), i = 1, …, n-1 are 1-dependent. Using strong approximation methods for the sequential empirical process of weakly dependent data, the process V can be approximated by a Kiefer process, and all that remains are some technical details. The desired result for V-statistics based on (X_i', U_i', X_i+1', U_i+1'), i = 1, …, n-1, can be proven from the convergence of H_n by permanence arguments for Donsker classes, similar to the standard case of i.i.d. observations. We now see that two main features are important in the proof of Theorem <ref>. First, we use the fact that the samples X_1', …, X_n' are the order statistic of X_1, …, X_n. On a technical level, this is what allows us to approximate H_n by a Kiefer process. On a more intuitive level, it dramatically simplifies the behaviour of the data Y_1', …, Y_n'. Since Y_i = g(X_i, U_i) with (U_i)_i ∈ℕ independent of (X_i)_i ∈ℕ, any dependence between Y_i' and Y_j' must come from the dependence between X_i' and X_j', since the reordered segment U_1', …, U_n' is equal in distribution to U_1, …, U_n. But since X_1', …, X_n' are simply the order statistic of X_1, …, X_n, we know at least roughly what X_i' and X_j' will be, since the entire segment X_1', …, X_n' will look like a stretched out version of the quantile function of X. 
This is particularly obvious if X takes only finitely many values α_1 < … < α_K with probabilities p_1, …, p_K. Then, if i ≪ n p_1, we know that X_i' = α_1 with high probability; if n p_1 ≪ i ≪ n(p_1 + p_2), we know that X_i' = α_2 with high probability, and so on. In a sense, by ordering the original observations X_1, …, X_n, we are taking the randomness out of them to a large extent. But since Y_i = g(X_i, U_i), this also means that we are taking away any dependence between Y_1', …, Y_n', making them a much less daunting sample. The second feature that we rely upon is the assumed model Y_i = g(X_i, U_i) and the fact that the process (U_i)_i ∈ℕ is i.i.d. and independent of (X_i)_i ∈ℕ. This ensures that U_1', …, U_n' is equal in distribution to U_1, …, U_n, and ultimately this is what allows us to use results for sequential empirical processes based on weakly dependent data. Of course, it is also what allows us to even consider the process H_n in the first place. Until now, this representation was an assumption that we made to simplify the situation. It was motivated by specific models such as the aforementioned Y_i = X_i + σ U_i for standard normal errors U_i. But there is another interpretation of the identity Y_i = g(X_i, U_i). The quantity Y_i has, of course, randomness in it. Some of this randomness might be caused by X_i, if the two are dependent. But unless Y_i is a deterministic function of X_i, this does not explain all the randomness in Y_i. The entire randomness of Y_i must come from X_i as well as from `everything else', and this `everything else' should intuitively be independent of X_i. The function g describes how exactly these two influences – X_i and `everything else' – combine to determine the value of Y_i. While this is of course purely heuristic, it suggests that we might always be able to find a representation Y = g(X,U). Indeed, this claim can be proven rigorously <cit.>: For any two real-valued random variables X and Y, there exists a measurable function g and a real-valued random variable U, independent of X, such that Y = g(X,U). Thus, the identity Y_i = g(X_i, U_i) is not just a simplifying assumption; it is a fact that we can freely use. We can therefore employ the exact same strategy outlined above to prove Theorem <ref> for any pair of random variables X and Y. § PROOFS §.§ Proof of Theorem <ref> Throughout the remaining article, we may use the notation (-∞, x] := ∏_j=1^d (-∞, x_j] for x ∈ℝ^d as well as ξ(f) := ∫ f  dξ for a signed measure ξ. The notation ℓ^∞(T) means the space of bounded real-valued functions on T, and if not specified otherwise, we equip this space with the supremum norm ·_∞. If we write X_n → X in distribution for two bounded processes on T, we mean convergence of distribution as elements of ℓ^∞(T), unless specified otherwise. This weak convergence is meant in the sense of <cit.>. We do not use special notation such as X_n ⇝ X for this. Instead we write X_n X, as in the case of traditional weak convergence. There exists a copy (X̅_k, Y̅_k)_k ∈ℕ of (X_k, Y_k)_k ∈ℕ and a sequence (U_k)_k ∈ℕ of real-valued random variables along with a measurable function g with the following properties: * (X̅_k, Y̅_k)_k ∈ℕ = (X̅_k, g(X̅_k, U_k))_k ∈ℕ almost surely, * (U_k)_k ∈ℕ is i.i.d. and independent of (X̅_k)_k ∈ℕ, * for all n ∈ℕ, U_1', …, U_n' are i.i.d. and independent of (X̅_k)_k ∈ℕ, where U_1', …, U_n' are a reordering of U_1, …, U_n according to the same permutation that transforms X̅_1, …, X̅_n into X̅_(1), …, X̅_(n). 
By Proposition 4.1 in <cit.>, there exists a measurable function g and a real-valued random variable U_1 such that (X_1, Y_1) = (X_1, g(X_1, U_1)) almost surely. Let (X̅_k, U_k) be i.i.d. copies of (X_1, U_1), k ≥ 2, set X̅_1 := X_1 and Y̅_k := g(X̅_k, U_k). This proves (<ref>) and (<ref>). To prove (<ref>), notice that we can write U_1', …, U_n' as σ_X̅_1, …, X̅_n(U_1, …, U_n), where σ_X̅_1, …, X̅_n is a random permutation fully determined by X̅_1, …, X̅_n. Furthermore, with η denoting the distribution of U_1, it holds that (U_1, …, U_n) has distribution η^n by the i.i.d. property of the U_k. Let M be a measurable set, and write μ_n for the joint distribution of the X̅_1, …, X̅_n, then ℙ((U_1', …, U_n') ∈ M) = ∬1_M(σ_x_1, …, x_n(u_1, …, u_n))  dη^n(u_1, …, u_n)  dμ_n(x_1, …, x_n) = ∬1_M(u_1, …, u_n)  dη^n(u_1, …, u_n)  dμ_n(x_1, …, x_n) = ∫ℙ((U_1, …, U_n) ∈ M)  dμ_n = ℙ((U_1, …, U_n) ∈ M), where the first equality holds because the U_1, …, U_n are independent of the X̅_1, …, X̅_n and the second one holds because product measures are invariant under permutations. Therefore the U_1', …, U_n' are i.i.d. since the same holds for the U_1, …, U_n. By the same argument, it holds that 𝔼[1_M(U_1', …, U_n')  | (X̅_k)_k ∈ℕ] = ∫1_M(σ_X̅_1, …, X̅_n(u_1, …, u_n))  dη^n(u_1, …, u_n) = ∫1_M(u_1, …, u_n)  dη^n(u_1, …, u_n) = ℙ((U_1', …, U_n') ∈ M) almost surely, and so (U_1', …, U_n') is independent of (X̅_k)_k ∈ℕ. From now on we will assume without loss of generality that the process (X_k, Y_k)_k ∈ℕ fulfills statements (<ref>) through (<ref>) in Lemma <ref>, otherwise we may replace it with (X̅_k, Y̅_k)_k ∈ℕ. While we do not make it explicit in our notation, recall that X_1', …, X_n' is not a sequence but rather the n-th row in a triangular array, and the same holds for U_1', …, U_n'. A more precise notation would be X_n,1', …, X_n,n' and U_n,1', …, U_n,n'. Define the triangular array W := (W_n,i)_1 ≤ i ≤ n, n ∈ℕ, where W_n,i:= (X_i', U_i') = (X_n,i', U_n,i'). Then the empirical distribution of (W_n,1, W_n,2), …, (W_n,n-1, W_n, n) is determined by the four-dimensional cumulative distribution function H_n, H_n(s, s', t, t') = 1/n-1∑_i=1^n-11(X_i' ≤ s, X_i+1' ≤ t, U_i' ≤ s', U_i+1' ≤ t'). Let F and G denote the distribution functions of X and U_1, respectively, and define H by H(s, s', t, t') := F(s t) G(s') G(t'). Then it holds that √(n)(H_n - H) B_H. B_H is a centred process given by B_H(s, s', t, t') = K_G(s', t', F(s t)) + B_F(s,t) G(s') G(t'), where in turn K_G and B_F are two independent centred Gaussian processes on ℝ^2 × [0, ∞) and ℝ^2, respectively. The process K_G has covariance function Γ_G((s_1', t_1', λ_1), (s_2', t_2', λ_2)) = (λ_1 λ_2) Λ_G((s_1', t_1'), (s_2', t_2')), where Λ_G((s_1', t_1'), (s_2', t_2')) := G(s_1') G(t_2'){G(s_2' t_1') - G(s_2')G(t_1')} + G(s_2') G(t_1'){G(s_1' t_2') - G(s_1')G(t_2')} + G(s_1' s_2')G(t_1' t_2') - G(s_1')G(s_2')G(t_1')G(t_2'), and the process B_F has covariance function Γ_F((s_1, t_1), (s_2, t_2)) = F(s_1 t_1 s_2 t_2) - F(s_1 t_1) F(s_2 t_2). By Lemma 21.1 in <cit.>, it holds that J^-1(p) ≤ x if and only if p ≤ J(x) for any distribution function J and any 0 < p < 1, where J^-1 denotes the generalised inverse of a distribution function. Thus, X_i' = X_(i) = F_n^-1(i/n) ≤ s if and only if i/n ≤ F_n(s) for all 1 ≤ i < n, which implies 1(X_i' ≤ s, X_i+1' ≤ t, U_i' ≤ s', U_i+1' ≤ t') = 1(i/n≤ F_n(s), i+1/n≤ F_n(t), U_i' ≤ s', U_i+1' ≤ t') = 1(i ≤min{n F_n(s), n F_n(t) - 1}, U_i' ≤ s', U_i+1' ≤ t') for all 1 ≤ i < n-1. 
Writing m = min{n F_n(s), n F_n(t) - 1}, we therefore get H_n(s, s', t, t') = 1/n-1∑_i=1^⌊ m ⌋1(U_i' ≤ s', U_i+1' ≤ t') + 𝒪(1/n). Furthermore, m = n F_n(s t) + 𝒪(1), and so H_n(s, s', t, t') = 1/n-1∑_i=1^⌊ n F_n(s t) ⌋1(U_i' ≤ s', U_i+1' ≤ t') + 𝒪(1/n). Because each U_1', …, U_n' is independent from X_1, …, X_n and equal in distribution to U_1, …, U_n, H_n is equal in distribution to the process defined by 1/n-1∑_i=1^⌊ n F_n(s t) ⌋1(U_i ≤ s', U_i+1≤ t'), up to an error term of 𝒪(1/n). This equality in distribution holds not only for specific s,t,s', t', but for the entire process H_n. Now define the process V = (V(s', t',λ))_λ≤ 0, s',t' ∈ℝ by V(s', t', λ) = ∑_i=1^⌊λ⌋{1(U_i ≤ s', U_i+1≤ t') - G(s') G(t')}. Because (U_i)_i ∈ℕ is i.i.d., the block observations (U_i, U_i+1) are strictly stationarity and 1-dependent; in particular, they are absolutely regular (or β-mixing). Theorem 3.1 in <cit.> therefore implies the existence of a Kiefer process K_G, indexed on ℝ×ℝ× [0, ∞), such that sup_0 ≤λ≤ n, s', t' ∈ℝ| V(λ, s', t') - K_G(s', t', λ)| = 𝒪_a.s.(n^1/3 (log n)^6). By Kiefer process, we mean a centred Gaussian process with covariance function Λ_G, defined by Γ_G((s_1', t_1', λ_1), (s_2', t_2', λ_2)) = (λ_1 λ_2) Λ_G((s_1', t_1'), (s_2', t_2')), where Λ_G((s_1', t_1'), (s_2', t_2')) := G(s_1') G(t_2'){G(s_2' t_1') - G(s_2')G(t_1')} + G(s_2') G(t_1'){G(s_1' t_2') - G(s_1')G(t_2')} + G(s_1' s_2')G(t_1' t_2') - G(s_1')G(s_2')G(t_1')G(t_2'). The extra terms in Λ_G, compared to the Kiefer processes arising from sequential empirical processes based on i.i.d. observations, come from the dependencies at lag 1. In particular, for any fixed n, the process (n^-1/2 K_G(s', t', nλ))_s',t',λ is equal in distribution to K_G itself. Now the process √(n)(H_n - H) is equal in distribution to the process B_n,H defined by B_n,H(s, s', t, t') := √(n)/n-1 V(s',t', n F_n(s t)) + √(n) ⌊ n F_n(s t)⌋ - ⌊ n F(s t)⌋/n-1 G(s') G(t') = √(n)/n-1 K_G(s',t', n F_n(s t)) + √(n) ⌊ n F_n(s t)⌋ - ⌊ n F(s t)⌋/n-1 G(s') G(t') + 𝒪_a.s.(n^-1/6(log n)^6) 𝒟= K_G(s',t', F_n(s t)) + √(n)(F_n(s t) - F(s t)) G(s') G(t') + 𝒪(n^-1/2) + 𝒪_a.s.(n^-1/6(log n)^6), by Eq. (<ref>). The equality symbol overset with the letter 𝒟 signifies equality in distribution. Since K_G only depends on (U_k)_k ∈ℕ which is independent of (X_k)_k ∈ℕ, the process (F_n(s t) - F(s t))_s,t ∈ℝ is independent of K_G. Furthermore, it holds that √(n)(F_n(s t) - F(s t))_s,t ∈ℝ (B_F(s,t))_s,t ∈ℝ for a centred Gaussian process B_F with covariance function Γ_F((s_1, t_1), (s_2, t_2)) = F(s_1 t_1 s_2 t_2) - F(s_1 t_1) F(s_2 t_2). This follows by Theorems 2.6.4 and 2.5.2 as well as Eq. (2.1.2) in <cit.> because the class {(-∞, s t]  |  s,t ∈ℝ} has a finite Vapnik-Červonenkis (VC) dimension. For the same reason, (F_n(s t))_s,t ∈ℝ converges as a process to (F(s t))_s,t ∈ℝ because a measurable VC-class is also Glivenko-Cantelli; cf. Theorems 2.4.3 and 2.6.4 in <cit.>. This also implies that the process (K_G(s', t', F_n(s t)))_s, t, s', t' converges almost surely to the process (K_G(s', t', F(s t)))_s, t, s', t', since the sample paths of K_G(s', t', λ) are almost surely uniformly continuous in λ with respect to the usual metric on ℝ by Theorem 3.1, Item 2, in <cit.>. Thus, by Eq. (<ref>), Slutsky's theorem and the continuous mapping theorem, it holds that B_n,H B_H, where the process B_H is defined by B_H(s, s', t, t') := K_G(s', t', F(s t)) + B_F(s,t) G(s') G(t'). 
Using the notation u_i := (s_i, s_i', t_i, t_i'), the covariance function Γ of the limiting process B_H in Proposition <ref> can be written as Γ(u_1, u_2) = ∑_k=-1, 0, 1{𝔼[1_(-∞, u_1][ X_1; U_1; X_1; U_2 ]1_(-∞, u_2][ X_1; U_1+k; X_1; U_2+k ]] - 𝔼[1_(-∞, u_1][ X_1; U_1; X_1; U_2 ]1_(-∞, u_2][ X_1; U_3; X_1; U_4 ]]} + 𝔼[1_(-∞, u_1][ X_1; U_1; X_1; U_2 ]1_(-∞, u_2][ X_1; U_3; X_1; U_4 ]] - 𝔼[1_(-∞, u_1][ X_1; U_1; X_1; U_2 ]1_(-∞, u_2][ X_2; U_3; X_2; U_4 ]]. While this form is somewhat cumbersome, we can readily generalise Γ to take arbitrary square-integrable functions f and g as arguments by replacing the indicator functions 1_(-∞, u_1] and 1_(-∞, u_2] in the above expression by f and g, respectively, i.e. Γ(f,g) = ∑_k=-1, 0, 1{𝔼[f[ X_1; U_1; X_1; U_2 ] g[ X_1; U_1+k; X_1; U_2+k ]] - 𝔼[f[ X_1; U_1; X_1; U_2 ] g[ X_1; U_3; X_1; U_4 ]]} + 𝔼[f[ X_1; U_1; X_1; U_2 ] g[ X_1; U_3; X_1; U_4 ]] - 𝔼[f[ X_1; U_1; X_1; U_2 ] g[ X_2; U_3; X_2; U_4 ]]. For this generalised version of Γ, the monotone convergence theorem implies that if f_k and g_k, k ∈ℕ, are non-negative measurable functions that converge monotonically to functions f and g, respectively, then Γ(f_k, g_k) →Γ(f,g). Finally, if either one of the functions f and g are constant, then Γ(f,g) = 0. If we write Q_n for the empirical measure of (W_n,1, W_n,2), …, (W_n,n-1, W_n,n), Q for the distribution of (X_1, U_1, X_1, U_2) and ℱ = {1_(-∞, x] |  x ∈ℝ^4}, then Proposition <ref> can be restated as follows: The process (√(n)(Q_n(f) - Q(f)))_f ∈ℱ converges weakly to a centred Gaussian process whose covariance function Γ is given in Eq. (<ref>). Let X_n = (X_n(t))_t ∈ T be a real-valued processes indexed in some set T. If X_n X for some tight process X, then Y_n := (X_n(t_j))_t ∈ T^ℕ(X(t_j))_t ∈ T^ℕ =: Y as random elements of ℓ^∞(T)^ℕ, where t_j denotes the j-th coordinate of t = (t_1, t_2, …) ∈ T^ℕ. For k ∈ℕ, define the processes Y_n^(k) := (X_n(t_j))_t ∈ T^k and Y^(k) := (X(t_j))_t ∈ T^k. By Theorem 1.4.8 in <cit.> it suffices to show that Y_n^(k) converges weakly to Y^(k) for any k ∈ℕ. Let us therefore fix such a k. For any t^(1), …, t^(m)∈ T^k, the weak convergence of (Y_n^(k)(t_1), …, Y_n^(k)(t_m)) follows from the weak convergence of the finite dimensional marginals (X_n(t_1^(1)), …, X_n(t_k^(1)), …, X_n(t_1^(m)), …, X_n(t_k^(m))), where t_j^(i) denotes the j-th coordinate of t^(i). By Lemma 1.3.8 in <cit.>, the sequence (X_n)_n ∈ℕ is asymptotically tight as well as asymptotically measurable. Lemmas 1.4.3 and 1.4.4 in <cit.> therefore imply that the same is true for (Y_n^(k))_n ∈ℕ. The convergence of the finite dimensional marginals together with Prohorov's Theorem <cit.> now give us weak convergence of Y_n^(k) to Y^(k). Let S be a normed space, T ⊆ S a subset that is closed under scalar multiplication and write T_+ = {∑_i=1^∞ t_i  |  t_i ∈ T and ∑_i=1^∞ t_i converges} for the set of all convergent series formed from elements of T. Suppose that X and X_n are real-valued processes indexed in S that are also linear over T, i.e. X_n(α_1 t_1 + α_2 t_2) = α_1 X_n(t_1) + α_2 X_n(t_2) and X(α_1 t_1 + α_2 t_2) = α_1 X(t_1) + α_2 X(t_2) for all t_1, t_2 ∈ T and α_1, α_2 ∈ℝ. If (X_n(t))_t ∈ T (X(t))_t ∈ T and (X(t))_t ∈ T is tight, then (X_n(s))_s ∈ T_+(X(s))_s ∈ T_+, i.e. weak convergence as processes on T carries over to weak convergence on T_+. For each s ∈ T_+, there is a non-empty set Σ(s) = {(t_1, t_2, …) ∈ T^ℕ | ∑_i=1^∞ t_i = s}. By the axiom of choice, we can choose one element from each Σ(s) and denote this representative by s̃. 
We equip the product space ℓ^∞(T)^ℕ with the norm (f_1,f_2, …) := ∑_i=1^∞ 2^-if_i_∞/(1 + f_i_∞), which induces the product topology on ℓ^∞(T)^ℕ. Consider the subspace D_0 of all elements (f_1, f_2, …) such that sup_i ∈ℕf_i_∞ < ∞ and define the operator ϕ : D_0 →(ℓ^∞(T_+), ·_∞), (f_1,f_2, …) ↦[ s ↦∑_i=1^∞ 2^-i f_i(2^i s̃_i)]. ϕ is linear and for any (f_1,f_2,…) ∈ D_0, ϕ(f_1,f_2,…)_∞ = sup_s ∈ T_+|∑_i=1^∞ 2^-i f_i(2^i s̃_i)| ≤∑_i=1^∞ 2^-isup_t_i ∈ T |f_i(t_i)| ≤(1 + sup_i ∈ℕf_i_∞) ∑_i=1^∞ 2^-if_i_∞/1 + f_i_∞ = (1 + sup_i ∈ℕf_i_∞) (f_1, f_2, …). Thus, ϕ is continuous. Recall the objects Y_n and Y from Lemma <ref>. By that lemma and Theorem 1.3.10 in <cit.>, the weak convergence Y_n → Y also holds if we consider Y_n and Y as random elements in D_0 instead of ℓ^∞(T)^ℕ. The continuous mapping theorem <cit.> therefore implies ϕ(Y_n) ϕ(Y), but because X_n is linear over T, it holds that ϕ(Y_n) = (∑_i=1^∞ 2^-i X_n(2^i s̃_i))_s ∈ T_+ = (X_n(∑_i=1^∞s̃_i))_s ∈ T_+ = (X_n(s))_s ∈ T_+, and ϕ(Y) = (X(s))_s ∈ T_+ by the same argument. Under the assumptions of Proposition <ref> and using the notation from Remark <ref>, the process √(n)(Q_n(f) - Q(f))_f ∈ℬ converges in distribution to a centred Gaussian process whose covariance function is given in Eq. (<ref>). ℬ denotes class of all measurable bounded functions. It is easy to see that the claim certainly holds for the process √(n)(Q_n(g) - Q(g))_g ∈𝒢 on the set of scalar multiples 𝒢 = {α f  | α∈ℝ, f ∈ℱ}. This follows from the linearity of the integral and the property Γ(α f, α f) = α^2 Γ(f,f) for all α∈ℝ. Lemma <ref> then implies the claim for the process √(n)(Q_n(g) - Q(g))_g ∈𝒢_+, where 𝒢_+ is the set of all convergent series formed from elements of 𝒢. As such, 𝒢_+ contains the linear span of ℱ. It remains to show that we can extend the process convergence to the closure of the linear span under monotone convergence to a bounded function. Thus, fix some bounded h and a sequence f_k ∈span(ℱ) that monotonically converges to h. Then h - f_k monotonically converges to 0. Define the semimetric ρ by ρ^2(f,g) = Γ(f-g, f-g). Then, since h - f_k ↘ 0, we have ρ^2(h, f_k) = Γ(h - f_k, h - f_k) → 0 by Remark <ref>, i.e. f_k converges to h in ρ as well as pointwise. The proof of the permanence of the Donsker property on span(ℱ) with regards to this convergence follows by standard arguments; cf. Theorem 2.10.2 and its proof in <cit.>. The convergence in ρ is the appropriate generalisation of the L_2-convergence that is required in the standard i.i.d. case. If we define ℋ to be the closure of span(ℱ) under this type of convergence, then ℋ also contains the constant functions, as we can monotonically approximate the constant 1-function with functions from ℱ itself. By the monotone class theorem, ℋ contains the class of all measurable bounded functions ℬ. Let r ∈ℕ be fixed. Under the assumptions of Proposition <ref> and using the notation from Remark <ref>, it holds for any bounded and measurable f : (ℝ^4)^r →ℝ that √(n)(Q_n^r(f) - Q^r(f)) 𝒩(0, Γ(∑_j=1^r f_j,Q, ∑_j=1^r f_j,Q)), where each f_j,Q : ℝ^4 →ℝ is defined by f_j,Q(x) = ∫ f(x_1, …, x_r)  dQ^r-1(x_1, …, x_j-1, x_j+1, …, x_r). Let ℬ and ℬ(r) denote the class of measurable, bounded functions on ℝ^4 and (ℝ^4)^r, respectively. Let D_ϕ be the class of processes on ℬ that are of the form (P(f))_f ∈ℬ for some probability measure P. Define the operator ϕ : D_ϕ→ℓ^∞(ℬ(r)) by ϕ : (P(f))_f ∈ℬ↦(P^r(g))_g ∈ℬ(r). Then ϕ is Hadamard-differentiable on D_ϕ with derivative ϕ_P' : ℓ^∞(ℬ) →ℓ^∞(ℬ(r)), G ↦ G(∑_j=1^r f_j,P). 
To see this, fix some P ∈ D_ϕ and let t_n be a real sequence converging to 0 and G_n, G ∈ℓ^∞(ℬ) such that G_n → G and t_n G_n ∈ D_ϕ for all n. This also implies that each G_n is a signed measure. Since the product measure operator ⊗ is a non-commutative multiplication on the ring of signed measures, we can use a generalised analogue of the binomial theorem to see that (P + t_n G_n)^r(f) = ∫ f  d(P + t_n G_n)^r = ∫ f  dP^r + t_n ∑_j=1^r ∫ f  d(P^j-1⊗ G_n ⊗ P^r-j) + 𝒪(t_n^2) = P^r(f) + t_n ∫∑_j=1^r f_j,P dG_n + 𝒪(t_n^2) = P^r(f) + t_n G_n(∑_j=1^r f_j,P) + 𝒪(t_n^2). The claimed identity of the Hadamard derivative follows from this. The functional Delta method <cit.> then implies √(n)(ϕ(Q_n) - ϕ(Q)) = √(n)(Q_n^r(g) - Q^r(g))_g ∈ℬ(r)(G(∑_j=1^r g_j,P))_g ∈ℬ(r) for a centred Gaussian process G whose covariance function is given in Eq. (<ref>). This also proves our claim. Apply Lemma <ref> to f : (ℝ^4)^r →ℝ, (x_1, u_1, …, x_2r, u_2r) ↦ h([ g(x_1, u_1); g(x_2, u_2) ], …, [ g(x_2r-1, u_2r-1); g(x_2r, u_2r) ]), where g is the function from Lemma <ref>. §.§ Proof of Theorem <ref> Recall the functions h_1 and h_2 from Eq. (<ref>) and the corresponding objects h_1,j and h_2,j as well as μ_h_1 and μ_h_2 from Theorem <ref>. We have the identities H_1(y,y') := ∑_j=1^2 h_1,j(y,y') = ℙ(y y' < Y ≤ y y') + ℙ(Y_1 Y_2 < y ≤ Y_1 Y_2  |  X_1 = X_2), H_2(y) := ∑_j=1^3 h_2,j(y) = ℙ(y < Y_1 ≤ Y_2  |  X_1 = X_2) + ℙ(Y_1 < y ≤ Y_2  |  X_1 = X_2) + ℙ(Y_1 < Y_2 ≤ y  |  X_1 = X_2), and μ_1 := μ_h_1 = ℙ(Y_1 Y_2 < Y_3 ≤ Y_1 Y_2  |  X_1 = X_2), μ_2 := μ_h_2 = ℙ(Y_1 < Y_2 ≤ Y_3). In particular, if X and Y are independent, it holds that μ_1 = 2 μ_2. If the distribution of (Y_1, Y_2) conditional on the event {X_1 = X_2} is continuous, then H_2 is constant. All these identities follow from the definitions of h_i,j in Theorem <ref> and simple calculations. The last claim follows because if (Y_1, Y_2) has a continuous distribution conditional on the event {X_1 = X_2}, then H_2(y) = ℙ(Y_1 ≤ Y_2  |  X_1 = X_2) = 1/2. It holds that √(n)((V_h_1, V_h_2) - (μ_1, μ_2)) 𝒩(0, [ σ_1^2 σ_1, 2; σ_1, 2 σ_2^2 ]), where σ_i^2 = ∑_k=-1^1 𝔼[H_i(Y_1, Y_2) {(H_i(Y_1+k, Y_2+k) - H_i(Y_3, Y_4))} |  X_0 = … = X_4] + Cov(H_i(Y_1, Y_2), H_i(Y_3, Y_4)  |  X_1 = … = X_4), i = 1,2, and σ_1, 2 = ∑_k=-1^1 𝔼[H_1(Y_1, Y_2) {(H_2(Y_1+k, Y_2+k) - H_2(Y_3, Y_4))} |  X_0 = … = X_4] + Cov(H_1(Y_1, Y_2), H_2(Y_3, Y_4)  |  X_1 = … = X_4). This follows from Theorem <ref> and the Cramér-Wold device. Define ϕ : ℝ^2 →ℝ, ϕ(x,y) = x/(2y). Then ϕ is differentiable on {(x,y)  |  y ≠ 0} with gradient ∇ϕ(x,y) = (2y)^-1 (1, -x/y). Since Y is not almost surely constant, V_h_2 > 0 for all n ∈ℕ. Furthermore μ_2 > 0 by Lemma <ref> unless Y is almost surely constant. ϕ is therefore differentiable at μ = (μ_1, μ_2), and so by the Delta-method <cit.> and Lemma <ref> it holds that √(n)(V_h_1/2 V_h_2 - μ_1/2 μ_2) 𝒩(0, σ^2), with σ^2 = ∇ϕ(μ_1, μ_2) [ σ_1^2 σ_1, 2; σ_1, 2 σ_2^2 ](∇ϕ(μ_1, μ_2))^T = (2μ_2)^-2{σ_1^2 - 2 σ_1, 2μ_1/μ_2 + σ_2^2 μ_1^2/μ_2^2}. This implies the weak convergence. The identity ξ = 1 - μ_1/(2μ_2) follows if we can show that ∫𝔼[Var(1_[y,∞)(Y)  |  X)]  dℙ^Y(y)/∫Var(1_[y,∞)(Y))  dℙ^Y(y) = μ_1/2 μ_2, because Var(𝔼[U  |  V]) = Var(U) - 𝔼[Var(U  |  V)] for any random variables U and V. We only consider the numerator as the denominator is analogous. It holds that Var(1_[y,∞)(Y)  |  X) = ℙ(y ≤ Y  |  X) - ℙ(y ≤ Y  |  X)^2 = ℙ(y ≤ Y  |  X) - ℙ(y ≤ Y_1 Y_2  |  X_1 = X_2 = X). 
Integrating over X yields ℙ(y ≤ Y) - ℙ(y ≤ Y_1 ∧ Y_2 | X_1 = X_2) = ℙ(y ≤ Y_2 | X_1 = X_2) - ℙ(y ≤ Y_1 ∧ Y_2 | X_1 = X_2), which is equal to ℙ(Y_1 ∧ Y_2 < y ≤ Y_2 | X_1 = X_2) = ℙ(Y_1 < y ≤ Y_2 | X_1 = X_2), since {Y_2 < Y_2} is the empty set. By integrating over y, we finally obtain ∫𝔼[Var(1_[y,∞)(Y) | X)] dℙ^Y(y) = ℙ(Y_1 < Y_3 ≤ Y_2 | X_1 = X_2) = μ_1/2. In a similar way, we can show that ∫Var(1_[y,∞)(Y)) dℙ^Y(y) = μ_2. All other claims in the theorem have already been proven in this section.
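As a quick numerical illustration (our own sketch, not part of the proof), the theorem can be checked in the simplest special case: for independent continuous X and Y one has ξ = 0, and the classical limit of √(n) ξ_n is 𝒩(0, 2/5). The simplified formula ξ_n = 1 - 3∑_i |r_i+1 - r_i|/(n^2-1) used below is the standard no-ties form of the definition from the introduction.

import numpy as np

def xi_n(x, y):
    # Chatterjee's xi_n for continuous data (no ties), vectorized.
    n = x.size
    y_sorted = y[np.argsort(x, kind="stable")]
    r = np.argsort(np.argsort(y_sorted)) + 1   # ranks of Y in the reordered sample
    return 1.0 - 3.0 * np.abs(np.diff(r)).sum() / (n * n - 1.0)

# Monte Carlo sanity check in the independent, continuous case, where xi = 0
# and sqrt(n) * xi_n should be approximately N(0, 2/5).
rng = np.random.default_rng(1)
n, reps = 1000, 2000
stats = np.array([np.sqrt(n) * xi_n(rng.normal(size=n), rng.normal(size=n))
                  for _ in range(reps)])
print(stats.mean(), stats.var(), 2 / 5)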
http://arxiv.org/abs/2408.11328v1
20240821042130
Measurement-based Fast Quantum State Stabilization with Deep Reinforcement Learning
[ "Chunxiang Song", "Yanan Liu", "Daoyi Dong", "Hidehiro Yonezawa" ]
eess.SY
[ "eess.SY", "cs.SY" ]
Measurement-based Fast Quantum State Stabilization with Deep Reinforcement Learning Chunxiang Song, Yanan Liu, Daoyi Dong, Hidehiro Yonezawa Chunxiang Song is with the School of Engineering and Technology, University of New South Wales, Canberra, ACT 2600, Australia (e-mail: chunxsong@gmail.com). Yanan Liu is with the School of Engineering, University of Newcastle, Callaghan, NSW 2308, Australia (e-mail: yaananliu@gmail.com). Daoyi Dong is with the School of Engineering, Australian National University, Canberra, ACT 2601, Australia (e-mail: daoyidong@gmail.com). Hidehiro Yonezawa is with the Optical Quantum Control Research Team, RIKEN Center for Quantum Computing, 2-1 Hirosawa, Wako, Saitama, 351-0198, Japan (e-mail: hidehiro.yonezawa@riken.jp). August 26, 2024 ===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The stabilization of quantum states is a fundamental problem for realizing various quantum technologies. Measurement-based-feedback strategies have demonstrated powerful performance, and the construction of quantum control signals using measurement information has attracted great interest. However, the interaction between quantum systems and the environment is inevitable, especially when measurements are introduced, which leads to decoherence. To mitigate decoherence, it is desirable to stabilize quantum systems faster, thereby reducing the time of interaction with the environment. In this paper, we utilize information obtained from measurement and apply deep reinforcement learning (DRL) algorithms, without explicitly constructing specific complex measurement-control mappings, to rapidly drive random initial quantum state to the target state. The proposed DRL algorithm has the ability to speed up the convergence to a target state, which shortens the interaction between quantum systems and their environments to protect coherence. Simulations are performed on two-qubit and three-qubit systems, and the results show that our algorithm can successfully stabilize random initial quantum system to the target entangled state, with a convergence time faster than traditional methods such as Lyapunov feedback control. Moreover, it exhibits robustness against imperfect measurements and delays in system evolution. quantum state stabilization, feedback control, learning control, deep reinforcement learning (DRL) § INTRODUCTION Quantum control theory focuses on manipulating quantum systems using external control fields or operations to regulate their behaviors <cit.>. A significant objective in quantum control is the preparation of target states, particularly quantum entangled states, which serve as vital resources for various quantum applications, including quantum teleportation <cit.>, fast quantum algorithms <cit.>, and quantum computations <cit.>. 
Achieving high-fidelity entangled states often involves in using classical control methods, with feedback control technology being particularly noteworthy. Quantum systems can be stabilized at target states or spaces through feedback control methods that continuously monitor the system and design feedback controllers based on real-time feedback information. In quantum measurement-based feedback, quantum measurements, while providing valuable information, introduce stochastic noise, complicating the state preparation process. To address the challenges posed by stochastic nonlinear problems in quantum systems due to measurements, some classical control methods, such as the Lyapunov method <cit.>, have been applied. However, devising feedback strategies remains a formidable task, given the vast space of possibilities where different responses may be required for each measurement outcome. Moreover, opportunities exist for further enhancing stability and convergence speed. Recently, quantum learning control, first introduced in <cit.>, has proven potential in addressing various quantum control problems. Its popularity has grown with the incorporation of additional machine learning (ML) algorithms that exhibit excellent optimization performance and promising outcomes. Quantum learning control concerns to apply proper ML algorithms as promising tools for improving quantum system performance <cit.>, and can offer robust solutions for developing effective quantum control and estimation methods <cit.>. For instance, the utilization of gradient algorithms in quantum learning control has demonstrated its significance in addressing quantum robust control problems <cit.>. Another example involves a widely used class of algorithms, evolutionary algorithms (EAs), which gains attention in learning control due to their ability to avoid local optima and their independence from gradient information<cit.>. Nonetheless, the real-time nature and randomness of measurement feedback control pose challenges, expanding the decision space significantly due to randomness <cit.>. This randomness makes it almost impossible to reproduce the same measurement trajectory, bringing challenges for EAs to apply control policies from certain sample trajectories to entirely different ones. Deep reinforcement learning (DRL) emerges as a promising solution to tackle these challenges. Since the introduction of the first DRL algorithm, deep Q-network (DQN) <cit.>, in 2013, and the groundbreaking success of AlphaGo <cit.> defeating the world champion in the game of Go in 2016, DRL has garnered significant attention. As a ML methodology, DRL employs trial-and-error learning to train intelligent agents to make improved decisions in their environment, aiming to maximize cumulative rewards. The primary step involves the DRL agent engaging with the environment, gathering data and enhancing its policy to optimize decision-making across all states. The process continues iteratively, providing “better" control signals to achieve predefined control objectives <cit.>. DRL thus has found successful applications in quantum control fields to achieve different control objectives, which include stabilizing quantum states <cit.>, optimizing high-fidelity quantum gates <cit.>, and cooling a quantum harmonic oscillator <cit.>. An intriguing avenue of research using DRL in quantum measurement feedback control involves employing measurement results as training data to develop control strategies for quantum systems. 
For instance, in the context of cooling the system of a particle in a potential well to their ground state, position measurements are employed, and the measured current is directly used as feedback information to train the DRL agent, resulting in a relatively high-fidelity final state<cit.>. Recent advancements have also proposed the utilization of the density matrix of the quantum state as input for training DRL agents. These agents use fidelity as a reward function, allowing them to generate simple linear control signals for preparing and stabilizing Fock states of the cavity <cit.>. Our study is motivated by the application of suitable DRL algorithms within feedback loops to exploit information obtained from measurements, thereby achieving predefined objectives. This approach holds the potential to enhance feedback control schemes, leading to a notable reduction in the convergence time to reach target states and exhibiting robustness in the face of uncertainties. In our study, we aim to devise a feedback control scheme based on DRL algorithms to enhance state stabilization, focusing primarily on entangled quantum states. These states present unique challenges due to their geometric symmetries and multi-qubit nature. To achieve the objectives, we exploit the information derived from quantum measurement as the input signal to train our DRL agent. The agent actively interacts with the quantum system, making control decisions based on the received input. We design a generalized reward function that quantifies the similarity between the current quantum state and the desired target state. This incentivizes the DRL agent to iteratively learn control strategies that maximize rewards, ultimately leading to more effective control strategies for stabilizing entangled quantum states. Our work contributes to the field of quantum information processing and quantum computation, shedding light on the potential of DRL in addressing complex quantum control challenges. The main contributions of this paper are as follows: * A DRL algorithm is proposed to achieve the stabilization of given entangled states. We design an effective and versatile reward function based on the distance between the system state and the target state, allowing flexible parameter adjustment for different objectives to enhance the performance of the DRL agent. * We compare the proposed DRL-based control strategy with the Lyapunov method for Bell state and GHZ state preparation through numerical simulations. The DRL-based control fields achieve a faster stabilization for both target states, which effectively reduces the noise generated during system-environment interactions. * We analyze the robustness of our DRL scheme under the presence of imperfect measurements and time delays in the feedback loop. The trained DRL agent exhibits remarkable adaptability to uncertainties in the environment, particularly excelling in the pursuit of robust control fields to achieve quantum state stability. The following is the organization of this paper. Section <ref> briefly introduces the stochastic master equation for quantum systems under continuous weak measurements. Section <ref> explains in detail the logic and implementation behind DRL. Section <ref> gives some details of the implementation of DRL in the quantum measurement feedback control. Numerical results are given in Section <ref>. Section <ref> is the conclusion. 
§ QUANTUM SYSTEM DYNAMICS For a quantum system, its state can be represented by a density matrix ρ defined in the Hilbert space ℍ. This density matrix exhibits essential properties: Hermitian (ρ = ρ^†), trace unity ((ρ) = 1), and positive semi-definite (ρ≥ 0). The dynamics of the quantum system may be observed through continuous weak measurements, enabling us to acquire pertinent measurement information for the design of an appropriate feedback controller. The evolution of the quantum trajectory can be described using the stochastic master equation (SME) <cit.>: dρ_t = -i/ħ[H_0 + ∑_j=1^M u_j (t) H_j,ρ_t]dt + κ_c𝒟[c]ρ_tdt + √(η_cκ_c)ℋ[c]ρ_tdW, where i=√(-1), the reduced Planck constant ħ=1 is used throughout this paper; the Hermitian operator H_0 and H_j (j=1,2,⋯,M) are the free Hamiltonian and control Hamiltonians, respectively; u_j(t) is a real-valued control signal, which can be interpreted as the strength of the corresponding control Hamiltonians H_j; κ_c and η_c are measurement strength and efficiency, respectively; dW is a standard Wiener process caused by measurement; the Hermitian operator c is an observable; the superoperators 𝒟[c]ρ_t and ℋ[c]ρ_t are related to the measurement, e.g., they can describe the disturbance to the system state, and the information gain from the measurement process, respectively <cit.>, and have the following forms: 𝒟[c]ρ_t=cρ_t c^† -1/2c^† cρ_t -1/2ρ_t c^† c, ℋ[c]ρ_t=cρ_t + ρ_t c^† - [(c+c^†)ρ_t]ρ_t. On any given trajectory the corresponding measured current is I(t) = dy(t)/dt <cit.> where dy(t) = √(η_cκ_c)[(c+c^†)ρ_t]dt + dW. With the measurement result y_t, statistical information on the standard Wiener process dW can be collected from Eq. (<ref>). Utilizing Eq. (<ref>), an estimate of the system state can be obtained and utilized to construct a feedback controller. In this paper, we consider the DRL-based feedback stabilization of the target quantum states. We will show our algorithm in stabilizing a GHZ entangled states of a three-qubit system and an eigenstate of an angular momentum system, while our scheme has the potential to be extended to other quantum systems. § DEEP REINFORCEMENT LEARNING Abstracting a real-world problem into a Markov decision process (MDP) serves as the foundational step in applying DRL <cit.>. MDP provides a formal framework for modeling the interaction between an agent and its environment (quantum systems in this work), offering a structured specification of the agent's decision-making problem. The environment is abstracted with essential elements such as states, actions, rewards, and more, effectively describing the dynamic interaction between the agent and the external world. The agent is a pivotal component of DRL, representing a learning entity or decision-maker that, through interactions with the environment, learns to take actions to achieve objectives and continually refines its decision strategies to enhance the effectiveness of its actions. This process of agent-environment interaction and learning constitutes the core mechanism through which DRL efficiently tackles real-world challenges and achieves desirable outcomes. An MDP is a structured representation denoted by the tuple <𝒮,𝒜,R,P,γ>, where each element serves as a crucial role in modeling the problem and applying DRL: * 𝒮={ρ∈ℍ:ρ=ρ^†,(ρ)=1, ρ≥0} represents the set of states. At each time step t, the environment presents a specific quantum state ρ_t to the agent, who subsequently makes decisions based on this state. 
* 𝒜 signifies the set of actions, incorporating the actions a_t ∈𝒜 that the agent can undertake at each time step. In this context, the actions correspond to the control signals u_j(t) defined in Eq. (<ref>), with values ranging from any bounded control strength, for example, [0,1] in this paper. * R denotes the reward function, where the immediate reward r_t at time t can be influenced by the current state ρ_t, the action a_t, and the subsequent state ρ_t+1 <cit.>: r_t = R(ρ_t, a_t, ρ_t+1). This paper considers the task of stabilizing the current state to the target state, thus the reward r_t can be simplified as r_t = R(ρ_t). At each time step, the agent receives an immediate reward based on the current state ρ_t. In this study, our reward function is associated with the trace-based distance D_ρ_t between the current state ρ_t and the target state ρ_d: D_ρ_t≜ 1 - (ρ_dρ_t). For common entangled states such as Bell states and GHZ states, when the distance D_ρ_t=0, the system state is stabilized at the target state. The agent's objective is to learn an optimal policy by maximizing the long-term cumulative reward. * P(ρ_t+1|ρ_t,a_t) is the state transition function. It indicates how the environment transitions to the next state ρ_t+1 after taking action a_t in the current state ρ_t. It is consistent with the stochastic evolution of the quantum system described in Eq. (<ref>), capturing the dynamic nature and randomness of the environment. * γ is the discount factor, which strikes a balance between the significance of immediate rewards and future rewards. It determines the emphasis placed on future rewards, influencing the agent's decision-making process in a long-term perspective. An MDP is a time-dependent and ongoing process, involving continuous interaction between the agent and the environment. The interaction can be illustrated as shown in Figure <ref>. For simplicity, we consider the interaction between the agent and the environment as a discrete time series (e.g., t = 0, 1, 2, 3, ⋯). Starting from t=0 with the known initial state ρ_0, the agent selects an action a_0 based on the density matrix of ρ_0 and applies it to the environment. Subsequently, the environment undergoes a state transition according to the state transition function P(ρ_t+1|ρ_t,a_t), resulting in the next state ρ_1, and provides an immediate reward r_1 based on the reward function R. The environment also utilizes a classical computer to solve the SME (<ref>) to estimate the density matrix of ρ_1, which is then fed back to the agent. This process is iterated until completion. Therefore, the MDP and the agent jointly generate a sequence or trajectory as follows: ρ_0,r_0,a_0,ρ_1,r_1,a_1,ρ_2,r_2,a_2,ρ_3,r_3,⋯ The function that selects an action a from the set of actions 𝒜 based on the current state ρ is referred to as a policy π(a|ρ). It simulates a mapping function between state inputs and action outputs. The objective of the MDP is to find the policy π(a|ρ) that allows the agent to make optimal decisions, effectively maximizing long-term cumulative rewards. It is important to note that the resulting state ρ may not be a valid quantum state due to the inherent randomness in measurements and cumulative errors in solving the SME equation using classical computers <cit.>. For example, the state matrix may contain non-physical (negative) eigenvalues. To address this issue, a suitable approach is to check the eigenvalues of the estimated matrix at each step. 
When negative values are encountered, the state should be projected back onto a valid physical state. This can be achieved by finding the closest density matrix under the 2-norm <cit.>. This approximation ensures that a non-physical density matrix is transformed into the most probable positive semi-definite quantum state with a trace equal to 1. DRL has proven to be a highly effective approach for solving MDP problems by using deep neural networks to approximate the policy function π(a|ρ) of the agent. This enables DRL to solve high-dimensional and complex decision-making problems efficiently. DRL can automatically extract meaningful features from raw inputs and undergo end-to-end training within the reinforcement learning framework <cit.>. Existing DRL methods can be categorized into three main types: value-based methods, policy-based methods, and actor-critic methods. The value-based approach aims to learn an action-value function Q(ρ,a) which represents the value of taking an action a in state ρ. The agent typically uses an ϵ-greedy approach to select the appropriate action in a given state, resulting in a policy function π(a|ρ) that guides the agent's decision; there is no explicit policy in the learning process. One well-known algorithm in this category is DQN <cit.>, which uses a deep neural network to approximate the value function and iteratively updates the network to approximate the optimal value function, allowing for optimal decision-making. Policy-based methods directly learn the policy function π(a|ρ), which represents the probability distribution of selecting action a given state ρ. This approach is advantageous for handling continuous action spaces. A common algorithm in this category is policy gradient (PG) <cit.>, which maximizes cumulative rewards using gradient ascent to improve policy performance. Actor-critic methods are a hybrid approach that simultaneously learns the value function and policy function. The policy function, known as the “actor", interacts with the environment by selecting actions based on the current state, while the value function, known as the “critic", evaluates the value of actions. This method leverages the experience of the actor to enhance the critic's evaluation and, in turn, improves the actor's policy through the critic's evaluation <cit.>. Overall, these three categories of DRL methods offer distinct approaches to address various challenges and provide a wide range of techniques for learning optimal decision-making policies. In the following section, we delve into the fundamental principles of the DRL, leading to the application of the highly effective actor-critic style proximal policy optimization (PPO) algorithm <cit.> in this paper. § APPLYING DRL TO QUANTUM MEASUREMENT-BASED FEEDBACK CONTROL In this section, we apply the DRL to the quantum systems and aim to design a measurement-based feedback strategy to stabilize a given target state. The application is comprised of training and testing parts. In the training part, the primary objective lies in the agent's policy function π_θ(a|ρ), constructed by a neural network with the adjustable parameter set θ. This parameter set is updated aiming for a higher reward by using data that is generated through the interaction between the agent and the environment. Once the agent finishes training, it can be applied to the quantum systems to generate real-time feedback control signals in achieving the stabilization of target states. 
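To make the feedback loop concrete before describing the environment design, the sketch below (an illustration in Python with numpy, not the implementation used in this paper) shows one Euler-Maruyama update of the SME introduced above together with a simple eigenvalue-clipping fix for non-physical states; the function names are placeholders, and the clipping step is only an approximation of the closest-density-matrix projection discussed above.

```python
import numpy as np

def D_super(c, rho):
    # Superoperator D[c]rho (decoherence term of the SME)
    return c @ rho @ c.conj().T - 0.5 * (c.conj().T @ c @ rho + rho @ c.conj().T @ c)

def H_super(c, rho):
    # Superoperator H[c]rho (measurement back-action term of the SME)
    return c @ rho + rho @ c.conj().T - np.trace((c + c.conj().T) @ rho) * rho

def project_to_physical(rho):
    # Symmetrize, clip negative eigenvalues and renormalize the trace;
    # a simple approximation of projecting onto the closest valid density matrix.
    rho = 0.5 * (rho + rho.conj().T)
    w, v = np.linalg.eigh(rho)
    w = np.clip(w.real, 0.0, None)
    rho = v @ np.diag(w) @ v.conj().T
    return rho / np.trace(rho)

def sme_step(rho, H0, H_ctrl, u, c, dt, kappa_c=1.0, eta_c=1.0, rng=np.random.default_rng()):
    # One Euler-Maruyama update of the SME with hbar = 1; returns the updated
    # state and the measurement record increment dy for this time step.
    H = H0 + sum(uj * Hj for uj, Hj in zip(u, H_ctrl))
    dW = rng.normal(0.0, np.sqrt(dt))
    drho = (-1j * (H @ rho - rho @ H) + kappa_c * D_super(c, rho)) * dt \
           + np.sqrt(eta_c * kappa_c) * H_super(c, rho) * dW
    dy = np.sqrt(eta_c * kappa_c) * np.trace((c + c.conj().T) @ rho).real * dt + dW
    return project_to_physical(rho + drho), dy
```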
§.§ Environment: States and Actions The two most basic elements of the environment are states and actions. The first step in applying DRL is to convert the quantum states into a form that can be recognized by DRL. To facilitate agent training and testing, the quantum density matrix is sequentially flattened into a list that contains both the real and imaginary parts (vectorization). For example, the density matrix ρ =[ α_1+β_1i α_2+β_2i; α_3+β_3i α_4+β_4i ] of a single qubit is transformed into ρ :=[ α_1,α_2,α_3,α_4,β_1,β_2,β_3,β_4 ]^T. Additionally, in order to better diagnose the learned policy, the current evolution time is also included as one of the observed states during agent training. By having access to the current time, the agent is more likely to exhibit improved performance <cit.>. Therefore, the observation received by the agent is in the form of ρ(t) := [ α_1,α_2,α_3,α_4,β_1,β_2,β_3,β_4,t ]^T. As for actions, the policy specifies the control signals corresponding to control Hamiltonians. For example, if the system in Eq. (<ref>) has two control Hamiltonians H_1 and H_2, the actions for each time step are in the form of a:=[ u_1,u_2 ]^T. Guided by the policy π_θ(a|ρ), the agent stochastically selects an action a_t for an environmental state ρ_t. The environment then transitions to a new state ρ_t+1 and provides an immediate reward r_t. Subsequently, the agent selects a new action a_t+1 based on the updated state ρ_t+1, and this cycle continues until a predefined termination criterion is met, such as reaching a maximum step count, achieving a goal, or exceeding an acceptable error threshold. This iterative process is termed an episode (or sequence) τ = {ρ_0, a_0, ρ_1, a_1, …, ρ_m }, where m denotes the final step of the episode, and τ∈𝔻 = {τ_n}_n=1,2,…,N, with N denoting the total number of possible sequences. Without any confusion, we use τ in the following to represent one episode. The agent's primary objective is to maximize a measure of expected cumulative reward across all potential episodes. §.§ Reward The immediate reward r_t at each step is closely related to the distance D_ρ_t in Eq. (<ref>). We expect a positive reward as the distance approaches 0. To encourage the system to reach the target state, inspired by the inverse proportional functional form, we define a reward function (IPF Reward) with different behaviors based on the distance to the goal state as r_t = ( (D_max-D_min)/(𝔣(D_ρ_t-D_min)-𝔢(D_ρ_t-D_max)) - 1/𝔣 ) ×𝔢𝔣(R_max-R_min)/(𝔣-𝔢) + R_min, where D_max and D_min denote the upper and lower bounds of the distance D_ρ_t, respectively, and R_max and R_min denote the upper and lower bounds of the reward, respectively. In this way, we can set the numerical values of these four bounds to assign different rewards to different D_ρ_t. Additionally, we use 𝔢 and 𝔣 to regulate the slope of the reward curve. In this paper, we set 𝔢=2 and 𝔣=10. The different slopes imply that different distances have different importance for the DRL agent. When the distance D_ρ_t≤ 0.001, the state is considered nearly ready for success, and the system deserves a positive reward. As shown in Figure <ref>(a), when the distance is between 0.001 and 0 (i.e., D_max=0.001 and D_min=0), we set the reward boundaries to range from 1 to 100 (i.e., R_max=100 and R_min=1). The closer the distance is to 0, the larger the positive reward is given. The steepening slope of the reward curve as the system approaches the goal underscores the urgency of achieving the target state rapidly. This design enforces the agent's commitment to swift and accurate goal attainment. 
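As an illustrative sketch (in Python; not the authors' code), the IPF mapping above can be written as a small function using the bound notation D_max, D_min, R_max, R_min and the slope parameters 𝔢 and 𝔣 introduced above; the penalty branch described in the next paragraph can reuse the same mapping with different bounds.

```python
def ipf_reward(D, D_min, D_max, R_min, R_max, e=2.0, f=10.0):
    # IPF mapping of a distance D in [D_min, D_max] to a reward in [R_min, R_max]:
    # D = D_min yields R_max, D = D_max yields R_min, and the slope is
    # steepest near D_min, i.e. closest to the target state.
    denom = f * (D - D_min) - e * (D - D_max)
    return ((D_max - D_min) / denom - 1.0 / f) * (e * f * (R_max - R_min) / (f - e)) + R_min

# Positive reward branch described above: distances in [0, 0.001] map to rewards in [1, 100].
r = ipf_reward(D=0.0005, D_min=0.0, D_max=0.001, R_min=1.0, R_max=100.0)
```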
However, providing positive rewards only when the target is achieved can lead to the reward sparse problem in reinforcement learning <cit.>. In the absence of supplementary guidance to direct the agent towards the correct trajectory prior to engaging with the objective, the agent is likely to encounter substantial difficulties in accomplishing the mission solely through random exploration. To address this issue, we introduce a negative reward (penalty) at each step when the system is not in the vicinity of the target state. The punishment range is set to [-0.1, 0] as depicted in Figure <ref>(b). When D_ρ_t=1, a punishment of -0.1 is given. As the distance D_ρ_t decreases, indicating that the system state is getting closer to the goal, the punishment decreases faster (represented by a steeper slope). When the distance reaches 0.001, the previously described positive reward in Figure <ref>(a) takes over. Remark 1: When the distance D_ρ_t approaches 0.001, we choose to gradually reduce the penalties rather than increase positive rewards. This decision is made to stabilize the DRL agent during training and mitigate the issues related to reward hacking <cit.>. For instance, if we were to provide progressively larger rewards as D approaches 0.001, the agent might be inclined to persist near 0.001 until reaching the maximum number of steps in the episode, as this would yield simpler and higher cumulative rewards. By applying small penalties at each step, we encourage the agent to explore the state space more extensively. Our aim is to prevent the agent from becoming overly fixated on known high-reward regions while neglecting other unknown regions that may offer higher rewards. Instead, we prompt the agent to reach the goal promptly and iteratively improve its policy to enhance performance. Additionally, recognizing the desirability of swiftly stabilizing the system to the target state, we introduce a small additional penalty associated with the current number of evolutionary steps. Prior to system stabilization at the target, an additional penalty proportional to the number of steps is applied. For instance, the first step incurs a penalty of -1 × 10^-6, the second step incurs -2 × 10^-6, and so forth. In summation, the outlined approach encompasses the design of instantaneous rewards for each step. This reward design encourages the agent to explore and discover more possibilities while guiding it to reach the target state quickly, and effectively. The balance between positive rewards when close to the goal and negative rewards when away from it helps the agent to learn from both “good" and “bad" actions, leading to a more robust and effective learning process. We show the superiority of our design by simulating different reward functions in the Appendix. With this reward function, we are now in the position to maximize the cumulative expected reward of the sequence, so for a complete sequences τ, its cumulative reward can be expressed as R(τ) = ∑_t=0^m A^θ(ρ_t, a_t). A^θ(ρ_t, a_t) = Q(ρ_t, a_t) - V^ϕ(ρ_t) is known as the advantage function in the field of reinforcement learning, which is utilized to assess the desirability of taking a specific action a_t at state ρ_t. Q(ρ_t,a_t)=∑_t^'=t^mγ^t^'-tr_t^' represents the action-value function, indicating the expected discounted reward for choosing action a_t in state ρ_t, i.e., the cumulative sum of rewards until the end of the episode after executing this action. r_t^' is the reward function in Eq. (<ref>). 
The value of γ lies between 0 and 1, determining the emphasis on long-term rewards (close to 1) or short-term rewards (close to 0). It effectively introduces a discounting mechanism for future rewards, thereby shaping the agent's preference for future reward consideration when making decisions. V^ϕ(ρ_t) is referred to as the state-value function (or baseline) and is modeled by a neural network with the same structure as the policy network but with different parameters ϕ. It is primarily employed to approximate the discounted rewards from state ρ_t to the end of an episode. Specifically, if the current state is ρ_t, and for all possible actions a_t^(1), a_t^(2), …, a_t^(…), they correspond to discounted rewards Q(ρ_t, a_t^(1)), Q(ρ_t, a_t^(2)), …, Q(ρ_t, a_t^(…)). As V^ϕ(ρ_t) represents the expected value of the discounted rewards at ρ_t, we can use Q(ρ_t, a_t^(1)), Q(ρ_t, a_t^(2)), …, Q(ρ_t, a_t^(…)) as features to approximate the value of V^ϕ(ρ_t), representing the expected value of rewards in state ρ_t. When A^θ(ρ_t, a_t) > 0, action a_t is considered better than average and is worth increasing the probability of being chosen in subsequent iterations while decreasing the probability otherwise. §.§ Core Algorithmic Ideas for DRL Agents The probability of each sequence occurring is multiplied by its corresponding cumulative reward, and the sum of these products yields the expected reward. The probability of a specific τ sequence occurring given θ is defined as: p_θ(τ) = ∏_t=0^mp_θ(a_t|ρ_t)p(ρ_t+1|ρ_t,a_t). We denote p(EVENT) to signify the probability of the occurrence of the EVENT. For example, p_θ(a_t|ρ_t) represents the probability of agent to choose action a_t given ρ_t while p(ρ_t+1|ρ_t,a_t) represents the probability of environment transiting at ρ_t+1 from ρ_t given the action a_t applied. When the parameter θ is given, the expected value of the total reward, denoted as J(θ), is evaluated as the weighted sum of each sampled τ sequence, expressed by J(θ) = ∑_τR(τ)p_θ(τ) := τ∼ p_θ(τ)𝔼[R(τ)]. To maximize J(θ), which indicates that our chosen policy parameters θ can lead to higher average rewards, we adopt the well-known gradient descent method. Thus, we take the derivative of the expected reward J(θ) in Eq. (<ref>), resulting in the expression shown in ∇J(θ) = ∑_τ R(τ) ∇p_θ(τ) = ∑_τ R(τ) p_θ(τ) ∇log p_θ(τ) = ∑_τ R(τ) p_θ(τ)∑_t=0^m ∇log p_θ(a_t | ρ_t) = τ∼p_θ(τ)𝔼 [R(τ)∑_t=0^m ∇log p_θ(a_t | ρ_t)] ≈(ρ_t,a_t)∼π_θ𝔼 [∇log p_θ(a_t | ρ_t) A^θ(ρ_t, a_t)]. We use ∇ f(x)=f(x)∇log f(x) to derive the second row of Eq. (<ref>). The last approximate equation is a result of practical gradient computations, where instead of calculating the expected reward for an entire trajectory, rewards contributed by each individual state-action pair (ρ, a) are computed separately. These individual rewards are then summed up to obtain the total cumulative reward for the optimization process. The direction of the policy update π_θ(a|ρ) is biased towards favoring state-action pairs that contribute to higher cumulative rewards within the sequence. For instance, if an action a executed in state ρ leads to a positive cumulative discounted reward, the subsequent update will enhance the probability of choosing action a in state ρ, while diminishing the likelihood of selecting other actions. The update equation for the parameters θ is as follows: θ = θ + η∇ J(θ), where η is the learning rate. Once the policy π_θ(a|ρ) is updated, it necessitates the reacquisition of training data prior to the subsequent policy update. 
This arises due to the alteration in the probability distribution p_θ(τ) brought about by the modified policy. Following data sampling, the parameter θ undergoes refinement, leading to the discarding of all prior data. Subsequent parameter updates mandate the collection of fresh data, constituting the fundamental principle underlying the conventional PG algorithm. However, in the context of quantum systems, the process of sampling system information is often characterized by time-intensive and computationally demanding operations. For instance, after each measurement, a classical computer is requisitioned to solve the SME (<ref>) to ascertain the system's state in the subsequent moment. This inability to reutilize previously acquired data contributes to a protracted training process. To address this challenge, an additional strategy π_θ^' is introduced, mirroring the architecture of π_θ(a|ρ). Instead of directly engaging with the environment for data gathering, the primary agent π_θ employs the auxiliary agent π_θ^' to interact with the environment and accumulate data. The objective is to subsequently utilize this data to train π_θ multiple times, effectively reducing the computational and resource demands for data collection. Ensuring the consistency of data sampled by π_θ^' with that of π_θ, importance sampling <cit.> is introduced to facilitate this synchronization process. This approach contributes to enhancing data reuse and the overall efficiency of the training procedure. Eq. (<ref>) is updated as: ∇J(θ) = (ρ_t,a_t)∼π_θ^'𝔼 [p_θ(ρ_t,a_t)/p_θ^'(ρ_t,a_t) ∇logp_θ(a_t |ρ_t) A^θ^'(ρ_t, a_t)] = (ρ_t,a_t)∼π_θ^'𝔼 [p_θ(a_t | ρ_t)p_θ(ρ_t)/p_θ^'(a_t|ρ_t)p_θ^'(ρ_t) ∇logp_θ(a_t |ρ_t) A^θ^'(ρ_t, a_t) ] =(ρ_t,a_t)∼π_θ^'𝔼 [p_θ(a_t | ρ_t)/p_θ^'(a_t|ρ_t) ∇logp_θ(a_t |ρ_t) A^θ^'(ρ_t, a_t) ]. Here, all the state-action pairs (ρ_t,a_t) (or alternatively, all trajectories τ∈𝔻) are sampled from π_θ^', where p_θ(ρ_t,a_t) / p_θ^'(ρ_t,a_t) is the importance weight, dynamically adjusting the weight of the data sampled by π_θ^' in real-time to more accurately estimate the expected value under the target policy π_θ. The corresponding objective function from Eq. (<ref>) can be calculated as: J^θ^'(θ)=(ρ_t,a_t)∼π_θ^'𝔼[p_θ(a_t | ρ_t)/p_θ^'(a_t|ρ_t) A^θ^'(ρ_t, a_t)]. Nonetheless, in the absence of constraint, such as when A^θ^'(ρ_t, a_t)>0, indicating the desirability of specific action-state combinations, the agent's inclination would be to elevate their likelihood, effectively amplifying the p_θ(a_t | ρ_t)/p_θ^'(a_t|ρ_t) value. This scenario can lead to policy learning inaccuracies and an erratic learning process, impeding convergence. To counteract this, the PPO introduces a pivotal mechanism, termed the “clip ratio". This clip ratio imposition serves to confine the proportions between the new and preceding policies, thereby ensuring congruence and augmenting the algorithm's dependability. The following equation demonstrates the PPO Clipping algorithm, incorporating the clipping term to bound the difference between p_θ and p_θ^' during the policy update. J_PPO^θ^'(θ) ≈(ρ_t,a_t)∼π_θ^'𝔼 min(ϱA^θ^'(ρ_t, a_t), clip (ϱ, 1-ς, 1+ς)A^θ^'(ρ_t, a_t)), where ϱ = p_θ(a_t|ρ_t)/p_θ^'(a_t|ρ_t). The last two terms 1-ς, and 1+ς in the clip function limit the boundaries of the first term. ς is a hyperparameter, typically set to 0.1 or 0.2. 
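For concreteness, the clipped surrogate objective above can be sketched in a few lines (assuming PyTorch; this is an illustration rather than the Stable-Baselines3 implementation adopted in the training described below):

```python
import torch

def ppo_clip_loss(logp_new, logp_old, advantages, clip_eps=0.2):
    # ratio corresponds to the importance weight p_theta(a|rho) / p_theta'(a|rho)
    ratio = torch.exp(logp_new - logp_old)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # The optimizer minimizes, hence the minus sign on the maximization objective
    return -torch.min(unclipped, clipped).mean()
```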
Exhaustively considering all possible sequences is typically infeasible, and thus in practical training, the objective function (<ref>) is often formulated in the following manner: J_PPO^θ^'(θ) ≈1/Nm ∑_τ∈𝔻 ∑_t=0^m min(ϱA^θ^'(ρ_t, a_t), clip (ϱ, 1-ς, 1+ς)A^θ^'(ρ_t, a_t)), where N and m represent finite real numbers that respectively signify the count of collected sequences and the maximum number of steps within each sequence. Based on the above, we can see that PPO has three sets of network parameters for its update strategy: * One set of main policy parameters θ, which is updated every time. * One set of policy parameter copies θ^', which interact with the environment and collect data. They utilize importance sampling to assist in updating the main policy parameters θ. Typically, θ^' is updated only after several updates of θ has been performed. * One set of value network parameters ϕ, which are updated based on the collected data using supervised learning to update the evaluation of states. They are also updated every time. §.§ Training Our DRL algorithm is implemented using the open-source Python library Stable-Baselines3 <cit.>, while the quantum dynamic environment is constructed within the Gymnasium framework <cit.>. All simulations in this study are conducted on a computer equipped with an Apple M1 Pro chip and 32 GB of memory, utilizing Python 3.11.3, stable-baselines3 2.3.2, and Gymnasium 0.29.1. We design a reasonable reward function to guide the DRL agent through iterative learning, aiming to train an excellent DRL agent capable of generating control signals to achieve the stability of the target entangled state. Initial State: During training, the state is randomly reset to a quantum state after the completion of each episode, which means that at each episode in the training iteration, the agent starts from a new state and explores the environment from that point. Neural Network Architecture: Each policy π_θ is represented by a neural network that maps a given state ρ to a probability distribution over actions a. The action distribution is modeled as a gaussian distribution. The input layer is processed by two fully connected hidden layers, each with 128 neurons, accompanied by a linear output layer with the same dimension as the action space (2 in this paper). All hidden layers use the Tanh activation function. The value function V^ϕ(ρ_t) is composed of a similar neural network architecture, with the only difference being that the output layer is a single linear unit used to estimate the state-value function. The value function V^ϕ(ρ_t) is estimated using the temporal difference (TD) method <cit.>. Then, the generalized advantage estimator (GAE) <cit.> is employed to compute the advantage function in Eq. (<ref>), which is subsequently used in Eq. (<ref>) to calculate the gradient for updating the policy π_θ. Learning Rate: The learning rate is a hyperparameter that determines the step size of the algorithm's updates based on observed rewards and experiences during training. In our training process, the learning rate η is not a constant value but follows a linear schedule. Our learning rate starts at η=5*10^-7 and linearly decreases over time during the training process. This allows the algorithm to explore more in the early stages of training when the policy might be far from optimal. As the training progresses, the policy approaches convergence, and the learning rate decreases to promote stability in the learning process and fine-tune the policy around the optimal solution. 
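One way to realize the linear schedule just described is to pass a callable learning rate to Stable-Baselines3, which evaluates it on the remaining training progress (1 at the start of training, 0 at the end); in the sketch below, QuantumStabilizationEnv is a placeholder name for the Gymnasium environment wrapping the dynamics, observations and reward described above.

```python
from stable_baselines3 import PPO

def linear_schedule(initial_lr):
    # progress_remaining decreases from 1.0 to 0.0 over the course of training
    def lr(progress_remaining):
        return progress_remaining * initial_lr
    return lr

# QuantumStabilizationEnv is a placeholder for the Gymnasium environment above.
model = PPO("MlpPolicy", QuantumStabilizationEnv(),
            learning_rate=linear_schedule(5e-7),
            policy_kwargs=dict(net_arch=[128, 128]),
            verbose=1)
model.learn(total_timesteps=10_000_000)
```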
This helps the DRL agent achieve better performance and stability during the training process. Please refer to <cit.> for more details. Early Termination: Regarding early termination, continuous quantum measurement feedback control can be modeled as an infinite-horizon MDP, but during training, each episode is simulated on a finite time horizon. Additionally, practical applications require a finite system evolution time. Therefore, we set fixed duration or termination conditions to end an episode. The termination conditions include the following: * When the distance D_ρ_t∈ [0, 0.001] for 10 consecutive measurements, multiple measurements verifying that the system is in the target state are considered mission complete. * In a specific system, the maximum training time for a trajectory is set to a fixed value. For example, for the two-qubit state stabilization problem in section <ref>, we set the maximum training time T = 20 arbitrary units (a.u.). When the evolution time reaches 20 a.u., regardless of whether it has converged to the goal or not, the training trajectory is halted. This approach not only greatly saves training time but also significantly reduces the issue of overfitting. These early termination conditions bias the data distribution towards samples that are more relevant to the task, thereby saving training time and preventing undesirable behaviors. During agent testing, the time is typically not limited to evaluate the agent's performance in a real environment and assess its ability to complete tasks within a reasonable time frame. The pseudo-code for the PPO in quantum state stabilization is shown in Algorithm <ref>. § NUMERICAL SIMULATION §.§ Two-Qubit System We first consider a system of two qubits in a symmetric dispersive interaction with an optical probe as proposed in <cit.>. We consider using the DRL control scheme mentioned above to address the two-qubit entangled state preparation problem for arbitrary initial quantum states. Denote the Pauli matrices σ_x = [ 0 1; 1 0 ], σ_y = [ 0 -i; i 0 ], σ_z = [ 1 0; 0 -1 ]. Control Hamiltonians H_1 and H_2 are H_1 = σ_y ⊗ I_2 = [ 0 0 -i 0; 0 0 0 -i; i 0 0 0; 0 i 0 0 ], H_2 = I_2 ⊗σ_y = [ 0 -i 0 0; i 0 0 0; 0 0 0 -i; 0 0 i 0 ], respectively. Angular momentum operator c = σ_z ⊗ I_2 + I_2 ⊗σ_z = [ 2 0 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 -2 ]. Specify the target state as ρ_d = [ 0 0 0 0; 0 0.5 0.5 0; 0 0.5 0.5 0; 0 0 0 0 ], which is a symmetric two-qubit state. We utilize the previously summarized PPO algorithm to train the DRL agent. For the training trajectories, we set a time interval of Δ t = 0.001 a.u. for each measurement step, with a maximum evolution time of T=20 a.u., corresponding to a maximum of 20,000 steps. At each step, the DRL agent interacts with the environment, obtaining system information to generate control signals, which are then stored for iterative updates of the policy. The total number of training steps is set to 10^7. On our computer, training takes approximately 50 minutes. However, as our primary focus is on the performance of the trained agent, the training duration is not of critical importance. In practical experiments, it is feasible to pre-train the agent; once trained, the agent can be directly utilized. In order to evaluate its performance, we test the proposed strategy on 50 randomly selected distinct initial quantum states ρ_0 (corresponding to different initial distances D_ρ_0, as indicated by the blue line in Figure <ref>). 
The light blue lines represent the evolution trajectory of a specific initial state with respect to the target state, averaged over 50 different trajectories under varying environmental noise, while the dark blue line depicts the average evolutionary trajectory of all the different initial states. A distance of D_ρ_t=0 indicates the system stabilizing at the target state. It is worth noting that the well-trained agent successfully stabilizes any initial state to the target state. Furthermore, for comparison with the Lyapunov method mentioned in <cit.>, we retain the same 50 sets of randomly selected initial states and obtain the orange trajectories in Figure <ref> using the Lyapunov method. It can be observed that the control signals generated by the DRL agent outperform the Lyapunov method. Assuming we take the time when the distance D_ρ_t becomes less than 0.001 as the required evolution time of the system, under the guidance of the DRL agent, the average evolution time is 5.64 a.u., while the average time using the Lyapunov method is 14.36 a.u. The DRL's average stabilization time is improved by 60.72% over the Lyapunov method. This indicates that our DRL approach successfully stabilizes these quantum states to the target state faster than the Lyapunov method. §.§ GHZ State We then consider a more complex problem of preparing three-qubit entangled GHZ states, which are special entangled states and have been regarded as maximally entangled states in many measures <cit.>, <cit.>. A GHZ entangled state is defined in the following form <cit.>: | GHZ⟩ = 1/√(2) (|0⟩^⊗ n + |1⟩^⊗ n), where n is the number of qubits. Its density matrix can be expressed as ρ_ GHZ≜ | GHZ⟩⟨ GHZ|. For the three-qubit GHZ state, we choose | GHZ⟩ = (1/√(2)) (|000⟩ + |111⟩), which gives the following density matrix: ρ_ GHZ = 1/2[ 1 0 0 0 0 0 0 1; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 1 0 0 0 0 0 0 1 ]. A degenerate observable c is required according to the quantum state collapse after measurement <cit.>. Quantum state collapse means that the system in Eq. (<ref>) will randomly converge to an eigenstate or eigenspace of c without any control. Hence, we choose an observable in the following diagonal form: c = diag[λ_d,λ_2,⋯,λ_n-1,λ_d], where λ_d ≠ λ_k (k=2,⋯,n-1), and λ_d is the eigenvalue corresponding to the target state ρ_d, i.e., cρ_d=λ_d ρ_d. Due to the degenerate form of the observable c in Eq. (<ref>), the system may converge to other states in the corresponding eigenspace related to λ_d; two control channels u_1 and u_2 based on the Lyapunov method have been applied in <cit.> to solve this problem. For subsequent performance comparisons, two control channels are also used in this paper. For any training trajectory, we take a time interval Δ t = 0.001 a.u. for each measurement step. Given that we have set the maximum evolution time as T=40 a.u., it means the maximum number of evolution steps for any trajectory during training is 40,000. The total number of training steps is 10^8. For all instances, in order to compare with the Lyapunov methods presented in <cit.>, we choose the same system Hamiltonian as H_0= diag[1,-1,-1,1,1,-1,-1,1], the target state as ρ_d ≜ρ_ GHZ (<ref>), and the observable c as c = 2 ×(σ_z ⊗σ_z ⊗I_2) + I_2 ⊗σ_z ⊗σ_z = diag[3,1,-3,-1,-1,-3,1,3]. 
The control Hamiltonians H_1 and H_2 are chosen as H_1 = I_2 ⊗I_2 ⊗σ_x + σ_x ⊗σ_x ⊗I_2 = [ 0 1 0 0 0 0 -1 0; 1 0 0 0 0 0 0 -1; 0 0 0 1 -1 0 0 0; 0 0 1 0 0 -1 0 0; 0 0 -1 0 0 1 0 0; 0 0 0 -1 1 0 0 0; -1 0 0 0 0 0 0 1; 0 -1 0 0 0 0 1 0 ], and H_2 = σ_x ⊗I_2 ⊗I_2 + I_2 ⊗σ_x ⊗σ_x = [ 1 0 0 1 0 0 0 0; 0 1 1 0 0 0 0 0; 0 1 1 0 0 0 0 0; 1 0 0 1 0 0 0 0; 0 0 0 0 -1 0 0 1; 0 0 0 0 0 -1 1 0; 0 0 0 0 0 1 -1 0; 0 0 0 0 1 0 0 -1 ] . We test the trained DRL agent in various environments to evaluate its performance and robustness. The goal is to assess how well the agent generalizes its learned policies to different scenarios and how it copes with perturbations and variations in the environment. To achieve this, we expose the trained DRL agent to a set of diverse environments, each with unique characteristics and challenges. These environments are carefully designed to represent a wide range of scenarios and potential disturbances that the agent might encounter in real-world applications. During the testing phase, we measure the agent's performance in terms of its ability to achieve the desired objectives and maintain stability in each environment. We also examine its response to changes in the measurement efficiency η_c and time delay disturbances to assess its robustness and adaptability. We first investigate the “perfect case”. In this paper, the “perfect case” entails assuming that negligible delay in solving the SME (<ref>) by classical computers, and perfect detection, that is, measurement efficiency η_c=1. In contrast, situations where there is delay or imperfect detection within the system are collectively referred to as “imperfect cases”. We then show some performance indications for “imperfect cases”. §.§.§ Stabilization of the GHZ state under perfect case We initiate the testing phase to evaluate the ability of arbitrary initial states to stabilize to the target GHZ state within a specified time frame. As shown in Figure <ref>, we employ the comparative approach mentioned in Section <ref>, randomly selecting 50 distinct initial states for control using the DRL agent and the Lyapunov method. The blue and orange lines correspond to the DRL method and the Lyapunov method, respectively. Under the guidance of the DRL agent, the control strategy improved the average evolution time by 22.38% compared to the Lyapunov method (DRL: 33.39 a.u. vs. Lyapunov: 43.02 a.u.). This indicates that our DRL approach successfully stabilizes quantum states to the target GHZ state more rapidly. In addition, we also explore the evolution of two specific initial states, denoted as ρ_0^1= diag[0,1,0,0,0,0,0,0] and ρ_0^2= diag[1,0,0,0,0,0,0,0], mentioned in <cit.> as examples. We repeat their stabilization 50 times each to obtain averaged convergence curves that approximate the system's evolution. Figure <ref>(a) and Figure <ref>(b) depict the evolution of these two distinct initial states. The blue curve represents the evolution controlled by the DRL agent, while the orange curve represents the evolution controlled by the Lyapunov method from <cit.>. It can be observed that the well-trained DRL agent not only achieves stable convergence to the target state but also showcases faster convergence compared to the Lyapunov method. We randomly select a single trajectory under the control of a DRL agent with initial state ρ_0^1. 
The left subplot of Figure <ref> illustrates the evolutionary trajectory with D_ρ_t, and the top images display the Wigner function for a harmonic mode of the system state at five different evolution times. In contrast, the subplot on the right serves as a reference plot for the target three-qubit GHZ state. A comparison reveals that the phase-space distribution of the system state gradually approaches the target state over time, and at t=20 a.u., the system state is identical to the target state. In practical agent training and application, uncertainties often exist. For example, the efficiency of measurements is typically not perfect, and there are frequently issues related to time delays in the feedback process. In the following two subsections we explore the robustness of our DRL agent to these two imperfections. §.§.§ Stabilization of the GHZ state with imperfect measurement We start from the measurement inefficiency. It's important to note that we are using the agent trained under the assumption of “perfect case” to test the robustness of the agent. We assume a measurement efficiency η_c=0.8, which represents a relatively high efficiency achievable in current laboratory settings <cit.>. As shown in the Figure <ref>, 50 randomly selected initial states are successfully stabilized to the target GHZ state under the control of the agent. Therefore, our trained DRL agent has shown good robustness in terms of measurement inefficiency. §.§.§ Stabilization of the GHZ state with time delay We then consider the issue of time delay in feedback processes. Especially in the rapid evolution of quantum systems, the time required for traditional computers to solve the SME (<ref>) is often non-negligible. Therefore, it is worth considering incorporating a fixed compensation for time during the testing process of the agent. For instance, assuming a time delay of 𝓉=0.05 a.u., starting from t=0 a.u. until t=0.05 a.u., the agent receives only the initial state ρ_0 as input and provides signals to control the system's evolution based on the input state. The agent receives the state ρ_1 at t=0.05 a.u., and so on, where at each step, the agent receives the state ρ_t-0.05, which is the state prior to 𝓉=0.05 a.u.. As shown in the Figure <ref>, by selecting 50 different initial states arbitrarily, it can be observed that a well-trained agent still exhibits excellent performance in dealing with time delays, although longer convergence time is required than in the perfect environment in Figure <ref>. In order to gain a clear and informative understanding of the impact of time delay on system convergence, the following analysis was conducted: First, the following assumptions are made: * It is assumed that the maximum evolution time of the system, denoted as T_max, is equal to 100 a.u.. For any evolution, when the system has not converged to the target within the maximum evolution time T_max (i.e., when the distanceD_ρ_T_max> 0.001), it is referred to as a non-convergent trajectory at the maximum evolution time T_max. * A concept of “Stabilization Success Rate” is defined. The objective is to obtain convergence trajectories for 50 different initial states, where each convergence trajectory for an initial state is constructed as an average curve of 50 different trajectories under varying environmental noise. In other words, for 50 different initial states, we have a total of 50 * 50 = 2500 trajectories. 
To obtain these 2500 trajectories that achieve successful convergence within the time T_max, we may need data from more than 2500 trajectories. For instance, when there are 100 non-convergent trajectories, the convergence success rate is calculated as 2500 / (2500 + 100) = 0.9615. As illustrated in Figure <ref>, it can be observed that with an increase in time delay, the average time required for the system state to stabilize towards the target increases, while the “Stabilization Success Rate” decreases. For instance, when the time delay is 0.05 a.u., the average convergence time is 38.52 a.u., with a success rate of 0.9913. However, when the time delay is increased to 0.3 a.u., the average convergence time extends to 91.65 a.u., with a corresponding decrease in the success rate to 0.7909. This aligns with common intuition, where a larger time delay leads to poorer performance of the DRL agent. § CONCLUSIONS In this paper, we designed a DRL agent and applied it to a quantum measurement-based feedback control problem, for stabilizing quantum entangled states. By designing an effective reward function, the trained agent can not only stabilize the quantum system to the target state with high fidelity but also exhibits good robustness. We first tested the well-trained agent in “perfect case” and compared the performance with Lyapunov-based switch method. The results showed that our approach achieves comparable fidelity in shorter time, which has the potential to reduce the interaction between quantum systems with their environment thereby significantly reducing additional noise issues arising from prolonged system-environment interactions. We then tested our agent in different “imperfect cases” including measurement inefficiency and time delay in the feedback loop, where our proposed agent showed good robustness. Lastly, due to the generality of our proposed control framework, extending it to stabilize other quantum states poses minimal challenges. We believe that exploring the behaviors that may exist in real-world environments will be an interesting and challenging research direction. In the future, we will also consider reducing the traning time of DRL agents by designing more efficient and faster algorithms. [The Impact of Different Reward Functions on Agent Performance] We take the two problems described in Section <ref> (stabilization of 2- and 3-qubit systems) and train our agents using the following three different reward functions to compare their impact on the agents' performance. * Sparse Reward: Only rewards are given when the agent reaches specific states. For example, in games like Go, it is challenging to set rewards for every intermediate move, and the state space is vast, so rewards are only obtained when the final victory is achieved. In our quantum system environment, we can set a reward when the distance D_ρ_t≤ 0.001. At all other times, the reward is 0. * Linear Reward: As shown in the Figure <ref>, the system's reward is linearly related to the distance D_ρ_t. * IPF Reward: Refer to Section <ref> for details. To ensure fair comparisons, for a target state, we maintain consistent training parameters for all three agents, with the only difference being the reward functions used during training. Subsequently, we test these trained agents using 50 random initial states. 
As depicted in Figure <ref>(a), in the context of the 2-qubit state stabilization problem, agents trained with linear reward (orange) and IPF reward (blue) perform almost identically, both being able to stabilize the system to the desired state within a reasonable time frame. However, the sparse reward function (green) demonstrates a clear inadequacy, as the agent only receives rewards when reaching specific target states, hindering its ability to effectively explore and learn during the training process. Moving on to the more complex 3-qubit system, as shown in Figure <ref>(b), the agent trained with the sparse reward function struggles to stabilize the system to the target state. Conversely, the agent trained with the linear reward function exhibits good performance and the ability to reach the target state. However, in comparison to the IPF reward function, the agent guided by the linear reward function lacks speed in achieving stability. In contrast, our proposed IPF reward function excels in stabilizing the target state. It provides the agent with balanced guidance, enabling efficient learning and faster convergence to the desired stable states. These results highlight the effectiveness and superiority of the IPF reward approach in improving agent performance.
http://arxiv.org/abs/2408.11532v1
20240821111407
Classification of Mitral Regurgitation from Cardiac Cine MRI using Clinically-Interpretable Morphological Features
[ "Y. On", "K. Vimalesvaran", "S. Zaman", "M. Shun-Shin", "J. Howard", "N. Linton", "G. Cole", "A. A. Bharath", "M. Varela" ]
eess.IV
[ "eess.IV" ]
On et al. Classification of Mitral Regurgitation from Cine MRI Imperial College London, Exhibition Road, London, SW7 2AZ, UK Imperial College Healthcare NHS Trust, Du Cane Road, London, W12 0HS, UK yu.on16@imperial.ac.uk Classification of Mitral Regurgitation from Cardiac Cine MRI using Clinically-Interpretable Morphological Features Y. On 1 K. Vimalesvaran 1 S. Zaman 2 M. Shun-Shin 1 J. Howard 1 N. Linton 1 G. Cole 2 A.A. Bharath 1 M. Varela 1 Aug 2024 ==================================================================================================================== § ABSTRACT The assessment of mitral regurgitation (MR) using cardiac MRI, particularly Cine MRI, is a promising technique due to its wide availability. However, some of the temporal information available in clinical Cine MRI may not be fully utilised, as it requires detailed temporal analysis across different cardiac views. We propose a new approach to identify MR which automatically extracts 4-dimensional (3D + Time) morphological features from the reconstructed mitral annulus (MA) using Cine long-axis (LAX) MRI views. Our feature extraction involves locating the MA insertion points to derive the reconstructed MA geometry and displacements, resulting in a total of 187 candidate features. We identify the 25 most relevant mitral valve features using the minimum-redundancy maximum-relevance (MRMR) feature selection technique. We then apply linear discriminant analysis (LDA) and a random forest (RF) model to determine the presence of MR. Both LDA and RF demonstrate good performance, with accuracies of 0.72±0.05 and 0.73±0.09, respectively, in a 5-fold cross-validation analysis. This approach will be incorporated into an automatic tool to identify valvular diseases from Cine MRI by integrating both handcrafted and deep features. Our tool will facilitate the diagnosis of valvular disease from conventional cardiac MRI scans with no additional scanning or image analysis penalty. All code is made available on an open-source basis at: <https://github.com/HenryOn2021/MA_Morphological_Features>. § INTRODUCTION §.§.§ Mitral Regurgitation Mitral regurgitation (MR) is one of the most common cardiac conditions: it currently affects 31% of the population in Europe <cit.>, and its incidence is rising due to increases in life expectancy. MR occurs when the mitral valve (MV) leaflets do not close properly, leading to the reflow of blood from the left ventricle (LV) to the left atrium (LA) during systole, which can be categorised as primary or secondary. In the former case, it is usually caused by degenerative MV disease and patients can be asymptomatic with normal LV function <cit.>; whereas the latter case involves the deformation of the mitral annulus (MA) caused by LV dysfunction <cit.>. According to clinical guidelines <cit.>, transthoracic echocardiography (TTE) is still used as the first choice for diagnosis of MR. However, it lacks reproducibility, is subjective, and is highly dependent on operator skills <cit.>, especially for the detection of mitral regurgitant flow <cit.>. The assessment of MR severity is challenging even for experienced clinicians, as it comprises qualitative, semi-quantitative, and quantitative measurements <cit.>. Therefore, computed tomography (CT) and cardiac magnetic resonance imaging (cMRI) <cit.> are increasingly used to characterise MR for functional analysis and evaluation of severity. There are, however, few tools to help identify MR from Cine MRI. 
This blocks the clinical potential of these images for MR diagnosis and management. §.§.§ Cardiac Cine MRI Cine (or dynamic) MRI is one of the most commonly acquired cardiac MRI sequences, used to assess the function and morphology of the heart chambers for virtually all clinical cardiac MRI indications <cit.>. Conventional Cine MRI usually employs a 2D balanced steady-state free precession (bSSFP) readout reconstructed into approximately 30 or 50 cardiac phases. Cine MRI is acquired in both short-axis stacks (full coverage of the LV) and single-slice long-axis (2-, 3-, and 4-chamber) views <cit.>. The MV leaflets are thin and protein-rich, making them hard to image using most types of MRI. Regurgitant jets can be observed as a region of reduced blood signal in bSSFP; however, they are often not clearly visible <cit.>. In contrast, the MA insertions can be reliably identified in Cine MRI <cit.>, opening the possibility of studying MR annulus morphology and dynamics using Cine MRI. Several previous studies have examined MR patients using Cine MRI. Xiao et al. <cit.> developed a semi-supervised neural network pipeline for MR classification using Cine MRI. However, despite being fully automated, it lacked clinical interpretability, as the model's decision-making process cannot be explained (the 'black-box' problem). Wu et al. <cit.> developed a semi-automatic approach to extract dynamic features from the reconstructed MA using Cine MRI to identify diastolic dysfunction, which showed comparable results to TTE measurements. However, they did not include the rich morphological features of the reconstructed MA in their study. Ennis et al. <cit.> used feature tracking to approximate the MA points and extract dynamic and morphological features from the reconstructed MA at the end-diastole and end-systole phases. However, they focused on evaluating treatment outcomes in patients with degenerative MV disease, not on MR identification. Leng et al. <cit.> also used feature tracking to extract both morphological and dynamic features from the reconstructed MA using Cine MRI, and showed that MR patients have significantly reduced mitral dynamics and mild annular deformation. However, to date, no studies propose optimally combining a spectrum of morphological features across full cardiac cycles for Cine-based MR classification. §.§.§ Feature Selection Feature selection methods can be used to reduce high-dimensional data to a smaller subset of features, which can then be used as inputs to different machine-learning problems. Minimum-redundancy maximum-relevance (MRMR) is a filter-based feature selection method which ranks features based on two components: relevance and redundancy <cit.>. In an iterative approach, MRMR uses F-statistics or mutual information to identify the features that correlate the most (are most relevant) with a given binary label (e.g., a clinical diagnosis) <cit.>. Each such feature is added to an optimal feature set, and Pearson correlation <cit.> or mutual information is used to discard candidates that are highly correlated with the already-selected features but have lower relevance (to minimise redundancy). §.§.§ Machine Learning We propose to use Linear Discriminant Analysis (LDA) <cit.> and Random Forest (RF) <cit.> to evaluate the performance of the imaging features for MR detection. LDA looks for a linear decision boundary in feature space by fitting the class-conditional densities to the data and applying Bayes' rule <cit.>. RF is an ensemble learning algorithm that combines the outcomes of multiple decision trees to make predictions; each tree is trained on a random subset of the features, which reduces over-fitting and improves accuracy.
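The MRMR ranking described above can be prototyped in a few lines. The sketch below is a minimal Python illustration of the greedy relevance/redundancy trade-off, using the ANOVA F-statistic for relevance and the mean absolute Pearson correlation for redundancy; the data-frame and label names, and the quotient scoring scheme, are placeholders rather than the exact variant used in our pipeline.

# Minimal MRMR-style greedy selection sketch (F-statistic relevance, Pearson-correlation
# redundancy). `features_df` (n_samples x n_features pandas DataFrame) and the binary label
# vector `mr_labels` are placeholders for the 187 candidate features and the MR/no-MR labels.
import numpy as np
import pandas as pd
from sklearn.feature_selection import f_classif

def mrmr_select(X: pd.DataFrame, y: np.ndarray, k: int = 25) -> list:
    relevance = pd.Series(f_classif(X.values, y)[0], index=X.columns)   # ANOVA F per feature
    corr = X.corr().abs()                                               # pairwise |Pearson r|
    selected, remaining = [], list(X.columns)
    for _ in range(k):
        if not selected:
            best = relevance[remaining].idxmax()                        # most relevant feature first
        else:
            redundancy = corr.loc[remaining, selected].mean(axis=1)     # mean |r| to the chosen set
            score = relevance[remaining] / redundancy.clip(lower=1e-6)  # relevance/redundancy quotient
            best = score.idxmax()
        selected.append(best)
        remaining.remove(best)
    return selected

# Hypothetical call on the candidate feature table:
# top25 = mrmr_select(features_df, mr_labels, k=25)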
§.§.§ Aim We propose a machine-learning approach to detect mitral regurgitation using imaging features extracted from long-axis (2-chamber, 3-chamber and 4-chamber) bSSFP Cine MRI. § METHODS §.§.§ Patient Demographics 187 subjects referred for clinical cardiac MRI, consisting of 98 with no MR (mean age 54±17 years, 43% female) and 89 with MR (63±16 years, 38% female), were imaged under ethical approval in this retrospective study. Clinical diagnostic reports based on TTE were used to divide patients into each cohort. Among the 89 MR cases, 73% are of mild severity, 17% moderate, and 9% severe. The patient demographics and volumetric characteristics are shown in Supplementary Table 1. §.§.§ Image Acquisition Conventional breath-held, gated bSSFP Cine CMR <cit.> was performed using 1.5 or 3T Siemens (Erlangen, Germany) Prisma or Aera MRI scanners in three different hospitals. Long-axis views were acquired and reconstructed to 30 cardiac phases at a spatial resolution of 0.7 - 2.1 × 0.7 - 2.1 × 5.0 mm^3. The cardiac phases were ordered such that phase 0 corresponds to the electrocardiography R-peak (end-diastole). §.§.§ Data Analysis Two mitral valve insertion points are manually labelled at every 5th cardiac phase (CP, out of 30) in each of the 2-, 3-, and 4-chamber long-axis views (see Fig. <ref>A-C). The inter-operator error in the positioning of mitral valve insertion points is assessed in 20 randomly selected patients. The coordinates of each of these 36 (= 6 points × 6 cardiac frames) points are converted to patient (world) coordinates using DICOM header information and used to study the morphology and dimensions of the mitral annulus (MA). For each labelled CP, the 6 MA points are used to find the best-fit plane in 3D space and the best-fit ellipse on that plane (see Fig. <ref>D and E). As shown in Fig. <ref>D, the MA points closely approximate an ellipse, which is proposed as a simple morphological approximation of the saddle-shaped MA. The fitted ellipses are co-registered to a common centroid at (0,0,0) across patients, and the semi-major axis of the ellipse is aligned with the x-axis at CP_0 (see Supplementary Fig. 1). The best-fit plane is identified using singular value decomposition (SVD) <cit.> and the ellipse is fitted by projecting the 3D points onto the 2D plane using Rodrigues' rotation <cit.>. At each CP, we extract the following features from the fitted ellipse: area, perimeter, semi-minor axis length (b), semi-major axis length (a), eccentricity, b:a ratio, MA height (the sum of the perpendicular distances of the most superior and most inferior points from the best-fit plane), and the (x,y,z) components of the plane normal vectors. We additionally estimate changes across consecutive cardiac phases in: the angle of the MA plane with a horizontal plane (tilt) and the in-plane rotation angle (θ) of the ellipse's semi-major axis with respect to the x-axis. The magnitude and individual (x-, y-, z-) axis displacements of the 3D MA point positions across the cardiac cycle are also measured (see Fig. <ref>D, E).
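The plane- and ellipse-fitting step can be sketched as follows. This is a simplified illustration, assuming the 6 MA points of a single cardiac phase are given in world coordinates; in particular, the semi-axes are approximated here from the principal directions of the projected points rather than by a full conic fit, so the released code should be consulted for the exact implementation.

# Illustrative sketch of the plane/ellipse fitting for the 6 MA points at one cardiac phase:
# SVD best-fit plane, Rodrigues rotation into the plane, and approximate ellipse parameters.
import numpy as np

def fit_ma_plane_and_ellipse(points: np.ndarray) -> dict:
    """points: (6, 3) array of MA insertion points in patient (world) coordinates [mm]."""
    centroid = points.mean(axis=0)
    centred = points - centroid
    # Best-fit plane: the normal is the right-singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(centred)
    normal = vt[2]
    # Rotate the plane normal onto the z-axis (Rodrigues' rotation formula).
    z = np.array([0.0, 0.0, 1.0])
    axis = np.cross(normal, z)
    s, c = np.linalg.norm(axis), float(normal @ z)
    if s < 1e-12:
        rot = np.eye(3)
    else:
        k = axis / s
        kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        rot = np.eye(3) + s * kx + (1 - c) * (kx @ kx)
    rotated = centred @ rot.T              # the z-column is now the out-of-plane offset
    xy = rotated[:, :2]
    ma_height = rotated[:, 2].max() - rotated[:, 2].min()   # peak-to-trough out-of-plane excursion
    # Semi-axes from the principal directions of the projected points (approximate ellipse).
    eigvals = np.sort(np.linalg.eigvalsh(np.cov(xy.T)))[::-1]
    a, b = np.sqrt(2.0 * eigvals)          # semi-major / semi-minor estimates
    ecc = np.sqrt(max(0.0, 1.0 - (b / a) ** 2))
    return {"normal": normal, "a": a, "b": b, "b_over_a": b / a,
            "eccentricity": ecc, "area": np.pi * a * b, "ma_height": ma_height}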
§.§.§ Data Selection In total, 187 candidate features are extracted. The comprehensive list of features and the number of values measured for each feature are outlined in Supplementary Table 2. We select the most salient features by estimating individual MRMR coefficients between each feature and the MR/no MR clinical labels <cit.>. The top K features with the highest relevance and lowest redundancy are selected as the input vectors for the classification stage. To decide how many features to keep for the subsequent analyses, we tested K = 5, 10, 25, and 50 features and settled on K = 25, which produced the best cross-validation (CV) results. §.§.§ Machine Learning Classification We select two machine-learning models to classify the optimal feature set (K = 25) into one of the two classes (No MR and MR): linear discriminant analysis (LDA) <cit.> and random forest (RF) <cit.>. To optimise the RF model, we perform a grid search over the number of estimators, the number of features to consider when looking for the best split, the maximum tree depth, and the maximum number of leaf nodes, using the training and validation sets with the optimal feature set. The best RF configuration for the dataset is listed in Supplementary Table 4. §.§.§ Performance Assessment To mitigate the risk of overfitting and provide a reliable evaluation of the models, we perform stratified 5-fold cross-validation (CV) with a constant random seed (for reproducibility) and data shuffling <cit.>. From the 187 cases, 10 cases from each class are randomly selected as a test set, which is not included in the 5-fold CV. Of the remaining 167 cases, 134 are used for training and 33 for validation in each fold. The test set is used for an independent evaluation of the models and to minimise data leakage. We assess the classification performance using the following metrics: accuracy, specificity, sensitivity, F1 score, and Area Under the Receiver Operating Characteristic Curve (ROC AUC) <cit.>.
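A minimal scikit-learn sketch of this evaluation protocol (held-out test set plus stratified 5-fold CV) is given below. The variable names X25 and y stand for the 25 selected features (as a NumPy array) and the binary MR labels, and the RF hyper-parameters shown are placeholders rather than the grid-searched configuration of Supplementary Table 4; GridSearchCV can be used to reproduce that tuning.

# Sketch of the evaluation protocol: hold out ~10 cases per class, then run stratified
# 5-fold CV of LDA and RF on the remaining cases and report the assessment metrics.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, train_test_split
from sklearn.metrics import accuracy_score, f1_score, recall_score, roc_auc_score

X_dev, X_test, y_dev, y_test = train_test_split(
    X25, y, test_size=20, stratify=y, random_state=0)        # ~10 cases per class held out

models = {"LDA": LinearDiscriminantAnalysis(),
          "RF": RandomForestClassifier(n_estimators=200, random_state=0)}   # placeholder config

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for name, model in models.items():
    accs, sens, specs, f1s, aucs = [], [], [], [], []
    for train_idx, val_idx in cv.split(X_dev, y_dev):
        model.fit(X_dev[train_idx], y_dev[train_idx])
        prob = model.predict_proba(X_dev[val_idx])[:, 1]
        pred = (prob >= 0.5).astype(int)
        y_val = y_dev[val_idx]
        accs.append(accuracy_score(y_val, pred))
        sens.append(recall_score(y_val, pred))                 # sensitivity: recall on the MR class
        specs.append(recall_score(y_val, pred, pos_label=0))   # specificity: recall on the no-MR class
        f1s.append(f1_score(y_val, pred))
        aucs.append(roc_auc_score(y_val, prob))
    print(f"{name}: CV accuracy {np.mean(accs):.2f}±{np.std(accs):.2f}, "
          f"sensitivity {np.mean(sens):.2f}, specificity {np.mean(specs):.2f}, "
          f"F1 {np.mean(f1s):.2f}, ROC AUC {np.mean(aucs):.2f}")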
§ RESULTS The proposed method can identify MR patients using simple MA morphological features extracted from long-axis Cine MRI. LDA and RF achieve cross-validation accuracies of 0.72±0.05 and 0.73±0.09, respectively. The error in insertion point placement between two independent observers is 2.50±1.99 mm, corresponding to 10.5% of the mean ellipse semi-major axis length. §.§ Data Selection According to the MRMR analysis, 12 of the 25 most relevant imaging features are shown in Table <ref>, ranked in descending order by MRMR F-statistic relevance. In the LDA column, the ellipse perimeter at CP_5 and CP_10, the semi-major axis length (a) and the semi-minor axis length (b) at CP_10 have high absolute importance coefficients; whereas the magnitude displacement (u_mag) of P2 between CP_20-15, the displacement of P5 along the x-axis between CP_15-10, the MA height, and the semi-minor axis length (b) at CP_5 have high coefficient values in the RF model. The remaining 13 of the 25 features are listed in Supplementary Table 3. §.§ Performance Assessment Table <ref> summarises the evaluation metrics of the two models in classifying the 'No MR' and 'MR' cohorts with the hand-engineered features. From the 5-fold CV and test set results, RF performs marginally better than LDA for all metrics. In Fig. <ref>A and B, the 4 most important LDA features (perimeter_CP_10, perimeter_CP_5, a_CP_10, and b_CP_10) already show some separability between the 2 classes when considered in pairs. In Fig. <ref>A-D, the plots illustrate the change in the 4 feature values with the highest RF coefficients across the full cardiac cycle, which further depict the class separation in the selected features at the specific CP (blue box). § DISCUSSION In this study, we propose a semi-automatic pipeline to accurately identify mitral regurgitation from three clinical long-axis Cine MRI views. We use clinically-interpretable hand-engineered features, which quantitatively characterise the morphology of the MA, reconstructed from 6 manually annotated MA insertions. We have not included features related to the intensity of the blood in Cine MRI, although flow-related dephasing could in principle contribute to the classification of MR <cit.>. Flow-induced dephasing of the blood signal depends strongly on the imaging parameters and slice positioning, and is therefore extremely variable across the patient cohort. We performed an offline study on the same cohort and dataset to evaluate the usefulness of flow information for classifying MR using a 1-layer convolutional neural network (CNN): the labelled points were used to extract the blood pixel intensities across the MV, creating a 2D array that records these intensities across the cardiac cycle. However, the CNN was unable to capture features that distinguish the 2 classes; its performance is likely hampered by the limitations of Cine MRI, and phase-contrast MRI (in which pixel intensity is correlated with flow velocity) could be a better alternative sequence for this purpose. Although the MV leaflets cannot be observed in Cine MRI, we show that these images include clinically important MA morphological information that, when optimally combined, can classify MR accurately. In Table <ref>, the magnitude displacement of the labelled points in Cine 2-ch and 3-ch at mid-diastole (CP_20-15) has high relevance to the diagnostic outcome and high importance in classification, especially in the RF model. This is illustrated in Fig. <ref>A for the mitral septal point in Cine 3-ch, which shows the largest class separation between the 2 cohorts at the specific CP (in blue). Restricted MA motion is often associated with secondary MR, in which the MA becomes dilated and less dynamic <cit.>. The semi-minor axis length (b) of the ellipse and the mitral annulus height during systole also have high relevance and high importance in the RF model. In Fig. <ref>B and D, both features show good separation between the 2 cohorts at the specific CPs. The higher value of b in the MR cohort reflects dilation of the MA, as is often seen in patients with MR, especially those with moderate to severe regurgitation <cit.>. In Fig. <ref>, the 2 sets of features with the highest LDA coefficients show some class separation, even without considering the other 8 features. The plots also show that the two classes are contiguous at the LDA decision boundary. The overlapping group at the boundary is largely made up of mild MR cases, as the MA size can still be within the normal range in primary MR patients <cit.>. This makes it challenging to find a linear combination of features that best separates the two classes, and suggests that the RF's ability to partition the feature space into conditional subgroups is crucial, which may explain its better performance. There are some limitations to the proposed pipeline. First, the landmarks are manually annotated, but we plan to make this step (and thus the entire pipeline) fully automatic by using the labelled landmarks from this study as training data for a CNN that can reliably localise the landmarks in Cine MRI. Second, we do not assess the performance of the method separately for primary and secondary MR. We speculate that our method may be better suited for secondary MR, which is expected to bring greater morphological changes to the MA.
Third, the limitations of TTE (mentioned in the Introduction) can lead to inter- and intra-observer disagreements in MR severity stratification, and we therefore do not attempt to classify MR severity. Our future plans include using the proposed features as one of the auxiliary inputs to a deep NN pipeline that can identify multiple cardiac valvular diseases from clinical Cine MRI. This NN pipeline will potentially not only identify MR, but also classify the MR type (primary/secondary), by bringing in other types of MRI and imaging information. A more detailed automatic analysis of MA features from Cine MRI could enable accurate categorisation of the type of MR (primary/secondary) and localisation of the possible pathological regions. This additional knowledge is essential for disease management choices (including the type of intervention and/or medical therapy), surpassing the information that can be obtained from standalone echocardiography.
http://arxiv.org/abs/2408.11543v1
20240821115046
Detecting virtual homomorphisms via Banach metrics
[ "Liran Ron-George", "Ariel Yadin" ]
math.GR
[ "math.GR", "math.MG", "20F65, 51F30" ]
Department of Mathematics, Ben-Gurion University of the Negev lirar@post.bgu.ac.il, yadina@bgu.ac.il We thank Y. Glasner and A. Karlsson for useful discussions and insights. Research supported by the Israel Science Foundation, grant no. 954/21. The first author is also partially supported by the Israel Science Foundation, grant no. 1175/18. § ABSTRACT We introduce the notion of Banach metrics on finitely generated infinite groups. This extends the notion of a Cayley graph (as a metric space). Our motivation comes from trying to detect the existence of virtual homomorphisms into ℤ. We show that detection of such homomorphisms through metric-functional boundaries of Cayley graphs is not always possible. However, we prove that it is always possible through the metric-functional boundary of some Banach metric on the group. Detecting virtual homomorphisms via Banach metrics Liran Ron-George, Ariel Yadin August 26, 2024 § INTRODUCTION Gromov's theorem regarding groups of polynomial growth <cit.> is widely considered a cornerstone of geometric group theory.
In that paper Gromov proved that any finitely generated group of polynomial growth is virtually nilpotent. The proof has two main stages to it. The first, is to show that in any finitely generated group of polynomial growth there is a finite index subgroup with a surjective homomorphism onto the additive group of integers. (This is usually done by constructing a representation of the group.) We call such a homomorphism a virtual homomorphism for short, see below, Definition <ref>. (“Virtual” because the homomorphism is only defined on a finite index subgroup.) The second stage in Gromov's proof is an induction on the degree of polynomial growth showing that such groups must be virtually nilpotent. This was not the first time growth of finitely generated groups was studied. Already in 1968 Milnor and Wolf considered the polynomial growth setting for solvable groups in <cit.>, proving that any finitely generated solvable group is either of exponential growth or virtually nilpotent. (Finitely generated nilpotent groups were shown to have polynomial growth by Wolf in <cit.>, and later Bass and Guivarc'h <cit.> computed the exact degree of polynomial growth of a finitely generated nilpotent group.) In the same year 1968, Milnor <cit.> asked if there exist finitely generated groups that are not of exponential growth and not of polynomial growth (the so called groups of intermediate growth). This was finally answered affirmatively by Grigorchuk <cit.>. Grigorchuk also conjectured <cit.> that there is a “gap” in the possible growth functions of finitely generated groups; namely, all finitely generated groups of small enough growth must actually be of polynomial growth, see Conjecture <ref> for a precise statement. One naive thought on how to attack Grigorchuk's gap conjecture would be to reproduce Gromov's strategy: prove that any finitely generated group of small growth admits a finite index subgroup with a surjective homomorphism onto , and then use some sort of induction argument. Detecting surjective homomorphisms onto provides a lot of information about a group. If G/N ≅ for some normal subgroup N G, then it is not difficult to see that G ≅⋉ N, where acts on N by some automorphism of N. (This may be compared to the Gromoll splitting theorem <cit.>, in a different context.) We are thus motivated to try and understand how to find virtual homomorphisms on groups. One suggestion by A. Karlsson <cit.> was to consider the action of the group G on a boundary of the metric-functional compactification of G. Elements of this compactification are functions from G into , and a finite orbit for the canonical action provides a virtual homomorphism. This will be explained precisely in Section <ref>. Naively one may wish to consider compactifications of Cayley graphs of the group, since these metric spaces are the ones used to measure growth, and intimately connect the geometric and algebraic properties of the group. However, we show in Theorem <ref> that on the free group, although there exist surjective homomorphisms onto , the metric-functional boundaries of Cayley graphs of the free group never contain a finite orbit. We therefore extend the notion of Cayley graphs to a broader class of metric spaces on a group, which we dub Banach metrics (see Definition <ref>). These are quasi-isometric to Cayley graphs, so still capture the correct geometry, but are general enough metric spaces so that the metric-functionals can still detect virtual homomorphisms. 
Our main result, Theorem <ref>, states that a finitely generated infinite group G admits a virtual homomorphism if and only if there exists some Banach metric on G such that its metric-functional boundary contains a finite orbit (and therefore functions in this orbit are virtual homomorphisms). Another issue with working only with Cayley graphs is that this notion is not rich enough for some basic operations in geometric group theory. Specifically, the restriction of the metric to subgroups, even to those of finite index, does not typically result in a Cayley graph. In contrast, we show in Theorem <ref> that the restriction of a Banach metric to a finite index subgroup always results in a Banach metric on that subgroup. Banach metrics are much more flexible than Cayley graphs, as can be seen by the construction in Lemma <ref>. The above mentioned results, Theorems <ref> and <ref>, motivate further exploration of such metrics and their connection to growth in small groups. To summarize the comparison between Banach metrics and Cayley graphs: * Banach metrics pass to finite index subgroups whereas Cayley graphs do not. * Cayley graphs cannot always “detect” virtual homomorphisms, however Banach metrics always do. * Banach metrics are much more flexible, which may be an advantage for creatively constructing useful ones. We now move to precisely define the above notions and state our results. § BACKGROUND §.§ Metric-functionals Let (X,d) be a metric space with a base point x_0 ∈ X and denote L(X,d) the set of all functions h:X →ℝ such that h is 1-Lipschitz (i.e. |h(x)-h(y)| ≤ d(x,y) for all x,y ∈ X) and h(x_0)=0. Equip L(X,d) with the topology of pointwise convergence and note that L(X,d) is compact by Tychonof's theorem. The set X embeds into L(X,d) by identifying x ∈ X with the so called Busemann function b_x:X →ℝ given by b_x(y)=d(x,y)-d(x,x_0). We denote the closure of {b_x | x ∈ X} in L(X,d) by (X,d) and define the metric-functional boundary of (X,d) to be ∂ (X,d)=(X,d)∖{b_x | x ∈ X}. The elements of (X,d) are called metric-functionals, and they play a role corresponding to linear functionals, but on general metric spaces that do not afford a linear structure. See <cit.> and references therein. If we replace the above topology on L(X,d) with uniform convergence on compacts, we would arrive at an analogous compact space, which has been considered extensively (see <cit.>, and many other texts). In this case elements in the boundary are sometimes called horofunctions. Since we will always consider cases where X is countable and discrete, there is no distinction in this paper between horofunctions and metric functionals. §.§ Cayley graphs Let S be a finite symmetric generating set for a group G. That is, G = S, |S| < ∞, S=S^-1. We consider the Cayley graph of G with respect to S. This is a graph whose vertices are the elements of G, and edges are given by the relation x ∼ y x^-1 y ∈ S. Since S is symmetric this defines a graph, denoted Γ(G,S), and therefore a metric space, where the metric is the graph distance (which is incidentally the word metric with respect to S). If d_S denotes the graph metric, then it is easy to see that d_S is left-invariant, d_S(zx,zy) = d_S(x,y) for all x,y,z ∈ G. It is convenient to use the notation |x|_S = d_S(x,1). Recall that two metric spaces (X,d), (Y,ρ) are quasi-isometric if there exists a quasi-isometry : X → Y. 
This means that there exists C>0 such that for any y ∈ Y there exists x ∈ X such that ρ((x), y) ≤ C and also for any x,x' ∈ X C^-1 d(x,x') - C ≤ρ ( (x), (x') ) ≤ C d(x,x') + C It is a fact that quasi-isometries provide an equivalence relation between metric spaces, see <cit.>. A simple exercise shows that for two finite symmetric generating sets S,T of a group, the corresponding metrics d_S, d_T are bi-Lipschitz. That is, there exists some constant C = C(S,T)>0 such that C^-1 d_S(x,y) ≤ d_T(x,y) ≤ C d_S(x,y) for all x,y ∈ G. Specifically, all Cayley graph metrics on the same group are quasi-isometric to one another. Moreover, if H ≤ G is a subgroup of finite index [G:H] < ∞ of a finitely generated group G, then H is finitely generated as well (see <cit.>). The Cayley graphs of G and of H are quasi-isometric as well (see <cit.>). Cayley graphs are geodesic metric spaces. One can show that this property implies that metric-functionals of Cayley graphs are always unbounded from below. In fact, for any h ∈ (G,d_S) and any integer r ≥ 0 there exists x ∈ G with h(x) = -|x|_S = -r. (For a proof see Lemma <ref> below.) This property separates metric-functionals from interior points b_x, x ∈ G, because one readily verifies that b_x(y) ≥ -|x|_S for all x,y∈ G. Another property of Cayley graphs is that they are proper metric spaces; balls are compact (finite in our case). In fact, any geodesic, integer valued, proper, left-invariant metric on a group G can be easily shown to be a metric arising from a Cayley graph. We call such metrics Cayley metrics. These properties imply that for a converging sequence b_x_n→ h ∈ (G,d_S), we have a dichotomy: Either x_n=x for all large enough n, and h=b_x, or h ∈ (G,d_S) and |x_n| →∞, but both cannot hold simultaneously. So boundary points in (G,d_S) are indeed “points at infinity”, and the structure (G,d_S) is a compactification of G. For an example see Example <ref> below. §.§ Action on the boundary Let d be any metric on a group G. G acts naturally on L(G,d) by x.h(y) = h(x^-1 y) - h(x^-1). One readily verifies that this is a continuous action, and that (G,d) is G-invariant. (Note that x.b_y=b_xy.) Assume that h ∈ L(G,d) is a fixed point for the G-action. That is, x.h=h for all x ∈ G. This precisely means that h is a homomorphism from G into . Furthermore, if h ∈ L(G,d) has a finite orbit |G.h| < ∞, then by taking H to be the stabilizer of h, we obtain a finite index [G:H] < ∞ subgroup, such that the restriction of h to H is a homomorphism from H into . If h |_H ≡ 0 then h is a bounded function on G. Thus, if the situation is such that h ∈ (G,d) and all metric-functionals are unbounded, then we have obtained a non-trivial homomorphism from the finite index subgroup H into . So h(H) is an infinite finitely generated abelian group, implying that H admits some surjective homomorphism onto . Let G be a group. A virtual homomorphism on G is a function :G → such that there exists a finite index subgroup [G:H] < ∞ for which the restriction |_H is a non-trivial homomorphism. Let us remark that in many texts a group admitting a virtual homomorphism is called a virtually indicable group. Thus, we have seen that if h ∈(G,d) is unbounded and has a finite orbit |G.h| <∞, then h:G → is such a virtual homomorphism. When d=d_S is the metric of some Cayley graph, it is however not true that any virtual homomorphism can be found this way, as the next theorem shows. Let _d be the free group of d ≥ 2 generators. Let d_S be the metric of some Cayley graph on _d. 
Then, there are no finite orbits in (_d,d_S). The proof of Theorem <ref> is in Section <ref>. §.§ Banach metrics In light of Theorem <ref>, if we consider the “detection problem” for virtual homomorphisms via the metric-functionals, we must broaden the types of possible metrics we use to more than just Cayley graphs. As mentioned above, metric-functionals play an analogous role to linear functionals. In the linear setting (vector spaces), the only bounded function which is a linear functional is the 0 functional (trivial functional). In the general metric space setting, the properties of triviality and boundedness become distinct. To preserve our analogy to the linear world we require that metric functionals are unbounded. This is in addition to other metric properties, as detailed in the following definition. Let G be a finitely generated infinite group. A metric d on G is called a Banach metric if it has the following properties: For all x,y,z ∈ G, * d(x,y) ∈ (integer valued). * d(zx,zy)= d(x,y) (left-invariant). * For any r>0 the ball B_d(r) = { x ∈ G : d(x,1) ≤ r } is finite ((G,d) is a proper metric space). * (G,d) is quasi-isometric to a Cayley graph (group geometry). * Any metric-functional h ∈ (G,d) is an unbounded function. It follows from Lemmas <ref> and <ref> below that any Cayley metric is a Banach metric. However, Banach metrics are more general than Cayley graphs. A preliminary example: Let G be a group with a finite symmetric generating set S. It is not difficult to verify that M · d_S is a Banach metric for any positive integer M. Also, if M>1 then M · d_S cannot be a Cayley metric for the technical reason that Cayley metrics have minimal distance 1, and this metric has minimal distance M. A more intriguing example will be given in Lemma <ref>. §.§ Finite index subgroups When discussing metrics on groups, it makes sense to consider subgroups as subspaces of the original metric space (by inducing the metric on the subgroup). However, in the Cayley graph setting, this does not result in the same structure. Typically, the induced metric on a subgroup will not be a Cayley metric. For example, it may fail to be a geodesic metric. This is the case even for subgroups of finite index. Thus, one sees that the category of Cayley graphs may not be useful if we wish to permit ourselves to pass freely to finite index subgroups (as is usually the situation when considering geometric properties of the group). Contrary to the situation for Cayley graphs, we prove that Banach metrics do induce Banach metrics on finite index subgroups. This stability is another motivation to consider these more general types of metrics. Let G be a finitely generated group, and let H ≤ G be a finite index subgroup [G:H] < ∞. Let d be a Banach metric on G, and let d_H be the induced metric on H (as a subset). Then, d_H is a Banach metric on H. The proof of Theorem <ref> is in Section <ref>. §.§ Detection of virtual homomorphisms The notion of a Banach metric is not just a generalization, but, as mentioned, it is useful for detecting virtual homomorphisms. This is our main result. Let G be a finitely generated group. The following are equivalent: * G admits some virtual homomorphism. * There exists a Banach metric d on G and some h ∈ (G,d) such that h has a finite orbit |G.h| < ∞. Moreover, if G admits an actual homomorphism onto , then h ∈ (G,d) above can be chosen so that it is a fixed point of G. The proof of Theorem <ref> is at the end of Section <ref>. 
§.§ Nilpotent groups It is well known that any finitely generated nilpotent group admits a homomorphism onto ℤ (indeed it always has an infinite Abelianization). Thus, virtually nilpotent groups always admit some virtual homomorphism. Walsh <cit.> has shown that in any Cayley graph of a nilpotent group there is a finite orbit in the metric-functional boundary (see also <cit.>). To our knowledge, there is no such result for virtually nilpotent groups. As an immediate consequence of Theorem <ref> we have: Let G be a finitely generated virtually nilpotent group. There exists some Banach metric d on G with a finite orbit in (G,d). Corollary <ref> is new even for virtually Abelian groups, as far as we know. In order to prove Theorem <ref> we are required to specifically analyze virtually Abelian groups. As part of the proof of Theorem <ref>, we prove a stronger version of Corollary <ref> for the (special) case of virtually Abelian groups, as follows. Let G be a finitely generated virtually Abelian group. There exists some Cayley metric d_S on G with a finite orbit in (G,d_S). Theorem <ref> is proven in Section <ref>, see Corollary <ref> there. While this work was being prepared, Bodart & Tashiro uploaded a preprint to the arXiv in which they conjectured that a group is virtually Abelian if and only if it has some Cayley metric with a countable metric-functional boundary, see <cit.>. Through personal communication Bodart & Tashiro have informed us that they can most likely prove the “only if” part: any Cayley graph of a virtually Abelian group has a countable metric-functional boundary. As already observed by Karlsson <cit.>, by considering a stationary measure on the boundary, one obtains that a countable boundary implies the existence of a finite orbit (take a maximal atom of this measure). It is not difficult to see that the proof of Theorem <ref> and Corollary <ref> actually shows that for a virtually nilpotent group there always exists some Cayley graph with a countable metric-functional boundary. We have chosen not to expand on this too much, as Bodart & Tashiro's yet unpublished result is stronger than our Theorem <ref>, since it asserts the same for any Cayley graph of a virtually Abelian group. §.§ Gap conjecture We recall the usual partial order on monotone functions. For monotone non-decreasing f,h : ℕ → [0,∞) we write f ≼ h if there exists C>0 such that f(n) ≤ C h(Cn) for all n ∈ ℕ. This provides an equivalence relation on such functions by f ∼ h if and only if f ≼ h and h ≼ f. The growth of a finitely generated group is the equivalence class of the function 𝗀𝗋(r) = |B(1,r)|, where B(1,r) is the ball of radius r in some fixed Cayley graph. Since different Cayley graph metrics are quasi-isometric, they provide the same equivalence class of growth, so this definition does not depend on the specific choice of Cayley graph. As mentioned in the introduction, the following has been conjectured by Grigorchuk <cit.>. Let G be a finitely generated group of growth ≼ r ↦ exp(r^α) for some α < 1/2. Then, G is virtually nilpotent (and so actually has polynomial growth). In light of our main result, Theorem <ref>, and the fact that virtually nilpotent groups always admit virtual homomorphisms, we make the following (logically weaker) conjecture. Let G be a finitely generated group of growth ≼ r ↦ exp(r^α) for some α < 1/2. Then, there exists a Banach metric d on G with a finite orbit in (G,d).
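To make the objects in these statements concrete, the following minimal Python sketch works through the simplest case, G = ℤ^2 with the standard generating set (the case treated in detail in Example <ref> below): along the sequence x_n = (n,n), the Busemann functions b_{x_n} converge pointwise to h(z) = -z_1 - z_2, a metric functional which is a surjective homomorphism onto ℤ and a fixed point of the G-action.

# Illustrative numerical check on Z^2 with the standard generators (word metric = l^1 norm):
# b_{x_n}(z) = |x_n - z|_1 - |x_n|_1 converges pointwise to h(z) = -z_1 - z_2 along x_n = (n, n).
import itertools

def l1(x):
    return abs(x[0]) + abs(x[1])

def busemann(x, z):                      # b_x(z) = d(x, z) - d(x, 0)
    return l1((x[0] - z[0], x[1] - z[1])) - l1(x)

def h(z):                                # candidate limit functional
    return -z[0] - z[1]

test_points = list(itertools.product(range(-3, 4), repeat=2))
for n in (1, 5, 50):
    x_n = (n, n)
    max_err = max(abs(busemann(x_n, z) - h(z)) for z in test_points)
    print(f"n = {n:3d}: max |b_x_n(z) - h(z)| over the 7x7 box = {max_err}")
# Once n exceeds the radius of the (fixed, finite) test box the error is 0, and a direct
# computation gives h(y - x) - h(-x) = h(y) for all x, y, i.e. h is fixed by every translation.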
§ METRIC FUNCTIONALS AND QUOTIENT GROUPS §.§ Basic properties of metric functionals We include some basic properties of metric functionals which we will require. The proofs are well known, and we include them only for completeness. Recall that a metric is geodesic if there is a geodesic path connecting any two points. Specifically, for an integer valued metric d on G, we say that d is geodesic if for any x,y ∈ G there exist x=z_0 , z_1 , …, z_n=y such that d(z_j , z_k)= |j-k| for all 0 ≤ j,k ≤ n. Let d be an integer valued proper geodesic metric on X. Fix a base point x_0 ∈ X. Let (x_n)_n be a sequence such that d(x_n,x_0) →∞. Then, for any r ∈ there exists x ∈ X and an infinite subset I ⊂ such that b_x_n(x) = -d(x,x_0) = -r for all n ∈ I. As a consequence, if b_x_n→ f ∈ (X,d) for a sequence such that d(x_n,x_0) →∞, then for every r ∈ there exists x ∈ X with d(x,x_0)=r = -f(x). Specifically the function f is unbounded from below. Since the metric is integer valued, the topology induced by the metric is discrete. As X is proper, balls must be finite sets. Fix r ∈ and let S = { x : d(x,x_0) = r }. Let y ∈ X be such that d(y,x_0) > r. We assumed that d is geodesic, so we may choose a finite geodesic from the base point x_0 to y; a finite sequence x_0 = z_0 , z_1, …, z_m = y such that d(z_j , z_k)= |j-k| for all 0 ≤ j,k ≤ m. Consider the point w = z_r. Since d(w,x_0) = d(z_r,z_0) = r, we have that w ∈ S and b_y(w) = d(z_r,z_m) - d(z_0,z_m) = - r = - d(w,x_0) . Thus, for any n such that d(x_n,x_0)>r, there exists some w_n ∈ S such that b_x_n(w_n) = - r = - d(w_n,x_0). Since S is finite, there must exist some x ∈ S such that w_n = x for infinitely many n. Setting I = { n : w_n = x} completes the proof of the first assertion. For the second assertion assume that b_x_n→ f and d(x_n,x_0) →∞. Fix r ∈. The first assertion tells us that for some infinite subset I ⊂ N we have b_x_n(x) = -r for all n ∈ I and some x. This implies that f(x) = lim_n →∞ b_x_n(x) = lim_I ∋ n →∞ b_x_n (x) = - r . Let d be an integer valued proper metric on X. Fix a base point x_0 ∈ X. Then, if b_x_n→ h ∈ (X,d), it must be that d(x_n, x_0) →∞. Assume that (d(x_n,x_0))_n is a bounded sequence and that b_x_n→ h. We will show that h ∉(X,d), that is h = b_x for some x ∈ X. As before, the topology induced by the metric is discrete, and being proper, balls must be finite sets. So there is some finite set B such that x_n ∈ B for all n. Hence there must be x ∈ B such that x_n=x for infinitely many n. As b_x_n→ h, it must be that h=b_x. §.§ Banach metric construction We now move to construct a Banach metric by combining two Cayley graphs in the right way. This construction exhibits how Banach metrics offer more flexibility than just Cayley graphs. It will be central to the proof of Theorem <ref>. Let G be finitely generated infinite group, and let π: G → H be a surjective homomorphism. Suppose that d_G is a Cayley metric on G and d_H is a Cayley metric on H. Write | x|_G = d_G(x,1_G) and |q|_H = d_H(q,1_H) (here 1_G,1_H denote identity elements in the respective groups). Assume that there exists C ≥ 1 such that |π(x)|_H ≤ C|x|_G for every x ∈ G, and also for any q ∈ H there exists x ∈π^-1({q}) such that |x|_G ≤ C |q|_H. Fix an integer M > C and define D(x,y) = max{ d(x,y) , M · d_H(π(x), π(y)) } Then, D is a Banach metric on G. It is easy to verify that D is indeed a metric, which is proper, integer valued, and left-invariant. 
Note that d_G(x,y) ≤ D(x,y) ≤ C d_G(x,y), implying that the identity map on G is a quasi-isometry between (G,D) and (G,d_G). Since d_G is a Banach metric, it itself is quasi-isometric to any Cayley graph, hence so is (G,D). This verifies the first 4 properties of a Banach metric from Definition <ref>. Denote 1=1_G and |x|_D = D(x,1). To differentiate between the different metrics, we denote b_x^D(y) = D(x,y) - |x|_D, b_x^G(y) = d_G(x,y) - |x|_G and b_q^H(p) = d_H(q,p) - |q|_H. We now prove the fifth property in Definition <ref>. Let F ∈ (G,D). Choose some sequence (g_n)_n such that b_g_n^D → F. We know by Lemma <ref> that |g_n|_D →∞. We have 3 cases: Case I. lim sup_n →∞( |g_n|_G - M · |π(g_n)|_H ) = ∞ In this case, without loss of generality (by passing to a subsequence), we can assume that |g_n|_G - M · |π(g_n)|_H →∞. Since |g_n|_G →∞, and since d_G is assumed to be geodesic, we can use Lemma <ref>, so that by passing to a further subsequence, we may assume without loss of generality that b_g_n^G → f ∈ (G,d_G), and f is unbounded from below. Fix some r ∈. Choose x so that f(x) ≤ -r. Since |g_n|_G - M · |π(g_n)|_H →∞, there exists n(r) such that |g_n|_G - M · |π(g_n)|_H > |x|_G + M · |π(x)|_H for all n ≥ n(r). We thus obtain for n ≥ n(r) that d_G(x,g_n) ≥ |g_n|_G - |x|_G > M ·( |π(g_n)|_H + |π(x)|_H ) ≥ M · d_H (π(g_n), π(x) ) . So D(g_n,x) = d_G(g_n,x), implying that b_g_n^D(x) ≤ b_g_n^G(x) for all n ≥ n(r). Taking n →∞ implies that F(x) ≤ f(x) ≤ -r. This holds for arbitrary r, so F is unbounded from below in Case I. Case II. lim inf_n →∞( |g_n|_G - M · |π(g_n)|_H ) = - ∞ As in Case I, without loss of generality, we can assume that M · |π(g_n)|_H - |g_n|_G →∞. Since d_H is assumed to be geodesic, by Lemma <ref> we can assume without loss of generality that b_π(g_n)^H → h ∈ (H,d_H) and h is unbounded from below. Fix r ∈. Choose x ∈ G such that h(π(x)) ≤ -r. Since M · |π(g_n)|_H - |g_n|_G →∞, there exists n(r) such that M · |π(g_n)|_H-|g_n|_G>|x|_G+M · |π(x)|_H for all n ≥ n(r). As in Case I this implies that D(g_n,x) = M · d_H(π(g_n), π(x)) for all n ≥ n(r), which in turn implies that F(x) ≤ M · h(π(x)) ≤ - M r. This holds for arbitrary r, so F is unbounded from below in Case II. Case III. ∃ R > 0 | |g_n|_G - M · |π(g_n)|_H | ≤ R for all large enough n. Since |g_n|_D →∞ it must be that |g_n|_G →∞ and |π(g_n)|_H →∞. As in the first two cases we can pass to a subsequence using Lemma <ref>, to assume without loss of generality that b_g_n^G → f ∈(G,d_G) b_π(g_n)^H → h ∈ (H,d_H) and f,h are unbounded from below. Now fix some r ≥ 4R. Choose x ∈ G such that |x|_G = r = -f(x), using Lemma <ref>. By passing to a subsequence we may assume without loss of generality that b_g_n^G(x) = -r for all n. We now have two further cases: Case III(a). M · b_π(g_n)^H(π(x)) ≤r/2 for all large enough n. Set ρ = ⌊3r/4C⌋ and z_n = x^-1 g_n. Since |π(z_n)|_H →∞, using Lemma <ref>, by passing to a further subsequence we can assume without loss of generality that there exists q ∈ H such that |q|_H = ρ = - b_π(z_n)^H(q) for all n. Now, take y ∈ G such that π(y) = q and |y|_G ≤ C |q|_H. Thus, d_H(π(xy), π(g_n)) = d_H(q, π(z_n)) ≤ d_H(π(x), π(g_n)) - ρ . On the one hand we have that d_G(xy,g_n) ≤ d_G(x,g_n) + |y|_G ≤ |g_n|_G - r + C ρ≤ |g_n|_D - r4 , while the other hand we have that for all large enough n, M · d_H(π(xy), π(g_n)) ≤ M · d_H( π(g_n), π(x) ) - M ρ≤ M · |π(g_n)|_H + r2 - M ρ ≤ |g_n|_D - r4 + M . Taking a maximum over these two inequalities, and a limit as n →∞ we arrive at F(xy) ≤ M - r4. 
As this holds for arbitrary r, we obtain that F is unbounded from below in Case III(a). Case III(b). M · b_π(g_n)^H(π(x)) > r/2 for infinitely many n. In this case we have that for infinitely many n, M · d_H(π(g_n),π(x))>|g_n|_G-R+r/2≥ d_G(g_n,x)-R+3r/2>d_G(g_n,x) which implies that D(g_n,x)=M · d_H(π(g_n),π(x))>M · |π(g_n)|_H+r/2≥ |g_n|_D-R+r/2≥ |g_n|_D+r/4 and by moving to the limit we get that F(x) ≥r/4. This holds for arbitrary r, so F is unbounded from above in Case III(b). Cases I, II, III(a), III(b) together complete the proof of the fifth property of Definition <ref>. One example of metrics on G and H ≅ G/N satisfying the assumption of Lemma <ref> is as follows. Fix some finite symmetric generating set S for G and let d_G = d_S be the metric arising from the Cayley graph with respect to S. Note that since π : G → H is a surjective homomorphism, the set π(S) ⊂ H is a finite symmetric generating set for H. Let d_H be the metric corresponding to the Cayley graph with respect to π(S). It is easy to verify that |π(x)|_H ≤ |x|_G. Also, for any q ∈ H, write q = π(s_1) ⋯π(s_n) for n = |q|_H and s_j ∈ S. Then x = s_1 ⋯ s_n satisfies that |x|_G ≤ n = |q|_H and π(x) = q. Let G, π:G → H, d_G,d_H,C,M and D be as in Lemma <ref> (with the same assumptions). Recall that we assume M>C. Then for any h ∈ (H,d_H) there exists f ∈ (G,D) such that f(x) = M · h(π(x)) for all x ∈ G. Specifically, |G.f| ≤ |H.h|. We use the notation |x|_D, |x|_G, |q|_H, b_x^D, b_x^G, b_q^H as in the proof of Lemma <ref>. Let h ∈ (H,d_H), and choose a sequence (q_n)_n in H such that b_q_n^H → h. Note that by Lemma <ref> it must be that |q_n|_H →∞. Since d_H is assumed to be a geodesic metric, by Lemma <ref>, h is unbounded from below. For each n choose x_n ∈ G such that π(x_n) = q_n and |x_n|_G ≤ C|q_n|_H. Since M · |q_n|_H = M · |π(x_n)|_H ≤ |x_n|_D, by passing to a subsequence we may assume without loss of generality that b_x_n^D → f ∈(G,D). We assumed that M > C, so M · |π(x_n)|_H > C |π(x_n)|_H ≥ |x_n|_G, so that |x_n|_D = M · |π(x_n)|_H for all n. Also, for any x ∈ G we have M · d_H(π(x) , π(x_n)) ≥ M · |π(x_n)|_H - M · |π(x)|_H ≥ |x_n|_G + (M-C) |q_n|_H - M · | π(x)|_H ≥ d_G(x,x_n) + (M-C) |q_n|_H - M · |π(x)|_H - |x|_G for all n. This implies that for all large enough n (as soon as (M-C) |q_n|_H > M · |π(x)|_H + |x|_G), we have b_x_n^D(x) = M · b_q_n^H (π(x)). Hence f(x) = M · h(π(x)) for all x ∈ G. Note that for any x ∈ G the function b^D_x is bounded below (by -|x|_D). Since h is unbounded from below, we also have that f=M · h ∘π is unbounded from below. This implies that f ≠ b_x^D for any x ∈ G, so that f ∈ (G,D). This proves the first assertion. For the second assertion, consider the map x.f ↦π(x).h. The identity x.f(y) = M · (h(π(x^-1 y)) - h(π(x^-1))) = M · (π(x).h) (π(y)) . shows that the map is well defined and injective. Hence |G.f| ≤ |H.h|. §.§ Virtual homomorphisms Recall the notion of a virtual homomorphism, Definition <ref>. Also, throughout the text we use the notation g^γ = γ^-1 g γ for group elements g,γ, as well as A^g = { a^g : a ∈ A } for a subset A of a group and an element g. The following lemma is well known, the proof is included for completeness. Let G be a finitely generated infinite group and assume that G admits a virtual homomorphism. Then G has a virtually abelian quotient. A virtual homomorphism implies the existence of a finite index subgroup [G:H] < ∞ and some K H with H/K ≅. Consider the normal subgroup N =∩_g ∈ G K^g. 
First, [G/N:H/N]=[G:H]<∞, so we only need to show that H/N is abelian. Note that for any g ∈ G we have that K^g H. Also, H/K^g ≅ for every g ∈ G. Therefore H'=[H,H] ≤ K^g for every g ∈ G, and thus also H' ≤ N=∩_g ∈ G K^g, which implies that H/N is abelian. We now discuss Cayley graphs, with a focus on virtually Abelian groups. If S is a finite symmetric generating set for a group G, we denote by Γ(G,S) the Cayley graph with respect to S, by d_S the corresponding metric, and by |x|_S = d_S(x,1). We also use the notation Γ(G,S) = (G,d_S). A Cayley graph Γ(G,S) provides a metric d_S which is a geodesic metric. Γ(G,S) always contains infinite geodesics (a sequence γ= (γ_n)_n is an infinite geodesic if every finite subsequence (γ_k, γ_k+1, …, γ_k+n) is a geodesic). It is not too difficult to prove (see <cit.> for a proof) that any infinite geodesic γ converges to a boundary point b_γ_n→γ_∞∈ (G,d_S). (Such points which happen to be limits of geodesics are called Busemann points, and are the subject of a different discussion, see references in <cit.>.) It is also easy to see that if γ is an infinite geodesic, then x.γ given by (x.γ)_n = x γ_n is also a geodesic, which converges to x.γ_∞. Walsh provides a characterization of which geodesics converge to the same boundary point: Let Γ be a Cayley graph. Two infinite geodesics α, β converge to the same boundary point α_∞ = β_∞∈Γ, if and only if there exists an infinite geodesic γ such that |γ∩α| = |γ∩β| = ∞. Here we slightly abuse notation and denote by α∩γ = {α_n : ∃ k ∈ , α_n = γ_k } the set of elements which are in both geodesics α and γ. Consider G = ^d with the standard generating set S = ± e_1, …, ± e_d (the standard basis and their inverses). The space (G ,d_S) in this case is composed of all functions of the form h_α_1, … , α_d(z_1 ,…, z_d) = ∑_j=1^d h_α_j(z_j) where α_1 , …, α_d ∈∪{ - ∞, ∞} h_α : → h_α(z) = |α-z| - |α| if α∈ , -z if α = ∞ , z if α = - ∞ For α⃗∈^d we have that h_α⃗ = b_α⃗. So h_α_1,…, α_d∈ (G,d_S) if and only if there exists some 1 ≤ j ≤ d such that α_j ∈{- ∞, ∞}. The action of the group on (G,d_S) is given by z⃗ . h_α_1,…, α_d = h_α_1 + z_1, …, α_d + z_d, with the convention ∞ + z = ∞ and -∞ + z = -∞. Note that if α_j ∈{- ∞ , ∞} for all 1 ≤ j ≤ d, then h_α_1,…, α_d is a fixed point in the boundary. See Figure <ref>. Specifically in this case, all boundary points are limits of geodesics. For example, the point h_∞, …, ∞ is obtained from the limit of any infinite geodesic whose coordinates all tend to ∞. γ_dn+j = n 1⃗ + e_1 + ⋯ + e_j ∈^d for n ∈ and 0 ≤ j < d, defines such a geodesic. See <cit.> for much more on metric-functional boundaries of Abelian groups. Let G be a finitely generated infinite virtually abelian group, and let S be a finite symmetric generating set for G. There exists a finite index subgroup N ≤ G, [G:N] < ∞, such that N ≅^d for some d ≥ 1, and there exists a finite symmetric generating set U for N such that the following hold: * |U|=2d and Γ(N,U) is the standard hypercubic lattice. * S ∩ U=∅ * Denoting T=S ⊎ U, we have that |x|_U = |x|_T for any x ∈ N. In particular, every geodesic in Γ(N,U) is a geodesic in Γ(G,T). Since G is finitely generated, infinite and virtually abelian, there exists a finite index normal subgroup H ≅ℤ^d in G, for d ≥ 1 (see <cit.> for a proof). Let R be a finite set of representatives for the cosets of H such that 1 ∈ R; that is G = ⊎_r ∈ R Hr. For every g ∈ G let x_g ∈ H and r_g ∈ R be the unique elements such that g=x_g r_g. 
Fix an isomorphism π:H →ℤ^d, and let b_1, … ,b_d ∈ H be such that { e_j=π(b_j) : 1 ≤ j ≤ d } is the standard basis for ^d. For x ∈ H and p ∈ [1,∞] denote x_p=π(x)_p and also denote π_j(x)=π(x)_j. Thus, for every x ∈ H we have: π(x)=b_1^π_1(x)· ... · b_d^π_d(x) Note that B={b_1^± 1,…,b_d^± 1} is a finite symmetric generating set for H and |x|_B=x_1 is the distance to the identity element in Γ(H,B). Define: F={(x_s)^g , x_rr' | s ∈ R ∪ S , r,r' ∈ R , g ∈ G} . Since H is abelian, for every x ∈ H it holds that H ≤ C_G(x)={g ∈ G | x^g=x}, so [G:C_G(x)] < ∞ for every x ∈ H. Thus, every x ∈ H has a finite orbit under conjugation, by the orbit-stabilizer theorem. Hence, F is a finite set (because R,S are finite). So we can define M=max{x_∞ | x ∈ F} . Step I. First, we prove by induction on n that for any r_1, …, r_n ∈ R such that r_1 ⋯ r_n ∈ H it holds that r_1 ⋯ r_n _∞ ≤ M n . For n=1 this is obvious as r_1 ∈ H ∩ R implies that r_1=1 so || r_1 ||_∞ = 0. Now, for n ≥ 1, if r_1 ⋯ r_n+1∈ H then we can write y = x_r_1 r_2 and ρ = r_r_1 r_2 (recall the decomposition g=x_g r_g from above), so that ρ r_3 ⋯ r_n+1 = y^-1 r_1 ⋯ r_n+1∈ H. By induction, we conclude that r_1 ⋯ r_n+1_∞≤ y _∞ + ρ r_3 ⋯ r_n+1_∞≤ M + Mn = M(n+1) proving (<ref>). Step II. Next, we prove that for every x ∈ H we have x_∞ ≤ 2M · |x|_S . Indeed, let x ∈ H and write x=s_1 ⋯ s_n for n=|x|_S and s_j ∈ S. Denote y_j=x_s_j and r_j=r_s_j for every 1 ≤ j ≤ n (recall the decomposition g=x_g r_g from above). Denote q_1=1, q_j+1= (r_1 ⋯ r_j)^-1 for 1 ≤ j ≤ n, z=x_(q_n+1)^-1 and r=r_(q_n+1)^-1. With this notation we have: x=y_1r_1 ⋯ y_nr_n=(y_1)^q_1⋯ (y_n)^q_n· (q_n+1)^-1 =(y_1)^q_1⋯ (y_n)^q_n· z · r . Since x ∈ H, it follows that r=1, so x_∞ ≤∑_j=1^n (y_j)^q_j_∞+z_∞≤ Mn+z_∞ . Since r=1, we have that H ∋ z = zr = (q_n+1)^-1 = r_1 ⋯ r_n. Using (<ref>) z _∞ = r_1 ⋯ r_n _∞≤ M n = M |x|_S . Together with (<ref>), this completes a proof of (<ref>). Step III. Our next step is to show that for any x ∈ H we have |x|_B ≤ 2dM · |x|_S (recall the symmetric generating set B = { b_1^± 1 , …, b_d^± 1} from above). Indeed, for any x ∈ H |x|_B = || x ||_1 = ∑_j=1^d |π_j(x)| ≤ d ·π(x)_∞ = d ·x_∞ , which combined with (<ref>) proves (<ref>). Step IV. Now fix some integer K>2Md and let N be the subgroup generated by U = { b_1^± K, … ,b_d^± K}. Clearly |U|=2d, N ≅ℤ^d, [G:N] < ∞ and Γ(N,U) is the standard hypercubic lattice. We want to show that S ∩ U=∅ and that for T=S ⊎ U it holds that |x|_U=|x|_T for every x ∈ N. First, for any 1 ≠ x ∈ N, by (<ref>), using K > 2dM, we have that |x|_S > 1/K· |x|_B=∑_j=1^d 1/K|π_j(x)| =|x|_U . So S ∩ N=∅, and in particular S ∩ U=∅. Set T=S ⊎ U. Since U ⊂ T we have that |x|_T ≤ |x|_U for any x ∈ N. We will use the fact that if s ∈ S then s _1 ≤ d s _∞≤ 2dM and if u ∈ U then u _1 = K. Let x ∈ N and write x = t_1 ⋯ t_n for n = |x|_T and t_j ∈ T for all 1 ≤ j ≤ n. Let J = #{ 1 ≤ j ≤ n : t_j ∈ S }. We have that K |x|_T ≤ K |x|_U = x _1 ≤∑_j=1^n t_j _1 ≤ 2dM · J + K · (n-J) = K |x|_T - J(K-2dM) . This implies that J=0, so that t_j ∈ U for all 1 ≤ j ≤ n, and hence |x|_U ≤ n = |x|_T ≤ |x|_U. Let G be a finitely generated infinite virtually abelian group. There exists a Cayley graph Γ(G,T) of G with a finite orbit in ∂Γ(G,T). By Lemma <ref>, we can choose N ≤ G of finite index [G:N] < ∞ such that N ≅^d, and we can also find U ⊆ T two finite symmetric sets such that G=⟨ T ⟩, N=⟨ U ⟩ and such that every geodesic in Γ(N,U) is also a geodesic in Γ(G,T). Also by Lemma <ref> the graph Γ(N,U) is just the standard hypercubic lattice ^d. 
It is well known that ∂Γ(N,U) contains a fixed point under the action of N. See Example <ref>. We use γ to denote a geodesic in Γ(N,U) which converges to h ∈Γ(N,U) such that x.h=h for all x ∈ N. By Lemma <ref>, γ is also a geodesic in Γ(G,T), and thus in the space (G,d_T), γ converges to some point f ∈Γ(G,T). Fix any x ∈ N. We know that x.h = h, implying that the geodesic x.γ converges to h as well. By Proposition <ref>, there exists some infinite geodesic α in Γ(N,U) such that |α∩γ| = |α∩ x.γ| = ∞. By Lemma <ref>, α is also a geodesic in Γ(G,T). Using Proposition <ref> in the graph Γ(G,T) we obtain that in the space (G,d_T), the geodesics γ, x.γ converge to the same boundary point f = x.f ∈Γ(G,T). Since this holds for any x ∈ N we get that N is contained in the stabilizer of f. N ≤{ x ∈ G : x.f=f }. Since [G:N] < ∞, we get that this stabilizer has finite orbit, and thus, |G.f| < ∞. This is the required finite orbit. We now move to prove Theorem <ref>, stating that G admits a virtual homomorphism if and only if there exists a Banach metric on G with a finite orbit in the boundary. As mentioned after Definition <ref>, if h ∈ (G,d) has a finite orbit, then h is a virtual homomorphism. For the other direction, we assume that G admits some virtual homomorphism (G is virtually indicable). By Lemma <ref>, there exists a surjective homomorphism π : G → H such that H is virtually Abelian. Fix some Cayley graph metric d_G on G, with S the corresponding finite symmetric generating set. Note that π(S) is a finite symmetric generating set for H. By Corollary <ref>, there exists a finite symmetric generating set T for the group H such that Γ(H,T) contains a finite orbit. That is, for some h ∈Γ (H,T) we have |H.h| < ∞. Let d_H = d_T. Since (H,d_H) and (H,d_π(S)) are two Cayley graphs of the same group, there is some C>0 such that d_π(S)(p,q) ≤ C d_H(p,q) for all p,q ∈ H. As explained in Example <ref>, this shows that G, H, π, d_G, d_H satisfy the assumptions of Lemmas <ref> and <ref>, with this constant C>0. Hence, by taking some integer M> C, using Lemmas <ref> and <ref>, we have a Banach metric D on G and some f ∈ (G,D) such that f(x) = M · h(π(x)) for all x ∈ G. Specifically, |G.f| ≤ |H.h| < ∞, providing us with the required finite orbit. § FINITE INDEX SUBGROUPS In this section we prove Theorem <ref>, stating that a Banach metric induces a Banach metric on a finite index subgroup. Let d_G be a Banach metric on a group G. Let H ≤ G be a subgroup of finite index [G:H] < ∞. Let d(x,y) = d_G(x,y) for all x,y ∈ H, which is the metric on H as a subspace of G. The first three properties in Definition <ref> are immediate to verify. The fourth property follows from the fact that quasi-isometry is an equivalence relation, and the fact that any Cayley graph on H is quasi-isometric to a Cayley graph of G. The identity map on H into G provides a quasi-isometry from (H,d) to (G,d_G). This implies that (H,d) is quasi-isometric to a Cayley graph of G, and thus to a Cayley graph of H. To prove the fifth property in Definition <ref>, choose any h ∋ (H,d). Then b_x_n|_H → h for some sequence x_n ∈ H. By perhaps passing to a subsequence, we may assume without loss of generality that b_x_n→ f ∈ (G,d_G), and we find that f |_H = h. If f ∈ (G,d_G) then it is unbounded by the fifth property in Definition <ref>. Since d_G is proper, any b_x is also unbounded. Hence f is unbounded in any case. So, we can find a sequence (g_n)_n in G such that |f(g_n) | →∞. 
Since [G:H] < ∞ the sequence (g_n)_n must be in some coset of H infinitely many times, implying that by passing to a subsequence we may assume that g_n = y_n r for y_n ∈ H and fixed r ∈ G. By the Lipschitz property we have that |f(y_n) - f(g_n)| ≤ |r| for all n, so that |h(y_n)| = |f(y_n)| ≥ |f(g_n)| - |r| →∞, implying that h is unbounded. § NO DETECTION IN THE FREE GROUP In this section we prove Theorem <ref>, stating that in any Cayley graph of a non-Abelian free group, there does not exists a finite orbit in the boundary. Fix a free generating set A for the free group _d. That is, each non-trivial element 1 ≠ g ∈_d has a unique reduced word in A ∪ A^-1 representing it. We denote |g|_A = |g|_A ∪ A^-1 and d_A = d_A ∪ A^-1. For a ∈ A ∪ A^-1 define B_a = { g ∈_d : g = a_1 ⋯ a_|g|_A , a_1=a } and E_a = { g ∈_d : g = a_1 ⋯ a_|g|_A , a_|g|_A=a } . (B,E for begin and end respectively.) Note that _d ∖{1} = _a ∈ A ∪ A^-1 B_a = _a ∈ A ∪ A^-1 E_a . Let S be some finite symmetric generating set of the free group _d on d ≥ 2 generators. Assume that A ⊂ S is a free generating set. Define M = max_s ∈ S |s|_A. Let a ∈ A ∪ A^-1, and suppose that g ∈ E_a^-1 and y ∉B_a are such that |y|_A ≥ (2M+1)M. Then |gy|_S ≥ |g|_S+1. Write g=a_1 ⋯ a_n and y=b_1 ⋯ b_k for n=|g|_A and k=|y|_A, with a_j , b_i ∈ A ∪ A^-1 for all 1 ≤ j ≤ n and 1 ≤ i ≤ k. Also write gy= s_r ⋯ s_1 where r=|gy|_S and s_j ∈ S for all 1 ≤ j ≤ r. For 1 ≤ j ≤ r define γ_j = gy s_1^-1⋯ s_j^-1 and γ_0 = g y. For each 0 ≤ j ≤ r-1 write the reduced word γ_j = c_j,1⋯ c_j,n_j for n_j = |γ_j|_A and c_j,i∈ A ∪ A^-1 for all j,i. One notes that n_0 = n+k and c_0,i = a_i for 1 ≤ i ≤ n and c_0,i = b_i-n for n+1 ≤ i ≤ n+k. Define ℓ = min{ 1 ≤ j ≤ r : ∃ 1 ≤ i ≤ n , c_j,i≠ a_i } . In words, ℓ is the first time that γ_j = g y s_1^-1⋯ s_j^-1 has cancelled out enough elements from γ_0 = gy = a_1 ⋯ a_n · b_1 ⋯ b_k on the right, so that some of the a_i's have been changed. Since |γ_ℓ|_A = |γ_ℓ-1 s_ℓ^-1 |_A ≥ |γ_ℓ-1 |_A - M, by the definition of ℓ we have that γ_ℓ = a_1 ⋯ a_q · d_1 ⋯ d_p where this is the reduced word in A ∪ A^-1 representing γ_ℓ, and q ≥ n-M and p ≤ M. Since A ⊂ S, distances in S are bounded by distances in A ∪ A^-1, so that d_S(g,γ_ℓ) ≤ | d_p^-1⋯ d_1^-1· a_q+1⋯ a_n |_S ≤ p+n-q ≤ 2M . Going back to the fact that |γ_j|_A = |γ_j-1 s_j^-1 |_A ≥ |γ_j-1|_A - M, for all 1 ≤ j ≤ r, we find that it cannot be that k > ℓ M. This implies that |g|_S ≤ 2M + |γ_ℓ |_S = 2M + |s_r ⋯ s_ℓ+1 |_S ≤ 2M + r-ℓ≤ |gy|_S + 2M - k/M . If k = |y|_A ≥ (2M+1)M we have |g|_S+1 ≤ |gy|_S as required. Let _d be a free group on d ≥ 2 generators, with a free generating set A. Then, for every a ∈ A ∪ A^-1, any 1 ≠ x ∈_d and all ℓ∈, there exists t ∈_d such that |t^-1xt|_A > ℓ, and such that t^-1xt ∉B_a and t^-1x^-1t ∉B_a. The proof is straightforward. Given a ∈ A ∪ A^-1, 1 ≠ x ∈_d and ℓ∈, write the reduced word x = a_1 ⋯ a_n for n = |x|_A and a_j ∈ A ∪ A^-1 for all 1 ≤ j ≤ n. Since d ≥ 2, there exists some b ∈ A ∪ A^-1∖{ a^-1 , a_1 , a_n^-1}. Choose t = b^ℓ. Note that t^-1 x t = b^-ℓ a_1 ⋯ a_n b^ℓ and t^-1 x^-1 t = b^-ℓ a_n^-1⋯ a_1^-1 b^ℓ are reduced words, so that |t^-1 xt |_A = 2ℓ+n and t^-1 x t , t^-1 x^-1 t ∉B_a. Let S be some finite symmetric generating set of _d the free group on d ≥ 2 generators. Let f ∈Γ(_d,S) and let K = K_f = { x ∈_d : ∀ h ∈_d.f , x.h=h } be the pointwise stabilizer of the orbit of f. Then, K = {1}. Assume for a contradiction that K=K_f ≠{1} for some f ∈Γ(_d,S). Let g_n ∈_d be a sequence such that b_g_n→ f. Let 1 ≠ x ∈ K. Let A ⊂ S be a free generating set. 
For every n ∈ write the reduced words g_n = a_1^(n)⋯ a_|g_n|_A^(n), with a_j^(n)∈ A ∪ A^-1 for all 1 ≤ j ≤ |g_n|_A. Since A ∪ A^-1 is a finite set, by passing to a subsequence we may assume without loss of generality that a_1^(n) = a ∈ A ∪ A^-1 for all n. Recall M = max{ |s|_A : s ∈ S }. By Lemma <ref>, there exists t ∈_d such that |t^-1xt|_A > (2M+1)M, and such that t^-1xt ∉B_a and t^-1x^-1t ∉B_a. Set y = t^-1 x t. By Lemma <ref>, since g_n^-1∈ E_a^-1 and y, y^-1∉B_a, we have that |y g_n|_S = |g_n^-1 y^-1 |_S ≥ |g_n^-1 |_S + 1 = |g_n|_S + 1 , and similarly |y^-1 g_n|_S ≥ |g_n|_S + 1 . Taking n →∞ we obtain that f(y) ≥ 1 and f(y^-1) ≥ 1. However, since K _d we have that y ∈ K, so 1 ≤ f(y) = (y.f)(y) = - f(y^-1) ≤ -1 a contradiction! We conclude that K = {1} as required. The proof of Theorem <ref> follows easily: If f ∈Γ(_d,S) has a finite orbit |_d.f| < ∞, then K = { x ∈_d : ∀ h ∈_d.f , x.h=h } = ⋂_h ∈_d.f{ x ∈_d : x.h=h } is a finite index subgroup as a finite intersection of finite index subgroups. But then it cannot be that K = {1}, contradicting Lemma <ref>. 10 arosio2024horofunction L. Arosio, M. Fiacchi, S. Gontard, and L. Guerini. The horofunction boundary of a Gromov hyperbolic space. Mathematische Annalen, 388(2):1163–1204, 2024. Bass72 H. Bass. The degree of polynomial growth of finitely generated nilpotent groups. Proceedings of the London Mathematical Society, 3(4):603–614, 1972. BT24 C. Bodart and K. Tashiro. Horofunctions on the Heisenberg and Cartan groups. arXiv preprint arXiv:2407.11943, 2024. busemann2005geometry H. Busemann. The geometry of geodesics. Courier Corporation, 2005. advancedproblems L. Carlitz, A. Wilansky, J. Milnor, R. Struble, N. Felsinger, J. Simoes, E. Power, R. Shafer, and R. Maas. Advanced problems: 5600-5609. The American Mathematical Monthly, 75(6):685–687, 1968. Gromoll J. Cheeger and D. Gromoll. The splitting theorem for manifolds of nonnegative ricci curvature. Journal of Differential Geometry, 6(1):119–128, 1971. develin M. Develin. Cayley compactifications of abelian groups. Annals of Combinatorics, 6(3):295–312, 2002. Grigorchuk80 R. I. Grigorchuk. Burnside problem on periodic groups. Funktsional'nyi Analiz i ego Prilozheniya, 14(1):53–54, 1980. Grigorchuk84 R. I. Grigorchuk. Degrees of growth of finitely generated groups, and the theory of invariant means. Izvestiya Rossiiskoi Akademii Nauk. Seriya Matematicheskaya, 48(5):939–985, 1984. Grigorchuk90 R. I. Grigorchuk. On growth in group theory. In Proceedings of the International Congress of Mathematicians, volume 1, pages 325–338, 1990. Gromov81 M. Gromov. Groups of polynomial growth and expanding maps. Publications Mathématiques de l'IHÉS, 53:53–78, 1981. gromov1981hyperbolic M. Gromov. Hyperbolic manifolds, groups and actions. In Riemann surfaces and related topics: Proceedings of the 1978 Stony Brook Conference (State Univ. New York, Stony Brook, NY, 1978), volume 97, pages 183–213, 1981. Guivarch73 Y. Guivarc'h. Croissance polynomiale et périodes des fonctions harmoniques. Bulletin de la société mathématique de France, 101:333–379, 1973. karlsson2001non A. Karlsson. Non-expanding maps and Busemann functions. Ergodic Theory and Dynamical Systems, 21(5):1447–1457, 2001. karlsson2008ergodic A. Karlsson. Ergodic theorems for noncommuting random products. Lecture notes available on the author’s website, 2008. karlsson2021linear A. Karlsson. From linear to metric functional analysis. Proceedings of the National Academy of Sciences, 118(28):e2107069118, 2021. karlsson2021hahn A. Karlsson. 
Hahn-Banach for metric functionals and horofunctions. Journal of Functional Analysis, 281(2):109030, 2021. karlsson2024metric A. Karlsson. A metric fixed point theorem and some of its applications. Geometric and Functional Analysis, pages 1–26, 2024. Milnor68 J. Milnor. Growth of finitely generated solvable groups. Journal of Differential Geometry, 2(4):447–449, 1968. Gabor G. Pete. Probability and geometry on groups. available on author's website. RY23 L. Ron-George and A. Yadin. Groups with finitely many Busemann points. accepted, Groups, Geometry and Dynamics, 2023. arXiv:2305.02303. Walsh C. Walsh. The action of a nilpotent group on its horofunction boundary has finite orbits. Groups, Geometry, and Dynamics, 5(1):189–206, 2011. Wolf68 J. A. Wolf. Growth of finitely generated solvable groups and curvature of Riemannian manifolds. Journal of Differential Geometry, 2(4):421–446, 1968. HFbook A. Yadin. Harmonic Functions and Random Walks on Groups. Cambridge University Press, 2024.
http://arxiv.org/abs/2408.10934v1
20240820151711
SDI-Net: Toward Sufficient Dual-View Interaction for Low-light Stereo Image Enhancement
[ "Linlin Hu", "Ao Sun", "Shijie Hao", "Richang Hong", "Meng Wang" ]
cs.CV
[ "cs.CV", "cs.AI", "eess.IV" ]
SDI-Net: Toward Sufficient Dual-View Interaction for Low-light Stereo Image Enhancement LinLin Hu2, Ao Sun2, Shijie Hao12, Richang Hong2, Meng Wang2 2Hefei University of Technology August 26, 2024 ======================================================================================================== § ABSTRACT Currently, most low-light image enhancement methods only consider information from a single view, neglecting the correlation between cross-view information. Therefore, the enhancement results produced by these methods are often unsatisfactory. In this context, there have been efforts to develop methods specifically for low-light stereo image enhancement. These methods take into account the cross-view disparities and enable interaction between the left and right views, leading to improved performance. However, these methods still do not fully exploit the interaction between left and right view information. To address this issue, we propose a model called Toward Sufficient Dual-View Interaction for Low-light Stereo Image Enhancement (SDI-Net). The backbone structure of SDI-Net is two encoder-decoder pairs, which are used to learn the mapping function from low-light images to normal-light images. Among the encoders and the decoders, we design a module named Cross-View Sufficient Interaction Module (CSIM), aiming to fully exploit the correlations between the binocular views via the attention mechanism. The quantitative and visual results on public datasets validate the superiority of our method over other related methods. Ablation studies also demonstrate the effectiveness of the key elements in our model. Low-light stereo image enhancement, cross-view sufficient interaction module, dual-view interaction, pixel and channel attention § INTRODUCTION Low-light image enhancement aims to obtain images with good visibility from low-light images, which has been a main low-level vision task in both academic and industrial communities. The research on low-light image enhancement has made great progress in the past decades. Conventional non-learning-based methods <cit.> for low-light enhancement primarily concentrate on brightness adjustment and contrast enhancement based on prior knowledge such as the Retinex theory. However, these approaches frequently yield suboptimal enhancement results, often resulting in less satisfying visual quality. In comparison to the conventional methods, low-light image enhancement methods based on deep neural network <cit.> have achieved significant advancements in recent years. These methods employ convolutional neural networks (CNNs) or generative adversarial networks (GANs) as the backbone framework to learn the mapping function from low-light images to normal light images. The above-mentioned models are oriented for monocular images. In another word, the input of these models is one single image. In recent years, the emergence of stereo cameras has sparked interest in stereo vision across diverse fields, which provides richer information than monocular image processing systems. For example, as for binocular images, there exist sufficient horizontal discrepancies between the left view and the right view of a same object. Based on this characteristic, many works have been dedicated to developing stereo image restoration techniques for enhancing image visual quality<cit.>, such as stereo super-resolution, stereo deblurring, and stereo dehazing. 
The primary challenge faced by stereo image restoration methods is how to fully leverage the correlations between the two views, specifically, how to effectively associate the left and the right views for obtaining more effective feature representation. For example, for the stereo image super-resolution task, iPASSENet<cit.> aims to capture the correspondence between the left and the right views through modeling the parallax attention mechanism. As a typical image restoration task, low-light image enhancement is also striving for advancements by incorporating stereo vision<cit.>. In comparison to the monocular (single-image) low-light enhancement methods, low-light stereo image enhancement (LLSIE) methods obtain better performance in general. However, due to the problem that low-light images suffer from low contrast and imaging noise, the current LLSIE research is still limited in well restoring image details. For example, as shown in Fig.<ref>, the result of LLSIE method DCI-Net<cit.> also has restoring errors as large as the single-image low-light enhancement method MIRNet<cit.>. One of the main reasons is that the interaction between the left and right views of the current LLSIE methods is not sufficient. To address this problem, we propose SDI-Net aiming at fully exploring dual-view interaction for low-light stereo image enhancement. The proposed model is able to restore the illumination and details of low-light images through sufficient interaction between the left and right views. We develop an intermediate interaction module named Cross-View Sufficient Interaction Module (CSIM) to explore and strengthen the interaction between features of both views. The first part of CSIM is the Cross-View Attention Interaction Module (CAIM), which is adept at calculating the disparity between the left and right views and aligning them with high precision. The second part comprises a Pixel and Channel Attention Block (PCAB), designed to differentially enhance areas of varying brightness levels, thereby restoring richer details and textures. The proposed SDI-Net is highlighted in the following aspects: * SDI-Net adopts two identical UNets as its backbone structure. Each UNet acts as the image encoder and decoder for either of the left-view image and the right-view image. This symmetry model structure ensures the two encoder-decoder branches have the same learned feature representations at different levels, facilitating sufficient interactions between the learning pipelines of the two views. * Between the encoders and decoders of SDI-Net, we design the Cross-View Sufficient Interaction Module (CSIM) to make the learned feature representations fully interact with each other at different aspects. * SDI-Net has superior performance on the Middlebury dataset and the Holopix50k dataset over other low-light stereo image enhancement methods, as well as several representative single-image low-light image enhancement methods. The rest of this paper is organized as follows. Section II briefly introduces the related works. In Section III, we describe the overview and details of the proposed SDI-Net. Section IV reports the experimental results on two public datasets. Section V finally concludes the paper. § RELATED WORKS §.§ Single-image low-light enhancement methods The traditional single-image low-light enhancement methods mainly include the histogram equalization and retinex-based methods. 
Histogram equalization methods directly adjust the dynamic range of low-light images, thereby enhancing image contrast<cit.>. Retinex-based methods aim to decompose a low-light image into reflection and illumination layers, and adjust the brightness of the illumination layer to achieve the low-light enhancement<cit.>. These methods are highlighted for their clear physical interpretability. However, they are limited in its capability of fitting the complex mapping function between low-light and normal-light images. By leveraging deep learning models, learning-based low-light enhancement methods well solve this limitation and greatly improve the performance. These methods are able learn end-to-end appearance mapping functions between low-light inputs and normal-light outputs<cit.>. Recently, models with limited or no supervision also emerge as popular low-light enhancement methods <cit.> due to their lightweight and fast characteristics. However, they are prone to introduce over-enhancement and obvious artifacts into enhancing results. Despite of the success of single-image low-light enhancement methods, their information source is one single input all along. In contrast, methods taking stereo images as inputs pave way for utilizing richer information in this task, which have the potential of achieving better enhancing performance. §.§ Stereo image restoration methods Recently, stereo image restoration methods have begun to attract more attention, and obtained better performance than single-image restoration methods, such as the tasks of super-resolution, deblurring, deraining, dehazing, and low-light enhancement. In the stereo image super-resolution method, Jeoh et al.<cit.> propose the first stereo super-resolution network to compensate for the parallax between stereo images by shifting. Wang et al.<cit.> use a parallax attention mechanism to merge similar features in two views to explore pixel correspondence. In the stereo deblurring method, Zhou et al.<cit.> align features by estimating the difference between the left and right views. Li et al.<cit.> propose a new stereo image deblurring model by exploring two-pixel alignment. In the stereo dehazing method, Pang et al.<cit.> use a stereo transformation module to explore the correlation between binocular images. As for our low-light task, DVENet<cit.> is regarded as the representative method, which is to enhance the network by integrating multi-scale dual-view features, and fuse the features of a single image under the guidance of the light map. Recently, DCI-Net<cit.> tries to further exploits the interaction between the left and right views by considering the spatial connection between multiple scales. In this paper, we focus on the low-light enhancement task. We propose SDI-Net to fully incorporate the information from the left and right views via the attention mechanism imposed on different aspects, such as the view level, the channel level and the pixel level. We introduce the technical details of SDI-Net in the following section. § METHOD The SDI-Net model takes low-light stereo image pairs as inputs, and utilizes two identical UNet branches as the backbone structure to learn the mapping from low-light stereo images to normal-light stereo images. Between the encoders and the decoders of the two branches, we introduce Cross-View Sufficient Interaction Module (CSIM) to refine the learned image feature representation. In the following, we describe the overall framework, the CSIM module, and the employed loss functions. 
§.§ Overall framework The overall framework of SDI-Net is shown in Fig.<ref>, which can be divided into three stages: Feature Encoder, Cross-View Sufficient Interaction, and Feature Decoder. Feature Encoder. First, we take the left-view low-light image I_l∈ℝ^H × W × 3 and the right-view low-light image I_r∈ℝ^H × W × 3 as the inputs of the encoders. Based on the encoders, the feature maps F_l∈ℝ^H/4×W/4× 3 and F_r∈ℝ^H/4×W/4× 3 are learned. This process can be formulated as: F_l = FE(I_l),F_r = FE(I_r) where FE( · ) means feature encoder. The encoders comprise several convolutional layers followed by down-sampling operations, aiming to initially capture and encode the local and global information of each view. Of note, the learned network weights are shared between the two branches. Then, we send the obtained feature information to the next stage of cross-view sufficient interaction. Cross-View Sufficient Interaction Module. The CSIM module stays between the encoders and the decoders of the two UNet branches, aiming to promote sufficient interaction between the features learned from the two views. The CSIM module is composed of the Cross-View Attention Interaction Module (CAIM) part and the Pixel and Channel Attention Block (PCAB) part. The former part focuses on exploring the mutual attention of F_l and F_r at the view level. Then, the later part concentrates on exploiting the attention mechanism at the channel level and the pixel level. Specifically, channel attention (CA) helps to emphasize informative channels, while pixel attention (PA) highlights relevant spatial locations. In this way, the CSIM module enables sufficient interaction between the two views, obtaining the mutually fused features SF_l∈ℝ^H/4×W/4× 3 and SF_r∈ℝ^H/4×W/4× 3: SF_l,SF_r = CSIM(F_l,F_r) where CSIM( · ) represents the core part of our SDI-Net. Then, SF_l and SF_r are sent to the next feature decoding stage for image reconstruction. Feature Decoder. In this stage, the feature decoders reconstruct the enhanced stereo images based on the enriched feature representation SF_l and SF_r. The two feature maps are respectively fed into the decoders of the two UNet branches. This process involves two up-sampling operations, followed by convolution layers to recover the spatial dimensions and generate the final enhanced stereo images. The first up-sampling operation is to send SF_l∈ℝ^H/4×W/4× 3 and SF_r∈ℝ^H/4×W/4× 3 into the convolutional layer through 8 residual blocks and one 3*3 up-sampling convolutional layer, respectively, to obtain the first recovery of the middle layer features M_l∈ℝ^H/2×W/2× 3 and M_r∈ℝ^H/2×W/2× 3, and then fuse the feature information extracted by the first layer of the Feature Encoder on the spatial domain through the concatenation operation. The second up-sampling operation is to process M_l∈ℝ^H/2×W/2× 3 and M_r∈ℝ^H/2×W/2× 3 identically to obtain E_l∈ℝ^H × W × 3 and E_r∈ℝ^H × W × 3. This process can be formulated as follows: E_l = FD(SF_l),E_r = FD(SF_r) where FD( · ) stands for the feature decoders. Similar as the encoding stage, the learned weights of the decoders are shared between the two branches. To train the whole model, we use the L1 loss and the FFT loss to achieve the low-light enhancement task. §.§ Cross-View Sufficient Interaction Module (CSIM) The CSIM module is composed of two parts, i.e. Cross-View Attention Interaction Module (the left part of Fig.<ref>) and Pixel and Channel Attention Block (the right part of Fig.<ref>), which are described in the following. 
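Before the two parts of the CSIM are described, the three-stage dataflow above can be summarized in a short sketch. This is a minimal illustration written for this text, not the authors' released code: layer widths are arbitrary, the residual blocks of the decoder are omitted, and the CSIM is left as a pluggable module (sketched in the next subsection); weight sharing between the two views is obtained simply by reusing the same encoder and decoder instances.

```python
# Minimal sketch of the SDI-Net dataflow (not the authors' implementation).
# Inputs are (B, 3, H, W); the deepest encoder features live at H/4 x W/4.
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.down1 = nn.Conv2d(ch, 2 * ch, 3, stride=2, padding=1)      # -> H/2 x W/2
        self.down2 = nn.Conv2d(2 * ch, 4 * ch, 3, stride=2, padding=1)  # -> H/4 x W/4

    def forward(self, x):
        f1 = self.stage1(x)                    # kept for the skip connection in the decoder
        f2 = torch.relu(self.down1(f1))
        f4 = torch.relu(self.down2(f2))
        return f1, f2, f4

class SharedDecoder(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.up1 = nn.ConvTranspose2d(4 * ch, 2 * ch, 4, stride=2, padding=1)  # -> H/2
        self.fuse1 = nn.Conv2d(4 * ch, 2 * ch, 3, padding=1)
        self.up2 = nn.ConvTranspose2d(2 * ch, ch, 4, stride=2, padding=1)      # -> H
        self.fuse2 = nn.Conv2d(2 * ch, ch, 3, padding=1)
        self.out = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, sf, f2, f1):
        x = torch.relu(self.up1(sf))
        x = torch.relu(self.fuse1(torch.cat([x, f2], dim=1)))   # fuse with encoder features
        x = torch.relu(self.up2(x))
        x = torch.relu(self.fuse2(torch.cat([x, f1], dim=1)))
        return self.out(x)

class SDINetSketch(nn.Module):
    def __init__(self, ch=32, csim=None):
        super().__init__()
        self.encoder = SharedEncoder(ch)   # one instance -> weights shared between views
        self.decoder = SharedDecoder(ch)
        self.csim = csim                   # cross-view interaction, sketched below

    def forward(self, left, right):
        l1, l2, l4 = self.encoder(left)
        r1, r2, r4 = self.encoder(right)
        sf_l, sf_r = self.csim(l4, r4) if self.csim is not None else (l4, r4)
        return self.decoder(sf_l, l2, l1), self.decoder(sf_r, r2, r1)
```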
Cross-View Attention Interaction Module (CAIM). CAIM is used to extract the correlation information at the view level. The correlation computing process is based on Scaled Dot Product Attention<cit.>, which utilizes all the keys to compute the dot products of the query, and applies the softmax function to obtain the weights of the values: Attention(Q,K,V) = softmax (QK^T/√(C))V where Q represents the query matrix, K represents the key matrix, V represents the value matrix, and C is the channel numbers. In our application, we use Q_l=Conv(LN(F_l)) and K_r=Conv(LN(F_r)) to represent the features of the two views, where LN( · ) represents layer normalization, and Conv( · ) represents a 1×1 convolution operation. The refined features can be obtained as: F_r → l = Attention(Q_l,K_r,Conv(F_l))+F_l F_l → r = Attention(K_r,Q_l,Conv(F_r))+F_r From above process, the computing process for obtaining F_r → l and F_l → r fully considers the interaction from each other view. Pixel and Channel Attention Block (PCAB). PCAB is composed on several stacked feature enhancing blocks for further refining F_r → l and F_l → r. In the first feature enhancing block (FEB), we initially pre-process F_r → l and F_l → r based on the following procedure: R_l = Conv(G(Conv(F_r → l)) + F_r → l) R_r = Conv(G(Conv(F_l → r)) + F_l → r) where Conv( · ) refers to a 3×3 convolution operation and G( · ) stands for GeLU activation function. Then, the channel attention function CA( · ) and the pixel attention function PA( · ) <cit.> are introduced to further exploit the feature correlation at the channel level and the pixel level: R_l^*= PA(CA(R_l)), R_r^* =PA(CA(R_r)) The channel attention (CA( · )) learns the importance of each feature channel, aiming to highlight more useful information. The pixel attention (PA( · )) dynamically adjust the importance of each pixel, which aiming to enhance important details and texture features while reducing the impact from noise. In the following, the above process is repeated by stacking multiple FEBs in a sequential way. Of note, instead of F_r → l and F_l → r, the inputs of the blocks other than the first FEB are the previous neighboring FEB's outputs. In our research, we empirically set the number of FEBs as 10 to build the PCAB module. At the end of PCAB, we combine the gradually refined R_l^* and R_r^* with its original feature representation F_l and F_r using a weighted element-wise addition to obtain SF_l and SF_r, which is the final feature representation learned by the CSIM module: SF_l = γ _lR_l^* + F_l SF_r = γ _rR_r^* + F_r where γ _l and γ _r are trainable weights and are initialized with zeros to ensure stable training. §.§ Loss function In this paper, we use the L1 loss and the FFT loss to train the whole network, which can be expressed as follows: L = L_1 + λL_fre where λ stands for a hyper-parameter, empirically set to 0.1. It is important to note that L_fre helps to restore the normal light image via preserving the frequency-domain image characteristics. The two loss terms can be expressed as: L_1 = E_l - E_l^G_1 + E_r - E_r^G_1 L_fre = φ (E_l) - φ (E_l^G)_1 + φ (E_r) - φ (E_r^G)_1 where ·_1 is the L1 loss, which is used to maintain the overall structure and normal-light and low-light details of the image. φ ( · ) stands for Fast Fourier Transform, which helps to recover the texture details and edge information of the image, making the reconstructed image closer to the image under normal-light conditions. 
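As an illustration of Eqs. (4)-(9) and of the training objective, the following sketch gives one possible PyTorch realization. It is a simplified reading written for this text, not the authors' implementation: attention is single-head over flattened H/4 x W/4 positions, GroupNorm stands in for the layer normalization LN(·), the channel/pixel attention blocks use a common squeeze-and-excitation style with an assumed reduction ratio r, and the FFT loss is taken as an L1 distance between complex spectra.

```python
# Sketch of CAIM (cross-view scaled dot-product attention), one FEB of the PCAB, and the loss.
import torch
import torch.nn as nn

class CAIM(nn.Module):
    def __init__(self, c):
        super().__init__()
        self.norm_l, self.norm_r = nn.GroupNorm(1, c), nn.GroupNorm(1, c)  # stand-in for LN(.)
        self.q, self.k = nn.Conv2d(c, c, 1), nn.Conv2d(c, c, 1)
        self.v_l, self.v_r = nn.Conv2d(c, c, 1), nn.Conv2d(c, c, 1)

    def forward(self, f_l, f_r):
        b, c, h, w = f_l.shape
        flat = lambda t: t.flatten(2).transpose(1, 2)                      # (B, HW, C)
        unflat = lambda t: t.transpose(1, 2).reshape(b, c, h, w)
        q_l, k_r = flat(self.q(self.norm_l(f_l))), flat(self.k(self.norm_r(f_r)))
        logits = q_l @ k_r.transpose(1, 2) / c ** 0.5                      # (B, HW, HW)
        f_rl = torch.softmax(logits, -1) @ flat(self.v_l(f_l))             # Eq. (5)
        f_lr = torch.softmax(logits.transpose(1, 2), -1) @ flat(self.v_r(f_r))  # Eq. (6)
        return unflat(f_rl) + f_l, unflat(f_lr) + f_r

class FEB(nn.Module):
    """One feature enhancing block: conv-GeLU-conv with residual, then CA and PA."""
    def __init__(self, c, r=8):
        super().__init__()
        self.conv1, self.conv2 = nn.Conv2d(c, c, 3, padding=1), nn.Conv2d(c, c, 3, padding=1)
        self.act = nn.GELU()
        self.ca = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Conv2d(c, c // r, 1), nn.ReLU(True),
                                nn.Conv2d(c // r, c, 1), nn.Sigmoid())
        self.pa = nn.Sequential(nn.Conv2d(c, c // r, 1), nn.ReLU(True),
                                nn.Conv2d(c // r, 1, 1), nn.Sigmoid())

    def forward(self, x):
        r = self.conv2(self.act(self.conv1(x)) + x)   # Eqs. (7)-(8)
        r = r * self.ca(r)                             # channel attention re-weights channels
        return r * self.pa(r)                          # pixel attention re-weights spatial locations

def sdi_loss(e_l, e_r, gt_l, gt_r, lam=0.1):
    """L = L1 + lam * L_fre, with L_fre an L1 distance between FFTs (Eqs. (11)-(13))."""
    l1 = (e_l - gt_l).abs().mean() + (e_r - gt_r).abs().mean()
    fre = ((torch.fft.fft2(e_l) - torch.fft.fft2(gt_l)).abs().mean()
           + (torch.fft.fft2(e_r) - torch.fft.fft2(gt_r)).abs().mean())
    return l1 + lam * fre
```

In the full PCAB, ten such feature enhancing blocks are stacked, and the output is recombined with F_l and F_r through the zero-initialized learnable weights γ_l, γ_r of Eq. (10).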
E_l and E_r represent the restored left and right normal-light images, and E_l^G and E_r^G represent the ground truth corresponding to E_l and E_r. § EXPERIMENTS §.§ Experiment settings Datasets. We adopt the existing public datasets specifically designed for low-light stereo image enhancement, i.e., Middlebury and Synthetic Holopix50k from DVENet<cit.>, to evaluate the effectiveness of our method. The Middlebury dataset selects 136 pairs of images as the training set with resolution of 512×512 and 36 pairs of images for testing. The Middlebury dataset is carefully built to offer high-quality, precisely calibrated stereo images, which are crucial for the low-light stereo image enhancement task. The Synthetic Holopix50k dataset is a much larger dataset. Its training set includes 1128 images with resolution of 512×512, and its test set includes 159 images. The Holopix50k dataset originates from a larger collection of real-world images captured by dual-camera smartphones, offering a broader but less controlled variety of low-light scenarios. This diversity is vital for training models to generalize across a wide range of real-world conditions. Evaluation Metrics. We use two mostly-adopted full reference quality metrics PSNR and SSIM to evaluate the model performance. Higher PSNR or SSIM values indicate better performance. Methods for Comparison. We compare our proposed SDI-Net with several single-image (monocular) low-light enhancement methods and stereo-image low-light enhancement methods. The single-image low-light enhancement methods for comparison include the non-learning based ones (NPE<cit.>, LIME<cit.>, RRM<cit.>), ZeroDCE<cit.>, and the learning-based ones (RetinexNet<cit.>, MBLLEN<cit.>, DSLR<cit.>, KIND<cit.>, DRBN<cit.>, SNR-Aware<cit.>, LLFormer<cit.> and MIRNet<cit.>). The stereo-image low-light enhancement methods include DVENet<cit.> and the newly proposed DCI-Net<cit.>. Considering the limited number of the existing stereo-image low-light enhancement methods, we additionally introduced iPASSRNet<cit.> for comparison. Originally designed for stereo image super-resolution, iPASSRNet can be easily adapted as a stereo-image low-light enhancement model. With the two datasets, we re-trained iPASSRNet and evaluated its performance in our application. Implementation Details. The hardware for the experiments is an Nvidia RTX 2080 Ti card with 12G memory. The batch size is set to 2, and the epoch number is set as 700 for training. We use the Adam optimizer to optimize with β _1=0.5 and β _2=0.999. The initial learning rate is set to 0.0001 and is reduced by half every 100 epochs. §.§ Quantitative Comparison The quantitative results of SDI-Net and the methods for comparison on Middlebury and Synthetic Holopix50k are reported in Table <ref>. The non-learning traditional methods, such as NPE, LIME, JieP, and RRM, exhibit relatively poor performance. The reason is that it is difficult for the model-driven methods to fit the complex mapping functions between low-light and normal-light images. As for the learning-based monocular low-light enhancement methods, the PSNR and SSIM values are generally much higher, especially for the recently proposed methods such as DRBN and MIRNet. It is noted that the quantitative performance of ZeroDCE is relatively low. The reason is that it focuses on learning an optimal mapping curve function by only using dark images. The absence of full supervision information makes its performance not so well in terms of PSNR and SSIM. 
As for the stereo-image low-light enhancement, we can observe an obvious advantage over the monocular family, showing the effectiveness of using stereo images in general. Among these methods, our SDI-Net performs the best on both datasets in terms of PSNR and SSIM, showing the effectiveness and superiority of our method. §.§ Visual Comparison Visual results on Middlebury. Fig.<ref> and Fig.<ref> provide the visual comparisons for all the methods. In Fig.<ref>, we can see that some of the methods fail to improve the visibility of the input image, while other methods introduce color distortions or artifacts into their results. By comparing the zoomed-in patch of the enhanced results and the ground-truth image, we can see that our method performs better than others in terms of the visual quality, which well improves the visibility and recovers the detail textures simultaneously. We present another example in Fig.<ref>, in which we use error maps to visualize the discrepancies between the enhanced results and the ground truth <cit.>. From the error maps, we can clearly see that stereo-based methods performs better than the monocular family in general. Furthermore, our method is better than the other three stereo-based methods in handling fine details such as the brim of the straw hat region. The above observations is consistent with the trend of the above quantitative results. Visual results on Synthetic Holopix50k. For the Synthetic Holopix50k dataset, we also utilize error maps to evaluate the enhanced results, as shown in Fig.<ref>. On one hand, the single-image low-light enhancement methods fail to effectively reconstruct normal-light images in terms of color and illumination. On the other hand, in comparison to other stereo vision methods, our approach outperforms them in reconstructing fine details, particularly in the leaves region. In addition, our method exhibits more continuous color textures and smoother image appearance in general. §.§ Ablation study In the ablation study, we demonstrate the effectiveness of the CAIM module, PCAB module, and the employed loss functions, in which Middlebury is the evaluation dataset. Model structure. In Table <ref>, V0 represents the scenario where we perform a simple interaction on the extracted features in the middle of the encoders and decoders. In another word, the CSIM is replaced with a heuristic two-round interaction process. In the first round, F_l and F_r are directly concatenated and then down-sampled. In the second round, the down-sampled feature is respectively concatenated with F_l and F_r and down-sampled again. The refined feature representations are then sent into the decoding stage. V1 means that we add PCAB after the V0 version. V2 indicates that we only utilize CAIM for interaction but do not use the PCAB module. By comparing the performance of our complete model with V0 to V2, we observe that both CAIM and PCAB play indispensable roles in our low-light enhancement task. For example, the usefulness of CAIM are both empirically validated in the comparison between Ours and v1, and the comparison between V2 and V0. Similarly, the comparison between Ours and V2, and the comparison between V1 and V0 also empirically demonstrate the effectiveness of PCAB. Loss function. In Table<ref>, V3 means that the loss term L_fre is not used. Compared to the completed model, we can see that there exists a clear performance decline for V3, which demonstrates the importance of using the loss term L_fre. 
The rationale is clear: the frequency-based image representation plays a complementary role to the spatial pixel arrays, so the combination of the two loss terms effectively promotes the visual quality of the enhanced results. § CONCLUSION For the low-light image enhancement task, the research community has begun to explore the use of more advanced imaging systems. For example, it is beneficial for enhancement models to simultaneously use binocular images as inputs. However, the limitation of insufficient interaction between the two views still exists in current research on low-light stereo image enhancement. In this paper, we propose SDI-Net to sufficiently model the dual-view interaction for low-light stereo image enhancement. By modeling the attention mechanism at the view level, the channel level and the pixel level, SDI-Net facilitates comprehensive information exchange between the two views, and therefore better restores high-quality normal-light images from low-light inputs. Experimental results on two public datasets demonstrate the effectiveness and superiority of SDI-Net in terms of both quantitative and visual comparisons. In the future, we plan to extend the interaction between the two views into the frequency domain and the semantic domain.
http://arxiv.org/abs/2408.11522v2
20240821105526
Upper Bound on Locally Extractable Energy from Entangled Pure State under Feedback Control
[ "Kanji Itoh", "Yusuke Masaki", "Hiroaki Matsueda" ]
quant-ph
[ "quant-ph" ]
^1Department of Applied Physics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan ^2Center for Science and Innovation in Spintronics, Tohoku University, Sendai 980-8577, Japan § ABSTRACT We introduce an effective thermodynamics for multipartite entangled pure states and derive an upper bound on extractable energy with feedback control from a subsystem under a local Hamiltonian. The inequality that gives the upper bound corresponds to the second law of information thermodynamics in our effective thermodynamics. In addition, we derive a more general bound that is determined only by an initial state and the local Hamiltonian. This bound gives an explicit relationship between the extractable energy and the entanglement structure of the initial state. We also investigate the tightness of the upper bounds and show that the bounds can be achieved in a simple example. Upper Bound on Locally Extractable Energy from Entangled Pure State under Feedback Control Kanji Itoh^1, Yusuke Masaki^1, and Hiroaki Matsueda^1,2hiroaki.matsueda.c8@tohoku.ac.jp August 26, 2024 =========================================================================================== Introduction.—In light of the rapid progress in quantum information technologies, there is an urgent need to elucidate the energy efficiency of quantum information processing <cit.>. This challenge is important not only for applications in quantum technologies, but also for fundamental physics, as it requires deep understanding of the relationship between energy and quantum information. The relationship between energy and information is one of the key issues in modern thermodynamics. In 2008, Sagawa and Ueda derived the second law of information thermodynamics <cit.>. This law extends the applicability of the second law of thermodynamics to processes that involve measurement and feedback control. The extended law shows that feedback control can improve work extraction by exploiting information gain from the measurement. In the case of an isothermal process, the second law of information thermodynamics is expressed as W_ext≤ -Δ F +1/β I_QC, where W_ext is the extracted work, Δ F is the change in the Helmholtz free energy of the system, β is the inverse temperature and I_QC is the QC-mutual information (or Groenewold-Ozawa information) defined as the information gain by measuring the system <cit.>. Although the inequality (<ref>) was derived for quantum systems, the setup does not explicitly include entanglement, which is a necessary resource for quantum information processing. Around the last decade, as extensions of Eq. (<ref>), several information thermodynamic inequalities have been derived in entangled systems <cit.>. These inequalities give quantitative relations between energy and quantum information in entangled systems. In the setup of the above studies, work is extracted from a system at the temperature of the heat bath. On the other hand, local energy extraction from multipartite quantum systems without thermal fluctuations has also been studied in various setups <cit.>. For such setups, general and quantitative energy-information relations, such as the second law of information thermodynamics, have not been established. Establishing such a relationship leads to a new theory of energy efficiency in various entanglement-based quantum protocols at low temperatures. In this Letter, we show that locally extractable energy from a multipartite quantum system is bounded with quantum information derived from entanglement. 
In our setup, the whole system is in an entangled pure state as an initial state, and energy is extracted by measurement and feedback control performed on some parts of the system. In entangled quantum systems, even if a whole system is not thermally fluctuating, i.e., the whole system is in a pure state, a subsystem can be in a mixed state. One can consider that the mixed state is realized under an effective temperature characterized by the entanglement. Building on this idea, we divide the whole system into several local systems, and introduce an effective thermodynamics. The main result of this Letter is the derivation of two upper bounds on the extracted energy. One bound can be seen as the second law of information thermodynamics in our effective thermodynamics. The other bound is looser, but provides an explicit relation between energy and entanglement structure of the initial state of the energy extraction process. We also examine the tightness of the bounds with a simple four-qubit example. Setup.—Figure 1 schematically shows our setup. We divide the whole system into a system S, an ancilla A, and an environment E. The energy extraction protocol consists of measurement and feedback control. First, a projective measurement P_A(μ) is performed on the ancilla A, and the measurement result μ is obtained. Next, a feedback unitary operation U_S(μ) is performed on the system S, and the amount of energy E_ext is extracted from S. In the protocol, no operation is performed on the environment E. However, the environment E is necessary to define temperature of our effective thermodynamics. At the beginning of the energy extraction process, the state of the whole system ρ_SAE^i is an entangled pure state, i.e., ρ_SAE^i≡|ψ⟩_SAE⟨ψ|_SAE, where |ψ⟩_SAE is an entangled state vector. In this case, the system S is in a mixed state due to the entanglement. The von Neumann entropy of this mixed state represents information derived only from entanglement and is called entanglement entropy. The definition of the entanglement entropy of the system S is given by S(ρ_S^i)≡ -trρ_S^ilogρ_S^i, where ρ_S^i≡tr_AEρ_SAE^i is the reduced density matrix of |ψ⟩_SAE on the system S. Here and hereafter, S(ρ) denotes the von Neumann entropy of a density matrix ρ. After the measurement with a result μ, the state of the whole system is written as ρ_SAE^m(μ)=1/p_μP_A(μ)ρ_SAE^iP_A(μ), where p_μ≡⟨ψ|P_A(μ)|ψ⟩_SAE is the probability of obtaining a result μ. The reduced density matrix of the system S after obtaining the result μ is expressed by ρ_S^m(μ)=tr_AEρ_SAE^m(μ). In our setup, the information gain of the system S due to the measurement is calculated as the average reduction of the entanglement entropy: I_QC≡ S(ρ_S^i)-∑_μ p_μ S(ρ_S^m(μ)). Next, the feedback unitary operation U_S(μ) is performed on the system S. The final state of the whole system is represented by ρ_SAE^f=∑_μ U_S(μ) P_A(μ) ρ_SAE^i P_A(μ) U_S^†(μ). Thus, the final state of the system S is given by ρ_S^f=tr_AEρ_SAE^f=∑_μ p_μ U_S(μ)ρ_S^m(μ)U_S^†(μ). The extracted energy is given by the energy reduction of the system S during the protocol: E_ext≡ E_S^i-E_S^f, where E_S^i≡trρ_S^iH_S and E_S^f≡trρ_S^fH_S are the energy expectation values of the initial and final states of S, respectively, and H_S is the local Hamiltonian of S at the beginning and the end of the protocol. Note that the maximally extracted energy by feedback unitary operations under a fixed projective measurement is called daemonic ergotropy <cit.>. 
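To make the protocol concrete, the following sketch (an illustration written for this text, not taken from the paper) computes E_ext for an arbitrary pure state |ψ⟩_SAE, a projective measurement on A, and a fixed choice of feedback unitaries; the dimensions, the local Hamiltonian and the operators are placeholder choices.

```python
# Sketch: energy extracted by a measurement on A and feedback on S, for a pure state on S⊗A⊗E.
# Dimensions, Hamiltonian and feedback unitaries below are arbitrary illustrative choices.
import numpy as np

dS, dA, dE = 2, 2, 4

def reduced_S(rho_SAE):
    """Partial trace over A and E, returning the dS x dS state of S (ordering S⊗A⊗E)."""
    return np.trace(rho_SAE.reshape(dS, dA * dE, dS, dA * dE), axis1=1, axis2=3)

rng = np.random.default_rng(0)
psi = rng.normal(size=dS * dA * dE) + 1j * rng.normal(size=dS * dA * dE)
psi /= np.linalg.norm(psi)                      # |ψ⟩_SAE, an arbitrary pure state
rho_i = np.outer(psi, psi.conj())

H_S = np.diag([1.0, -1.0])                      # local Hamiltonian of S (here σ^z)
E_i = np.trace(reduced_S(rho_i) @ H_S).real

# Projective measurement on A in the computational basis: P_A(μ) = 1_S ⊗ |μ⟩⟨μ|_A ⊗ 1_E.
projs = [np.kron(np.kron(np.eye(dS), np.diag(np.eye(dA)[mu])), np.eye(dE)) for mu in range(dA)]

# Feedback: do nothing for μ = 0, flip the spin of S for μ = 1 (an arbitrary choice of U_S(μ)).
U_S = [np.eye(dS, dtype=complex), np.array([[0, 1], [1, 0]], dtype=complex)]

E_f = 0.0
for mu, P in enumerate(projs):
    p_mu = (psi.conj() @ P @ psi).real
    if p_mu < 1e-12:
        continue
    rho_m = reduced_S(P @ rho_i @ P) / p_mu     # ρ_S^m(μ)
    rho_after = U_S[mu] @ rho_m @ U_S[mu].conj().T
    E_f += p_mu * np.trace(rho_after @ H_S).real

print("E_ext =", E_i - E_f)
```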
In this Letter, we will derive upper bounds on the extracted energy. As will be mentioned later, when our bound can be achieved, the bound is equal to the daemonic ergotropy; in this case, our bound gives information thermodynamic expression of the daemonic ergotropy for pure states. Effective Thermodynamics and Upper Bounds.—Let us introduce an effective thermodynamics for entangled pure states. In our effective thermodynamics, the energy extraction protocol is considered as a nonequilibrium process of the system S at an effective temperature, which is defined by the entanglement between the system S and the environment E. In the following, we derive a second law-like inequality of the effective thermodynamics. First, we define the effective thermal equilibrium state. As mentioned above, the system S is in a mixed state due to the entanglement. Thus, we consider the system S to be at an effective temperature 1/β_eff>0, for which we define the thermal equilibrium state as σ_S≡1/Ze^-β_effH_S, where Z≡tr e^-β_effH_S is the partition function. The thermal equilibrium state σ_S depends only on β_eff, which is defined by the entanglement between the system S and the environment E. We postpone the explicit definition of β_eff until Eq. (<ref>). Next, we rewrite the extracted energy E_ext with quantities based on the effective thermodynamics. In the effective thermodynamics, the initial state ρ_S^i and the final state ρ_S^f of the protocol are considered as nonequilibrium states at the effective temperature β_eff. Thus, we define nonequilibrium free energy <cit.> of the states ρ_S^α for α=i,f as ℱ(ρ_S^α;H_S) ≡ E_S^α-1/β_effS(ρ_S^α) =F(σ_S)+1/β_effD(ρ_S^α||σ_S), where F(σ_S)≡ -1/β_efflog Z is the Helmholtz free energy of σ_S and D(ρ_S^α||σ_S) is the Kullback-Leibler (KL) divergence (or quantum relative entropy). Using the nonequilibrium free energy, E_ext defined by Eq. (<ref>) can be rewritten as E_ext=-Δℱ_S - 1/β_effΔ S_S, where Δℱ_S≡ℱ(ρ_S^f;H_S)-ℱ(ρ_S^i;H_S) is the change in the nonequilibrium free energy and Δ S_S≡ S(ρ_S^f)-S(ρ_S^i) is the change in the von Neumann entropy. Note that Δ S_S is always zero when U_S(μ) is not feedback control, i.e., U_S(μ) is independent of μ. Let us now move on to the derivation of upper bounds on E_ext. First, from the positivity of Kullback-Leibler divergence D(ρ_S^f||σ_S), we obtain -Δℱ_S≤ℱ(ρ_S^i;H_S)-F(σ_S). The equality holds if and only if ρ_S^f=σ_S. Next, we focus on the entropy term in Eq. (<ref>). Using the convexity of entropy, we have S(ρ_S^f)≥∑_μ p_μ S(ρ_S^m(μ)) ≥ E_F^S-E(ρ_SE^i), where ρ_SE^i≡tr_A ρ_SAE^i and E_F^S-E(ρ_SE^i) is entanglement of formation <cit.>, representing mixed state entanglement between the system S and the environment E. In our setup, E_F^S-E(ρ_SE^i) can be expressed as E_F^S-E(ρ_SE^i) ≡min_{P_A(μ)}∑_μ p_μ S(ρ_S^m(μ)), where the minimization is performed over all sets of possible orthogonal projection operators. This minimization is equivalent to optimizing the representation of the mixed state ρ_SE^i by mixture of pure states: ∑_μ p_μ|ϕ_μ⟩_SE⟨ϕ_μ|_SE = ρ_SE^i, where we put |ψ⟩_SAE=∑_μ√(p_μ)|a_μ⟩_A ⊗|ϕ_μ⟩_SE and P_A(μ)=|a_μ⟩_A ⟨a_μ|_A. Note that {|ϕ_μ⟩} need not be an orthogonal basis set. From Eq. (<ref>), we obtain -Δ S_S ≤ I_QC≤ℰ_SA, where we define the new quantity ℰ_SA which represents asymmetric entanglement between S and A. The quantity ℰ_SA is defined as entanglement between S and AE minus entanglement between S and E: ℰ_SA≡ S(ρ_S^i)-E_F^S-E(ρ_SE^i). Plugging inequalities (<ref>) and (<ref>) in Eq. 
(<ref>) yields upper bounds on E_ext: E_ext ≤ℱ(ρ_S^i;H_S)-F(σ_S)+1/β_effI_QC ≤ℱ(ρ_S^i;H_S)-F(σ_S)+1/β_effℰ_SA. The first bound in Eq. (<ref>) depends on the choice of the set of projection operators {P_A(μ)} as well as I_QC. When this bound can be achieved, the bound is the daemonic ergotropy itself. The second bound in Eq. (<ref>) is determined only by the initial state |ψ⟩_SAE and the local Hamiltonian H_S. A necessary condition for the both of the equalities in Eq. (<ref>) to hold is given by S(σ_S)=E_F^S-E(ρ_SE^i). To make the bounds (<ref>) achievable, we define the effective temperature 1/β_eff so that Eq. (<ref>) holds. By this definition, 1/β_eff is always uniquely determined for H_S≠0. This can be confirmed as follows. The left side of Eq. (<ref>) decreases monotonically with respect to β_eff from S(σ_S)=log(σ_S) for β_eff→0 to S(σ_S)→0 for β_eff→∞. On the other hand, E_F^S-E(ρ_SE^i) satisfies 0≤ E_F^S-E(ρ_SE^i) ≤min{log(ρ_S^i), log(ρ_E^i)}≤log(σ_S), where ρ_E^i≡tr_SAρ_SAE^i. Therefore, there exists only one value of β_eff satisfying Eq. (<ref>). Under the above definition of β_eff, E_ext reaches the second bound in Eq. (<ref>) only if the eigenvalues of ρ_S^m(μ) that maximize I_QC are independent of the measurement results μ. The proof of this is given in Supplemental Material. We will show that E_ext can actually reach the bound in a simple model when the above condition is satisfied. Let us discuss the physical meaning of the bounds in Eq. (<ref>). Using Eq. (<ref>), the bounds can be rewritten more simply as E_ext ≤D(ρ_S^i||σ_S)+I_QC/β_eff≤D(ρ_S^i||σ_S)+ℰ_SA/β_eff. The KL divergence D(ρ_S^i||σ_S), which represents how different the initial state of the system S is from the equilibrium state σ_S, is a resource of the energy extraction. Besides, the information obtained from the measurement can improve the extracted energy. This situation is similar to the case in the previous studies of nonequilibrium information thermodynamic process <cit.>. Thus, the first bound in Eq. (<ref>) (or Eq. (<ref>)) can be considered as the second law of information thermodynamics in our effective thermodynamics. Obviously from Eq. (<ref>), the whole entanglement of the system S in the initial state can be decomposed into the S-A entanglement ℰ_SA and the S-E entanglement E_F^S-E(ρ_SE^i). The former entanglement ℰ_SA can be considered as the informative resource for the energy extraction, because ℰ_SA is the upper bound on the information gain I_QC and I_QC can contribute positively to the energy extraction. On the other hand, the latter entanglement E_F^S-E(ρ_SE^i) is the origin of the temperature in our effective thermodynamics. Therefore, the second bound in Eq. (<ref>) gives the explicit relationship between the extracted energy E_ext and the entanglement structure of the initial state |ψ⟩_SAE. Example.—Let us examine the tightness of the bound (<ref>) with a simple example of a four-qubit system consisting of qubits 1, 2, 3, and 4. Energy is extracted from qubit 1 by a projective measurement performed on qubit 2 and a feedback unitary operation performed on qubit 1. Thus, qubits 1 and 2 correspond to the system S and the ancilla A, respectively, and qubits 3 and 4 correspond to the environment E. The local Hamiltonian of qubit 1 is defined by H_S≡σ_1^z, where σ_1^z is the z component of the Pauli matrices for qubit 1. 
We define the initial state |ψ⟩ with a parameter η as follows: |ψ⟩ ≡√((1+η)/14)(2|↑↑↑↑⟩+|↑↑↑↓⟩+|↑↓↑↑⟩+|↑↓↑↓⟩) + √((1-η)/14)(|↓↑↓↑⟩+|↓↑↓↓⟩-|↓↓↓↑⟩-2|↓↓↓↓⟩), where we use the abbreviated notation such as |↑↑↑↑⟩=|↑⟩_1 ⊗|↑⟩_2 ⊗|↑⟩_3 ⊗|↑⟩_4. The spin bases of qubit i (i=1,⋯,4), |↑⟩_i and |↓⟩_i, are the eigenstates of σ_i^z as σ_i^z |↑⟩_i = |↑⟩_i and σ_i^z |↓⟩_i = -|↓⟩_i. Since the entanglement of qubit 1 vanishes for η=± 1, we consider -1<η<1. When η=0, the eigenvalues of ρ_S^m(μ) are independent of μ, i.e., the necessary condition for E_ext to reach the upper bound is satisfied. At this value of η, the reduced density matrix of Eq. (<ref>) for qubit 1 is the highly symmetric state, which is invariant to any unitary operation: ρ_S^i=(|↑⟩_1⟨↑|_1+|↓⟩_1⟨↓|_1)/2. This symmetry is broken for |η|>0. Figure <ref> shows the tightness of the second bound in Eq. (<ref>) in the above model. Both the upper bound and the maximally extracted energy increase monotonically with respect to η. This is because the energy of the initial state of qubit 1 is higher for larger η. This increase of the initial state energy corresponds to the increase of the nonequilibrium free energy of the initial state in terms of the effective thermodynamics. In contrast to the η-dependence of the free energy term D(ρ_S^i||σ_S)/β_eff, the entanglement term ℰ_SA/β_eff decreases monotonically with |η|. As shown in the inset, the difference between the upper bound and the maximally extracted energy is always less than 0.04 even though the magnitude of the bound and the maximally extracted energy increase up to 2. Therefore, the bound is tight except for η≈ -1. In addition, the maximally extracted energy reaches the bound at η=0. Thus, our bound is achievable for a highly symmetric initial state in this entangled four-qubit model. The details of the calculation are given in Supplemental Material. Conclusion.—We introduced effective thermodynamics for entangled multipartite pure states and derived two upper bounds on locally extractable energy with feedback control. One bound corresponds to the second law of information thermodynamics in our effective thermodynamics, and the other bound shows the explicit relationship between energy and the entanglement structure of the initial state. We also investigated the tightness of the bounds with a simple model, and showed that the bounds are achievable for highly symmetric initial states. Our results go beyond the previous framework of information thermodynamics, the validity range of which has recently been verified <cit.>. The results open the door to entanglement-based information thermodynamics without thermal fluctuations and lead to a fundamental theory for the energy efficiency of quantum information processing at low temperatures. This work was supported by JST SPRING, Grant Number JPMJSP2114, JSPS KAKENHI Grant Numbers JP24K06878, JP24K00563, JP24K02948, JP24K17000, JP23K22492, JP21H04446, JP21K03380, and GP-Spin and CSIS in Tohoku University. Supplemental Material. In this Supplemental Material, we present a proof of the equality condition of the upper bound. In addition, we provide details on the calculation of the extractable energy and the upper bound in the simple example presented in the main text. § PROOF OF THE EQUALITY CONDITION As mentioned before Eq. (13) in the main text, when the effective temperature 1/β_eff is defined so that Eq. (12) holds, a necessary condition for E_ext to reach the second bound in Eq. (11) (or Eq.
(13)) in the main text is given by the following theorem. Theorem. Let ρ_S^m(μ) be post measurement states that maximizes I_QC. Then max E_ext=1/β_eff[D(ρ_S^i||σ_S)+ℰ_SA] holds only if the eigenvalues of ρ_S^m(μ) are independent of μ. Proof. Suppose max E_ext=1/β_eff[D(ρ_S^i||σ_S)+ℰ_SA]. Then the equalities in Eq. (7) in the main text hold, i.e., there exists U_S(μ) which satisfies the equality of S(ρ_S^f) ≥∑_μ p_μ S(ρ_S^m(μ)) for ρ_S^m(μ) that maximizes I_QC. This inequality is derived by using positivity of KL divergence: S(ρ_S^f) = -trρ_S^flogρ_S^f = -∑_μ p_μtr U_S(μ)ρ_S^m(μ)U_S^†(μ) logρ_S^f ≥ -∑_μ p_μtr U_S(μ)ρ_S^m(μ)U_S^†(μ) log U_S(μ)ρ_S^m(μ)U_S^†(μ) = ∑_μ p_μ S(ρ_S^m(μ)), where ρ_S^f = ∑_μp_μtr U_S(μ)ρ_S^m(μ)U_S^†(μ) is used to obtain the second line. In the third line, the equality holds if and only if U_S(μ) ρ_S^m(μ)U_S^†(μ)=ρ_S^f for all μ. Since U_S(μ) does not change eigenvalues of ρ_S^m(μ), the equality condition implies that the eigenvalues of ρ_S^m(μ) are independent of μ. □ Note that the converse of the above theorem is not necessarily true. If the eigenvalues of ρ_S^m(μ) are independent of μ, there exist P_A(μ) and U_S(μ) which satisfy S(ρ_S^f)=∑_μ p_μ S(ρ_S^m(μ))=S(ρ_S^m(μ))=E_F^S-E(ρ_SE^i)=S(σ_S). However, since S(ρ_S^m(μ))=S(σ_S) does not imply that the eigenvalues of ρ_S^m(μ) and σ_S are equal, Eq. (S.<ref>) does not imply ρ_S^f=σ_S, which is the necessary condition for E_ext to reach the upper bound. It is an open question whether there exist initial states |ψ⟩_SAE that cannot reach the upper bound and whose post-measurement states ρ_S^m(μ)'s have eigenvalues that are independent of the measurement results. § EXAMPLE In this section, we investigate in what initial states the above equality condition of the bound is satisfied in a four qubit example. We also investigate the tightness of the bound and provide details of the calculation. As a simple example, we consider a multipartite system consists of qubits 1, 2, 3, and 4. Measurement is performed on qubit 2, and feedback control is performed on qubit 1. Thus, qubit 1 is the system S, qubit 2 is the ancilla A, and qubits 3 and 4 are the environment E. For simplicity, we define local Hamiltonian of qubit 1 as H_S≡σ_1^z =[ 1 0; 0 -1; ], and the initial state as |ψ⟩_SAE≡ a|↑↑↑↑⟩_1234+b|↑↑↑↓⟩_1234+c|↑↓↑↑⟩_1234+d|↑↓↑↓⟩_1234 +e|↓↑↓↑⟩_1234+f|↓↑↓↓⟩_1234+g|↓↓↓↑⟩_1234+h|↓↓↓↓⟩_1234. The initial state of qubit 1 is written as ρ_S^i =tr_AE|ψ⟩_SAE⟨ψ|_SAE =[ a^2+b^2+c^2+d^2 0; 0 e^2+f^2+g^2+h^2 ]. We chose the initial state so that there is no need of diagonalization in the computation of max E_ext and the upper bounds on E_ext. In this model, projective measurement P_A(μ) and feedback unitary operation U_S(μ) is written as P_A(μ) = 1/2[1+(-1)^μn⃗·σ⃗] =1/2[1+(-1)^μ (n_xσ_x+n_yσ_y+n_zσ_z), U_S(μ) =cosθ_μ+i u⃗_μ·σ⃗sinθ_μ =cosθ_μ+i (u_xμσ_x+u_yμσ_y+u_zμσ_z) sinθ_μ, where μ=0,1 is the measurement results, and n⃗≡ (n_x, n_y, n_z) and u⃗_μ≡ (u_xμ, u_yμ, u_zμ) are unit vectors, i.e., n_x^2+n_y^2+n_z^2=1 and u_xμ^2+u_yμ^2+u_zμ^2=1. Next, we examine the equality condition stated in the theorem. The post measurement state ρ_S^m(μ) is computed as ρ_S^m(μ)= 1/p_μtr_AEP_A(μ)|ψ⟩_SAE⟨ψ|_SAEP_A(μ) = 1/2p_μ[a^2+b^2+c^2+d^2+(-1)^μ{2(ac+bd)n_x+(a^2+b^2-c^2-d^2)n_z}]|↑⟩_1 ⟨↑|_1 +1/2p_μ[e^2+f^2+g^2+h^2+(-1)^μ{2(eg+fh)n_x+(e^2+f^2-g^2-h^2)n_z}]|↓⟩_1 ⟨↓|_1, where p_μ is the probability of obtaining a result μ: p_μ =⟨ψ|P_A(μ)|ψ⟩ =1/2+(-1)^μ[(ac+bd+eg+fh)n_x+1/2(a^2+b^2-c^2-d^2+e^2+f^2-g^2-h^2)n_z]. 
Thus, if following equations hold, the eigenvalues of ρ_S^m(μ)'s are independent of μ for any measurement. a^2+b^2+c^2+d^2 =e^2+f^2+g^2+h^2=1/2, ac+bd =-(eg+fh), a^2+b^2-c^2-d^2 =-(e^2+f^2-g^2-h^2). If Eq. (S.<ref>) holds, the initial state of qubit 1 is invariant to any unitary operation: ρ_S^i=(|↑⟩_1⟨↑|_1+|↓⟩_1⟨↓|_1)/2. This suggests that the upper bound is tight for highly symmetric initial states. In the following, we provide detailed calculations of the upper bound and the extractable energy for a specific choice of the coefficients with parameter η: a/2=b=c=d=√(1+η)/√(14) and e=f=-g=-h/2=√(1-η)/√(14). It is also shown analytically that the upper bound is achievable when Eq. (S.<ref>) holds. The initial state is written down as |ψ⟩_SAE = √(1+η/14)(2|↑↑↑↑⟩_1234+|↑↑↑↓⟩_1234+|↑↓↑↑⟩_1234+|↑↓↑↓⟩_1234) +√(1-η/14)(|↓↑↓↑⟩_1234+|↓↑↓↓⟩_1234-|↓↓↓↑⟩_1234-2|↓↓↓↓⟩_1234), where -1<η<1. When η=0, the conditions of Eq. (S.<ref>) hold. First, we calculate maximally extracted energy max E_ext. At the beginning of the energy extraction process, the reduced density matrix of qubit 1 is ρ_S^i = tr_AE|ψ⟩_SAE⟨ψ|_SAE =1/2[ 1+η 0; 0 1-η ], and thus the energy expectation value in the initial state of qubit 1 is calculated as E_S^i≡trρ_S^iH_S=η. The post measurement state of qubit 1 with a result μ is ρ_S^m(μ) =1/2p_μ[ 1+η/14[7+(-1)^μ 3(2n_x+n_z)] 0; 0 1-η/14[7+(-1)^μ+1 3(2n_x+n_z)] ] [ λ_↑(μ) 0; 0 λ_↓(μ) ], p_μ =1/14[7+(-1)^μ 3η(2n_x+n_z)]. Thus, the maximally extracted energy can be calculated as maxE_ext =E_S^i-min_{P_A(μ)}, {U_S(μ)}E_S^f =η+max_{P_A(μ)}, {U_S(μ)}{-∑_μ p_μtrU_S(μ) ρ_S^m(μ) U_S^†(μ) H_S} =η+max_{P_A(μ)}∑_μ p_μ |λ_↑(μ)-λ_↓(μ)| =η+1/14max_{P_A(μ)}{|7η+3(2n_x+n_z)|+|7η-3(2n_x+n_z)|} = 0 for -1 < η≤ -3√(5)/7, η+3√(5)/7 for -3√(5)/7≤η≤3√(5)/7, 2η for 3√(5)/7≤η <1, where we used -√(5)≤ 2n_x+n_z ≤√(5). The extracted energy is maximized when U_S(0)=U_S(1)=I_S for -1< η≤ - 3√(5)/7, 2n_x+n_z=- √(5), U_S(0)=I_S, U_S(1)=σ_1^x, or 2n_x+n_z= √(5), U_S(0)=σ_1^x, U_S(1)=I_S for - 3√(5)/7≤η≤3√(5)/7, U_S(0)=U_S(1)=σ_1^x for 3√(5)/7≤η < 1. Note that when -1<η<-3√(5)/7 and 3√(5)/7<η<1, the operations on qubit 1 that maximize E_ext is not feedback control. In this region, ρ_S^i is almost equal to |↓⟩_1 ⟨↓|_1 or |↑⟩_1 ⟨↑|_1, and thus the optimal operations for extracting energy is to do nothing or the spin flip. Next, we calculate the upper bound presented in Eq. (13) in the main text. The entanglement entropy of the qubit 1 is S(ρ_S^i) =trρ_S^ilogρ_S^i =-1+η/2log1+η/2-1-η/2log1-η/2. After the measurement P_A(μ), the entanglement entropy of qubit 1 can decrease on average to ∑_μ p_μ S(ρ_S^m(μ))= -∑_μ p_μtrρ_S^m(μ) logρ_S^m(μ) = -∑_μ p_μ [λ_↑(μ) logλ_↑(μ) + λ_↓(μ) logλ_↓(μ)] = -1+η/2log1+η/2 -1-η/2log1-η/2 -1/14(7+3(2n_x+n_z)) log(7+3(2n_x+n_z)) -1/14(7-3(2n_x+n_z)) log(7-3(2n_x+n_z)) + 1/14(7+3η(2n_x+n_z)) log(7+3η(2n_x+n_z)) +1/14(7-3η(2n_x+n_z)) log(7-3η(2n_x+n_z)) ≥ -1+η/2log1+η/2 -1-η/2log1-η/2 -1/14(7+3√(5)) log(7+3√(5)) -1/14(7-3√(5)) log(7-3√(5)) + 1/14(7+3√(5)η) log(7+3√(5)η) +1/14(7-3√(5)η) log(7-3 √(5)η) = E_F^S-E(ρ_SE^i), where we used -√(5)≤ 2n_x+n_z ≤√(5). From Eq. (S.<ref>), QC-mutual information I_QC≡ S(ρ_S^i)-∑_μ p_μ S(ρ_A^m(μ)) and the entanglement ℰ_SA≡ S(ρ_S^i)-E_F^S-E(ρ_SE^i) can be computed as I_QC= 1/14(7+3(2n_x+n_z)) log(7+3(2n_x+n_z)) 1/14(7-3(2n_x+n_z)) log(7-3(2n_x+n_z)) - 1/14(7+3η(2n_x+n_z)) log(7+3η(2n_x+n_z)) -1/14(7-3η(2n_x+n_z)) log(7-3η(2n_x+n_z)), ℰ_SA= 1/14(7+3√(5)) log(7+3√(5)) +1/14(7-3√(5)) log(7-3√(5)) - 1/14(7+3√(5)η) log(7+3√(5)η) -1/14(7-3√(5)η) log(7-3 √(5)η). 
In our effective thermodynamics, the thermal equilibrium state σ_S is defined by Eq. (3) in the main text. Thus, the von Neumann entropy of σ_S is S(σ_S) =-trσ_S logσ_S =log(1+e^2β_eff)/1+e^2β_eff+log(1+e^-2β_eff)/1+e^-2β_eff, and the KL divergence D(ρ_S^i||σ_S) is calculated as D(ρ_S^i||σ_S) =trρ_S^ilogρ_S^i -trρ_S^ilogσ_S =-log 2 + 1+η/2log (1+η)(1+e^2β_eff)+ 1-η/2log (1-η)(1+e^-2β_eff). The effective inverse temperature β_eff can be determined numerically by Eq. (12) in the main text. When η=0, β_eff can be calculated analytically as β_eff=1/2log7+3√(5)/7-3√(5). At this value of η, D(ρ_S^i||σ_S)=-log 2 +log 7, ℰ_SA=log 2 -log 7 + 3√(5)/14log7+3√(5)/7-3√(5), and thus 1/β_eff[D(ρ_S^i||σ_S)+ℰ_SA]=maxE_ext=3√(5)/7. Therefore, in this model, E_ext reaches the upper bound if the eigenvalues of ρ_S^m(μ) are independent of μ.
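As a final sanity check (a minimal numerical sketch, not part of the analytical argument), the η=0 values quoted above can be recomputed directly: with β_eff=(1/2)log[(7+3√5)/(7-3√5)], the general formula for D(ρ_S^i||σ_S) reduces to -log2+log7, and (D+ℰ_SA)/β_eff reproduces max E_ext=3√5/7.

import math

beta = 0.5 * math.log((7 + 3 * math.sqrt(5)) / (7 - 3 * math.sqrt(5)))  # analytic beta_eff at eta = 0
eta = 0.0
# relative entropy D(rho_S^i || sigma_S) evaluated from the general expression above
D = (-math.log(2)
     + (1 + eta) / 2 * math.log((1 + eta) * (1 + math.exp(2 * beta)))
     + (1 - eta) / 2 * math.log((1 - eta) * (1 + math.exp(-2 * beta))))
E_SA = (math.log(2) - math.log(7)
        + (3 * math.sqrt(5) / 14) * math.log((7 + 3 * math.sqrt(5)) / (7 - 3 * math.sqrt(5))))
print(D, -math.log(2) + math.log(7))             # both are approximately 1.2528
print((D + E_SA) / beta, 3 * math.sqrt(5) / 7)   # both are approximately 0.9583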
http://arxiv.org/abs/2408.11419v1
20240821082354
Limit shapes and fluctuations for $(GL_n, GL_k)$ skew Howe duality
[ "Dan Betea", "Anton Nazarov", "Pavel Nikitin", "Travis Scrimshaw" ]
math.PR
[ "math.PR", "math-ph", "math.CO", "math.MP", "math.RT", "05A19, 60C05, 60G55" ]
D. Betea]Dan Betea [D. Betea]Université d’Angers, CNRS, LAREMA, SFR MATHSTIC, Angers, F-49045, France dan.betea@gmail.com https://sites.google.com/view/danbetea A. Nazarov]Anton Nazarov [A. Nazarov]Department of High Energy and Elementary Particle Physics, St. Petersburg State University, University Embankment, 7/9, St. Petersburg, Russia, 199034 and Beijing Institute of Mathematical Sciences and Applications (BIMSA), Bejing 101408, People’s Republic of China antonnaz@gmail.com http://hep.spbu.ru/index.php/en/1-nazarov P. Nikitin]Pavel Nikitin [P. Nikitin] Beijing Institute of Mathematical Sciences and Applications (BIMSA), Bejing 101408, People’s Republic of China pnikitin0103@yahoo.co.uk T. Scrimshaw]Travis Scrimshaw [T. Scrimshaw]Department of Mathematics, Hokkaido University, 5 Chōme Kita 8 Jōnishi, Kita Ward, Sapporo, Hokkaidō 060-0808 tcscrims@gmail.com https://tscrim.github.io/ [2010]05A19, 60C05, 60G55 § ABSTRACT We consider the probability measures on Young diagrams in the n× k rectangle obtained by piecewise-continuously differentiable specializations of Schur polynomials in the dual Cauchy identity. We use a free fermionic representation of the correlation kernel to study its asymptotic behavior and derive the uniform convergence to a limit shape of Young diagrams in the limit n,k→∞. More specifically, we show the bulk is the discrete sine kernel with boundary fluctuations generically given by the Tracy–Widom distribution with the Airy kernel. When our limit shape touches the boundary corner of the rectangle, the fluctuations with a second order correction are given by the discrete Hermite kernel, and we recover the discrete distribution of Gravner–Tracy–Widom (2001) restricting to the leading order. Finally, we demonstrate our limit shapes can have sections with no or full density of particles, where the Pearcey kernel appears when such a section is infinitely small. Limit shapes and fluctuations for (_n,_k) skew Howe duality [ August 26, 2024 =========================================================== § INTRODUCTION AND MAIN RESULTS Skew Howe duality <cit.> for (_n,_k) states that the natural action of the Lie group _n×_k on the exterior algebra ⋀(^n⊠^k) has a multiplicity-free decomposition into irreducible representations ⋀(^n⊠^k ) ⊕_λ V__n(λ) ⊠ V__k(λ'), where the sum is taken over all partitions λ contained inside of an n × k rectangle. By taking characters, we obtain the classical dual Cauchy identity ∑_λ⊆ k^n s_λ(x_1, …, x_n) s_λ'(y_1, …, y_k) = ∏_i=1^n ∏_j=1^k (1 + x_i y_j). The dual Cauchy identity then yields a probability measure on Young diagrams λ inside of the n× k rectangle, with λ' denoting a conjugate diagram, depending on the non-negative character specialization parameters X = (x_1, …, x_n) and Y = (y_1, …, y_k): μ_n,k(λ|{x_i}_i=1^n,{y_j}_j=1^k)=s_λ(x_1, …, x_n) s_λ'(y_1, …, y_k)∏_i=1^n ∏_j=1^k (1 + x_i y_j). The goal of this manuscript is to study the measure (<ref>) in the limit as n, k →∞. The character s_λ(x_1,…,x_n) is the Schur polynomial, which can be written as a sum over all semi-standard Young tableaux of the shape λ with entries at most n: s_λ(x_1,…,x_n)=∑_T∈ SSYT(λ|n)∏_i=1^nx_i^T_i, where T_i is number of boxes with the value i in the tableau T. 
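The dual Cauchy identity and the tableau sum above are easy to test numerically for small n and k. The following Python sketch is purely illustrative (the sizes n, k and the positive specialization values are arbitrary choices): it computes Schur polynomials by brute-force enumeration of semistandard tableaux and sums s_λ(x)s_λ'(y) over all partitions λ inside the n×k rectangle.

from itertools import product

def schur(lam, xs):
    # s_lambda(x_1, ..., x_n) as a sum over semistandard tableaux:
    # rows weakly increase left to right, columns strictly increase top to bottom
    n = len(xs)
    cells = [(r, c) for r, row_len in enumerate(lam) for c in range(row_len)]
    total = 0.0
    for vals in product(range(1, n + 1), repeat=len(cells)):
        T = dict(zip(cells, vals))
        rows_ok = all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in T if (r, c + 1) in T)
        cols_ok = all(T[(r, c)] < T[(r + 1, c)] for (r, c) in T if (r + 1, c) in T)
        if rows_ok and cols_ok:
            w = 1.0
            for v in vals:
                w *= xs[v - 1]
            total += w
    return total

def conjugate(lam, k):
    # conjugate partition, padded with zeros to length k
    return [sum(1 for part in lam if part > c) for c in range(k)]

n, k = 2, 2
xs, ys = [0.7, 1.3], [0.4, 1.1]
partitions = [lam for lam in product(range(k + 1), repeat=n)
              if all(lam[i] >= lam[i + 1] for i in range(n - 1))]
lhs = sum(schur(lam, xs) * schur(conjugate(lam, k), ys) for lam in partitions)
rhs = 1.0
for x in xs:
    for y in ys:
        rhs *= 1 + x * y
print(lhs, rhs)   # both sides of the dual Cauchy identity agree up to rounding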
An alternative way to prove the dual Cauchy identity (<ref>) is by using the dual Robinson–Schensted–Knuth (RSK) algorithm <cit.> (in the nomenclature of <cit.>) to show pairs of semistandard Young tableaux of shape λ with entries ≤ n and λ' with entries ≤ k are in one-to-one correspondence with the n × k matrices of zeros and ones. Thus, the dual RSK bijection can be used to sample random diagrams from the distribution (<ref>) (see Section <ref> for precise details). Moreover, this leads to another interpretation as induced from a certain measure on lozenge tilings of the “skew” hexagon glued from two trapezoids or on domino tilings of the Aztec diamond glued from two rectangular parts. Let us describe this relationship in more detail. Each Young tableau T corresponds to a tiling of a trapezoid or half hexagon in the following way. There are ℓ horizontal lozenges on vertical line number ℓ, their positions are encoded by the number of boxes in rows of the tableau T with values not larger than ℓ. In another combinatorial language, the positions are given by the Gelfand–Tsetlin pattern corresponding to T. Row lengths of the diagram λ correspond to positions on the rightmost vertical line. The positions on the conjugate diagrams are complementary, therefore we need to cut horizontal lozenges in triangles (on the rightmost vertical line) and glue two half hexagons together to obtain the measure (<ref>) as depicted in left panel of Fig. <ref>. Alternatively, we can obtain the measure (<ref>) by considering the tiling of the (n+k)× (n+k) Aztec diamond that is glued from the rectangular parts of sizes n× (n+k) and k× (n+k), depicted on the right panel of Fig. <ref>. Here we require that only three of four[While there are only horizontal and vertical dominoes, we consider the Aztec diamond to be a checkerboard, yielding the four types of dominoes once we take into account the checkerboard coloring.] possible types of dominoes are used to tile these rectangular parts. Reformulating this, we study the distribution on the gluing line in this paper. In contrast to ordinary tilings that are connected to Howe duality (corresponding to the action on Sym(^n ⊠^k) and yielding the Cauchy identity), these skew tilings appear to be less studied. This construction is a special case of the more general Schur process <cit.>, and as such, it is known to be a determinantal point process <cit.>. Contrary to the usual definition of Schur measure as μ(λ|X,Y)=s_λ(X)s_λ(Y)∏_i,j=1^∞(1-x_iy_j) that is defined starting with Cauchy identity for Schur polynomials, we do not need to require x_i, y_j to be less than 1 for all i,j. Moreover the transition to the Miwa variables p_1,p_2,…, where p_ℓ = p_ℓ(X) = 1/ℓ∑_i=1^∞ x_i^ℓ is the (rescaled) powersum symmetric function, is not needed in the skew case that we consider. Our first result is integral representation for the correlation kernel that we obtain using free fermions (see, e.g., <cit.> or <cit.>) following the approach in <cit.>. In this paper, we will always use = √(-1) and i as a variable (typically an indexing variable). Let a_i := λ_i - i + 1/2. The measure μ_n,k( λ |{x_i}_i=1^n,{y_j}_j=1^k) is a determinantal ensemble μ_n,k( λ |{x_i}_i=1^n,{y_j}_j=1^k)= [ (a_i,a_j) ]_i,j=1^n, where the correlation kernel 𝒦(m,m') has an integral representation (m,m')=∮∮_w<zdz/2π zdw/2π wK(z)/K(w) z^-mw^m'√(zw)/z-w, with K(z)=∏_i=1^n1/1-x_iz∏_j=1^k1/1+y_j/z, and the contour for z contains -y_j for all j and does not contain x_i^-1 for all i, and the contour for w encircles zero. 
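The double contour integral in the theorem can be evaluated numerically by parametrizing both contours as circles, since dz/(2πiz) becomes dθ/(2π). The Python sketch below is an illustration only (the parameters and the admissible radii are arbitrary assumed values, not from the paper); it checks that the resulting one-point function sums to n over the window [-n+1/2, k-1/2], which contains all n particle positions a_i=λ_i-i+1/2.

import numpy as np

xs = [0.6, 0.3]   # x_1, ..., x_n (arbitrary positive values)
ys = [0.8, 0.5]   # y_1, ..., y_k
n, k = len(xs), len(ys)

def K(z):
    val = 1.0 + 0.0j
    for x in xs:
        val /= 1.0 - x * z
    for y in ys:
        val /= 1.0 + y / z
    return val

def kernel(m, mp, Rz=1.2, Rw=0.1, N=512):
    # the z-circle of radius Rz encloses every -y_j but no 1/x_i; the w-circle encloses only 0
    th = 2.0 * np.pi * np.arange(N) / N
    z = Rz * np.exp(1j * th)
    w = Rw * np.exp(1j * th)
    Kz = np.array([K(t) for t in z])
    Kw = np.array([K(t) for t in w])
    total = 0.0j
    for a in range(N):
        # z^{-m} w^{m'} sqrt(zw) combines into integer powers because m, m' are half-integers
        total += np.sum(Kz[a] / Kw * z[a] ** (0.5 - m) * w ** (mp + 0.5) / (z[a] - w))
    return (total / N ** 2).real

window = [j + 0.5 for j in range(-n, k)]     # all admissible particle positions
print(sum(kernel(m, m) for m in window))     # expected particle number, approximately n = 2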
We are interested in the limit n,k→∞ such that limk/n=c. We need a way to specify the parameters x_i,y_j in a meaningful way for the limiting procedure to work. Since Schur polynomials are symmetric, we can always assume the sequences {x_i}_i=1^n and {y_j}_j=1^k to be non-decreasing. We can also assume x_i,y_j≠ 0, since s_λ(x_1,…,x_n,0,…,0) = s_λ(x_1,…, x_n). (Note that lozenge tilings of a skew hexagon and domino tilings of Aztec diamond glued from two rectangles depend upon the order of specialization parameters as each tiling correspond to a monomial in Schur polynomial; contrast Fig. <ref> with Fig. <ref> below.) We derive the bulk asymptotics of the correlation kernel and demonstrate it convergence to the discrete sine kernel. Assume that f,g [0, 1] →_≥ 0 are piecewise ^1 functions and f(s) > 0, g(s) ≥ 0. Assume the equation ∫_0^1f(s)/(1-f(s)z)^2 - c ∫_0^1g(s) /(z+g(s))^2 = 0 has real roots z^(i)∈⊔{∞} for i = 1, …, m. Set t^(i) = ∫_0^1f(s)z^(i)/1 - f(s)z^(i) + c ∫_0^1g(s)/z^(i)+g(s), and without loss of generality, we take -1 =: t^(0)≤ t^(1)≤ t^(2)≤⋯≤ t^(m)≤ t^(m+1) := c. Then for all a = 0, …, m the equation ∫_0^1f(s)z/1 - f(s)z + c ∫_0^1g(s)/z+g(s) - t = 0 either has exactly two complex conjugate roots z_1(t),z_2(t)= z_1(t) for all t∈ (t^(a), t^(a+1)) or only real roots for all t ∈ (t^(a), t^(a+1)). Assume that limk/n = c as n,k→∞. Assume a is such that we have two complex conjugate roots in (t^(a), t^(a+1)), which we call a support interval. Then for t ∈ [t^(a),t^(a+1)] ⊆ [-1,c], integers l,l', and specializing the parameters x_i = f(i/n), y_j = g(j/k), we have lim_n→∞𝒦(nt+l,nt+l') = sin( πρ(t) · (l-l') )π(l-l') if l ≠ l', ρ(t) if l = l', where ρ(t) = 1/π z_1(t) is given by the argument of the solution z_1(t) of Equation (<ref>) as a function of t. Since we can rearrange the specialization parameters to be non-decreasing, we could also assume our functions f, g are non-decreasing. We will generally assume that the intervals [-1, t^(1)] and [t^(m), c] will only contian real roots. We note there is a canonical way to extend the limit density ρ(t) as either 0 or 1 for these intervals that only has real roots, which we will make precise in Section <ref>. We will mostly concentrate on the case where there is a single support interval as the analysis for each (open) support interval is the same. For convenience in this case, we will denote the support interval (t_-, t_+) and hence z_- := z^(1), z_+ := z^(2) be the corresponding real roots of (<ref>). Note that if conditions on f, g are weakened, Equation (<ref>) can have several pairs of complex conjugate roots or a pair of complex conjugate roots (and several real roots). In these cases, careful analysis of the integration contours in (<ref>) is required. Hence, our results are generic but not very explicit, but these conditions are satisfied in many natural examples of specializations. Indeed, in examples we present in this paper in Sections <ref> and <ref>, we only have one pair of roots (subsequently, in each continuous section for the examples in Section <ref>). Furthermore, it will be possible to deform the integration contours in such a way that only a single pair of complex conjugate roots contribute to the computations and the convergence of the correlation kernel to the discrete sine kernel is established. As a corollary, we demonstrate that random Young diagrams under the measure (<ref>) converge in probability (for fixed f and g) to a limit shape that is described by a solution of a certain equation. 
We use the assumptions and notation of Theorem <ref>. The upper boundary F_n of a rotated and scaled Young diagram λ converges pointwise in probability with respect to the probability measure (<ref>) to the limit shape on each support interval, as defined in Equation (<ref>), given by the formula Ω(u) = 1 + ∫_-1^u ( 1-2 ρ(t) ), where the limit density is ρ(t) given above. While Theorem <ref> proves weak convergence of the probability measures, one can consider the upper boundaries of the (rotated) random diagrams as random piecewise linear functions. It is natural to ask if this random functions converge to the limit shape in Theorem <ref> uniformly in probability. From the pointwise convergence to the limit shape in probability it is possible to deduce the uniform convergence by a general argument presented in <cit.> (see Section <ref>). This leads to the next corollary. Let F_n denote the upper boundary of a Young diagram λ rotated and scaled by 1/n and regarded as a function F_n(u) of u∈ [-1, c]. Then the functions {F_n}_n=1^∞ converge in probability with respect to the probability measure (<ref>) in the supremum norm ·_∞ to the limiting shape Ω(u) given by the formula (<ref>); that is, for any ε>0 lim_n→∞[sup_u|F_n(u)-Ω(u)|>ε]= 0. We also consider boundary asymptotics of the correlation kernel and demonstrate that in general it is described by the Airy kernel 𝒦_Airy; hence is is distributed according to the Tracy–Widom GUE distribution <cit.>, which we denote by F_ GUE. In particular, we have an analogue of the Baik–Deift–Johannson asymptotics <cit.> for the fluctuations around t_+ of the random diagram in the generic case. The realization of these fluctuations depend on the limit shape solutions, which is either λ_1 or the number of parts that are equal to k, both of which occur when t_+ < c. With setup as in Theorem <ref>, consider the case when t_+ < c, and the asymptotic regime m≈ t_+n+ξ n^1/3σ^-1, m'≈ t_+n+η n^1/3σ^-1 for some explicit constant σ as n→∞. Denote by the real root of Equation (<ref>) that corresponds to t_+, then lim_n→∞σ^-1n^1/3^m-m'𝒦(m,m')=𝒦_Airy(ξ,η) = ∬/(2π i)^2exp(ζ^3/3-ζξ)/exp(ν^3/3-νη)1/ζ-ν. Let L = λ_1, if Ω convex around t_+, n-|{i |λ_i = k}|, if Ω concave around t_+. Then we have lim_n →∞( L - t_+ n/σ^-1 n^1/3) = F_GUE (s). Similar result holds for left boundary t_- when t_-> -1. Note that the term ^m-m' does not contribute to correlation functions and gap probability as it is canceled in determinant computations. This applies to Pearcey and Hermite kernels discussed below as well. Our next result is a method to produce an the appearance of the Pearcey kernel. To do so, we need to consider the case where the limit density ρ(t) has multi-interval support of (t_-, t_+) and (t'_-, t'_+) with -1 < t_i < t_+ ≤ t_d ≤ t'_- < t'_+ ≤ c for some t_d. Note that on the intervals (t_+, t_d) and (t_d, t'_-) will have a density of 0 or 1, which should be the same, depending on the behaviors in each part. By choosing our f, g such that t_+ = t_d = t'_-, we produce the following behavior. The intersection point t_d corresponds to the higher order root of the action S(z,t)=lim_n,k→∞1/nln K(z)-tln z and local fluctuations near it are no longer described by the Airy kernel, but by the Pearcey kernel _Pearcey <cit.>. Assume we have two support intervals (t_-, t_d) and (t_d, t_+). Denote by the real root of Equation (<ref>) that corresponds to t_d. Consider the asymptotic regime m ≈ t_d n+ξ n^1/4σ^-1, m' ≈ t_d n+η n^1/4σ^-1 for some constant σ as n→∞. 
Then lim_n,k→∞σ^-1n^1/4^m-m'𝒦(m,m') = 𝒦_Pearcey(ξ,η) = ∬/(2π i)^2exp(ζ^4/4-ζξ)/exp(ν^4/4-νη)1/ζ-ν. Therefore, we have the Pearcey process any time two support intervals share a common boundary point. In addition, we conjecture that looking at the three-dimensional picture of the entire lozenge tiling (or Aztec diamond) at this point is described by the extended Pearcey kernel of <cit.>. There is a special case when the boundary asymptotics change. This occurs when the limit shape touches the corner of the diagram (up to a second order approximation), where it has (very) different behavior given by the discrete Hermite kernel ^_s(l,l') := 1/√(π(l-1)! (l'-1)!)∫_s^∞ e^-t^2/2_l-1(t)_l'-1(t) from <cit.>, where _l(t) is the (probabilist's) Hermite polynomial. Consider the asymptotic regime n→∞ with k = cn+s/τ√(n) + o(√(n)), where s is a parameter and τ=1/√(f_2^2+c g^-1_2^2) is a normalization constant. Take m = ⌊ cn+s/τ√(n)⌋-l+1/2 and m' = ⌊ cn+s/τ√(n)⌋-l'+1/2 so that l,l'∈. We assume ∫_0^1 f(t) = c∫_0^1/g(t). Alternatively, these conditions can be written as lim_n,k→∞k/n=c, ∑_i=0^n-1f(i/n)=c∑_j=0^k-1g(j/k)^-1+s/τ√(n). With this scaling and conditions we have full support case of t_+ = c and the correlation kernel converges to the discrete Hermite kernel with parameter s := s∫_0^1/g(t): lim_n→∞n^l-l'/2(m,m')=τ^l-l'√((l'-1)!/(l-1)!)^_s(l,l'). For Δ∈, the probability distribution of the length of the first row of the diagram is given by the determinant lim_n →∞ (λ_1 - n c ≤ -Δ) = _0 ≤ l, l' ≤Δ-1 [ δ_l, l' - ^_s(l,l') ] Similarly, if we have ∫_0^1/f(t)=c∫_0^1 g(t), then t_-=-1 and we have fluctuations described by the discrete Hermite kernel in the left corner of the rectangle. We also have the following conjecture characterizing the critical case. We phrase it only in terms of t_+, but due to the symmetry in the system, we would have the analogous statement for t_- for the boundary -1. The following are equivalent: * t_+ = c (that is, the limit shape has a support interval ending at the right boundary); * z_+ = 0; * Ω'(t_+) = 0; * ∫_0^1 f(s) = c∫_0^1/g(s). As evidence for Conjecture <ref>, when Ω is convex (resp. concave), all of the examples computed show that z_+ ≥ 0 (resp. z_+ ≤ 0), and we believe this is an equivalence that holds in general. Consequently, the critical case would correspond to when z_+ = 0, which is clearly a sufficient condition. Let us consider the boundary asymptotitcs in the corner with only a first order approximation, which amounts to taking s = s = 0. In this case, we show that discrete Hermite kernel _0^ becomes the discrete kernel from the critical case of Gravner–Tracy–Widom <cit.>. In fact, we show a stronger statement, that the matrices defining the kernels are equal up to a overall simple factor (that depends on the diagonal). With the scaling of Theorem <ref>, in the critical support case of t_+ = c and for Δ∈, we have lim_n →∞ (λ_1 - n c ≤ -Δ) = _0 ≤ i, j ≤Δ-1 [δ_i, j - K_ crit(i,j)] with the matrix K_ crit (i, j) = ∑_ℓ = 0^(Δ - j - 1)/21/2 π1/ℓ!sinπ (j-i)/2Γ(ℓ + j-i/2), if ℓ + j-i/2∉_≤ 0, 1/2(-1)^ℓ/ℓ! (i-j/2 - ℓ)!, if ℓ + j-i/2∈_≤ 0. We remark that we encounter the same phase transition that was observed in <cit.>. Indeed, one side of their transition is a deterministic regime as <cit.> only considers the behavior of λ_1, but this corresponds to λ_1 = k fixed. On the other hand, the fluctuations for the number of rows of length k are described by the Tracy–Widom distribution. Let us discuss how our results relate with the literature. 
If we take x_i = α and y_j = 1 for some positive constant α, then this case has been considered previously in <cit.>. We perform an explicit analysis of this example in Section <ref>, which we then extend to the piecewise constant case in Section <ref>. By using the relationship with Aztec diamonds <cit.>, the limit shape correlation kernel we derive has appeared in <cit.>. When specializing x_i = y_i = 1, we obtain the _n ×_k results in <cit.>, which were also previously known and studied (sometimes under different guises such as the Krawtchouk ensemble) in <cit.>, although this list is likely not exhaustive. For more precise details, see, e.g., <cit.>. For the principal specializations x_i = q^i-1, y_j=q^j-1 and x_i=q^i-1, y_j=q^-j+1, which correspond to exponential functions, we recover the q-Krawtchouk polynomial ensemble. We also take the limit q→ 1 in such a way that lim n(q-1) = γ, in which case the diagrams converge to the corresponding limit shape. In these cases the equations can be solved explicitly, so as a consequence of Theorem <ref>, we obtain another proof of the limit shapes from <cit.>, where they were derived using the q-difference equations for the q-Krawtchouk polynomials in <cit.>. For the usual Schur measure, the simplest way is to take one non-zero value in the sequence of Miwa variables p_ℓ(X) = 1/ℓ∑_i=1^∞ x_i^ℓ with p_1=ξ,p_2=p_3=…=0 to obtain the poissonization of the Plancherel measure <cit.>. This can be seen as letting x_i=ξ/n, y_j=ξ/k and taking the limit n,k→∞, which we can also undertake for the skew case. However, it produces the same results as the diagram does not “feel” the n× k rectangle it is confined in. We expect the discrete Hermite kernel to be universal for the fluctuations at the corner. As per Conjecture <ref>, we believe that for our class of functions f,g the limit shape must take a flat approach in the corner; that is, Ω'(c) = 0. However, if we allow f,g to grow to infinity, we can obtain a non-flat approach to the corner, which we demonstrate in Section <ref> by taking a generalized principal specialization with constant q. In this case, the limit shape is a linear function, and we conjecture that corner fluctuations are described by a q-analogue of the discrete Hermite kernel. The paper is organized as follows. In Section <ref> we discuss skew Howe duality, the dual Cauchy identity for _n×_k characters, and sampling of random Young diagrams with respect to the probability measure (<ref>). We also explain a combinatorial non-intersecting lattice path realization of the measure (<ref>) and its two graphic representations as domino tilings of the Aztec diamond with a gluing condition and as lozenge tilings of the hexagon with a gluing condition along the diagonal. In Section <ref> we explain the free fermionic representation for the ensemble and prove Theorem <ref>. Next, we discuss the asymptotics of our measure by splitting into three parts. The first part is Section <ref>, where we study bulk asymptotics of the correlation kernel and prove Theorem <ref> with Theorem <ref> and Corollary <ref>. Then in Sections <ref>, <ref>, <ref>, <ref>, we discuss the asymptotics of the correlation kernel near and at the boundary and prove Theorems <ref>, <ref>, <ref>, <ref>, respectively. In Section <ref>, we give a number of examples of our results for certain specializations: * constant (Section <ref>); * piecewise constant (Section <ref>); * general monomials (Section <ref>); * alternating parameters and the relation to symplectic Young diagrams (Section <ref>).
The relation of principal specialization of the _n×_k-characters to the q-Krawtchouk ensemble is discussed in Section <ref>. In particular, in Section <ref>, we provide an alternative proof of Corollary <ref> on the uniform convergence of Young diagrams to the limit shape for the principal specialization by a direct computation. In the limit for the q-Krawtchouk ensemble, we also require to take the limit q → 1 as n,k →∞, but we also consider a limiting case of this behavior with q being a constant in Section <ref>. Note, that q=const does not satisfy our general assumptions as there are no functions f(s) and g(s) corresponding to this specialization. Therefore Theorems <ref>, <ref>, <ref>, <ref>, <ref> are not applicable, but we still describe the limit shape and conjecture the fluctuations in the corner. We conclude by presenting some open problems related to the results of this paper in Section <ref>. § ACKNOWLEDGEMENTS The authors thank Daniil Sarafannikov for deriving formula (<ref>). The authors thank Jérémie Bouttier, Janko Gravner, Arno Kuijlaars, Nicolai Reshetikhin, Walter van Assche, and Anatoly Vershik for useful conversations. The authors thank Cesar Cuenca and Matteo Mucciconi for useful comments on an earlier draft. Dan Betea was supported by ERC grant COMBINEPIC No. 759702. Anton Nazarov was supported by the Russian Science Foundation under grant No. 21-11-00141. Travis Scrimshaw was partially supported by Grant-in-Aid for JSPS Fellows 21F51028 and for Scientific Research for Early-Career Scientists 23K12983. § SKEW HOWE DUALITY AND DUAL RSK ALGORITHM A partition λ = (λ_1, λ_2, λ_3, …) is a weakly decreasing sequence of nonnegative integers with only finitely many nonzero entries. We will consider our partitions to be given to their Young diagrams, which we draw using English convention. We let λ' denote the conjugate shape, given by reflecting over the y = -x line. A tableau T is a filling of the Young diagram of λ by nonnegative integers, and it is semistandard if the rows weakly increase from left-to-right and columns strictly increase from top-to-bottom. Define (T) = λ to be the shape of T. We can encode a basis element of ⋀(^n⊗^k) of the form (e_i_1⊗ e_j_1) ∧ (e_i_2⊗ e_j_2) ∧⋯∧ (e_i_ℓ⊗ e_j_ℓ) with (i_k, j_k) < (i_k+1, j_k+1) in lexicographic order for all 1 ≤ k ≤ℓ, as a {0,1}-matrix M by M_i_k,j_k = 1 for all k and 0 otherwise. Subsequently, we can represent M as the corresponding pair of sequences of row numbers (i_k)_k=1^ℓ and column numbers (j_k)_k=1^ℓ, or written as a biword: [ i_1 i_2 i_3 ⋯ i_ℓ; j_1 j_2 j_3 ⋯ j_ℓ ]. Next, we describe the dual RSK bijection <cit.> (see also <cit.>) that can be considered as a combinatorial realization the decomposition (<ref>) (see, e.g., <cit.>). We start with a pair of empty semistandard tableau (P_0, Q_0), and proceed inductively on m = 0, 1, …, ℓ as follows. Consider (P_m, Q_m), and we perform the following modified Schensted insertion algorithm <cit.> starting with j_m starting from the top row. To row r, we try to insert letter x as follows: * If x is strictly larger than every other letter in r (including if r is empty), we add x to the end of r and terminate. * Otherwise, find the smallest y ≥ x, replace the leftmost such occurrence in r, and insert y into to the row below r. Let P_m+1 be the resulting semistandard tableau, and define Q_m+1 as Q_m with adding a box with entry i_m to Q such that (Q_m+1)' = (P_m+1). 
This gives a bijection between the m × n {0,1}-matrices that index basis elements of ⋀(^n⊗^k) and pairs of semistandard tableau (P, Q) such that (P)' = (Q) with the entries in P (resp. Q) being at most n (resp. m). The tableau P (resp. Q) is known as the insertion tableau (resp. recording tableau). Using this, we can obtain a sampling algorithm for the random diagram with distribution (<ref>) with parameters x_1, …, x_n, y_1, …, y_k given as follows. We form the random n× k matrix M so that probability to have 0 at the position (i,j) is (1+x_iy_j)^-1 (thus it is 1 with probability x_i y_j/1 + x_i y_j). We then apply dual RSK to M and taking the shape (P) of the insertion tableau P (equivalently, of the recording tableau) gives the sampling algorithm. All of our random Young diagrams will be obtained using this sampling algorithm. § FREE FERMIONIC REPRESENTATION Another way to represent a partition λ = (λ_1, λ_2, …) is by using the coordinates a_i = λ_i - i + 1/2 on the shifted integer lattice := + 1/2. Furthermore, the sequences (a_i ∈)_i=1^∞ such that a_i > a_i+1 for all i and there exists an ℓ such that a_i = -i + 1/2 for all i > ℓ are in bijection with partitions; in particular, we have ℓ(λ) ≤ℓ. The set (a_i)_i=1^λ corresponding to a partition λ is known as the Maya diagram of λ. This can be visually constructed by putting a dot on each horizontal edge of the conjugate Young diagram (including all the infinite number of edges at the top), then rotating it so that the corner points down touching at 0 ∈ (known as Russian convention), and then projecting onto the line. See Fig. <ref> for an example. We want to consider the elements in the Maya diagram as fermions, particles where no two occupy the same position. Thus we encode the positions of particles as basis vectors in the infinite wedge space ⋀[]. This has a subspace V called fermionic Fock space defined as the span of { v_a_1∧ v_a_2∧⋯| a_1 > a_2 > ⋯ and there exists ℓ, C ∈ such that a_i = -i + C + 1/2 for all i > ℓ}. In particular, this gives us the classical correspondence between Maya diagrams (a_i)_i=1^∞ and the subspace of vectors in V_0 ⊆ V spanned by those with C = 0 above, and for a partition λ, we write the corresponding basis element as |λ⟩. This space V is an irreducible representation of the infinite dimensional Clifford algebra generated by {ψ_i, ψ_i^†}_i ∈ that satisfy the canonical commutation relations: ψ_i ψ_j + ψ_j ψ_i = ψ^†_i ψ^†_j + ψ^†_j ψ^†_i = 0, ψ_i ψ^†_j + ψ^†_j ψ_i = δ_ij, where the action on V is given by ψ_i · (v_i_1∧ v_i_2∧⋯) = v_i ∧ v_i_1∧ v_i_2∧⋯, ψ_i^†· (v_i_1∧ v_i_2∧⋯) = (-1)^j-1 v_i_1∧⋯∧v_i_j∧⋯ if i_j = i, 0 otherwise, where v_i_j denotes that the vector is not present. Because of this action, the operators ψ_i and ψ_i^† are known as elementary (fermionic) creation and annihilation operators, respectively. There are also the current operators {α_ℓ}_ℓ∈ defined by[Special care is needed for the case ℓ = 0, where we need a normal ordering on the product. The result is that α_0 returns the value C of any basis vector of V. However, we do not use α_0 here; so we omit the normal ordering.] α_ℓ=∑_j ∈ψ_j-ℓψ^†_j, and these satisfy commutation relations of Heisenberg algebra [α_m, α_ℓ] = mδ_m+ℓ, 0. Equation (<ref>) is one half of the boson-fermion correspondence, and V_0 becomes a representation of the Heisenberg algebra. For more information, see, e.g., <cit.> or <cit.>. 
There is an anti-involution on the Clifford algebra defined by ψ_i ⟷ψ_i^†, which also sends α_ℓ⟷α_-ℓ, that allows us to define a dual representation V^†. We will denote the dual basis element of |λ⟩ as ⟨λ|, and we denote the natural pairing by ⟨μ|λ⟩ = δ_λμ. Furthermore, this pairing has the property that for any operator Ψ, the notation ⟨μ|Ψ|λ⟩ = (⟨μ|Ψ) |λ⟩ = ⟨μ| (Ψ|λ⟩) is unambiguous. This can be extended to all of V, but we will only be concerned with V_0. Next, we define generating series that are formal fermion fields ψ(z) = ∑_j ∈ψ_j z^j, ψ^†(w) = ∑_j ∈ψ^†_j w^-j. Let X = (x_1, x_2, …) and Y = (y_1, y_2, …). We also define half-vertex operators as Γ_±(X) = exp(∑_ℓ=1^∞p_ℓ(X)α_±ℓ), where p_ℓ(X) = 1/ℓ∑_i=1^∞ x_i^ℓ are the Miwa variables, which are a rescaled version of the powersum symmetric functions. Note that the Clifford algebra anti-involution sends Γ_+(X) ⟷Γ_-(X). From the definition and Heisenberg relations (<ref>), we have [Γ_+(X), Γ_+(Y)] = [Γ_-(X), Γ_-(Y)] = 0, Γ_+(X) |0⟩ = |0⟩, ⟨0|Γ_-(Y) = ⟨0|. Furthermore, by the Baker–Campell–Hausdoff formula, the half-vertex operators satisfy Γ_+(X) Γ_-(Y) = H(X; Y) Γ_-(Y) Γ_+(X), Γ_±(X) ψ(z) = H(X; z^±) ψ(z) Γ_±(X), Γ_±(X) ψ^†(w) = H(X; w^±1)^-1ψ^†(w) Γ_±(X), where H(X; Y) = ∏_i,j1/1 - x_i y_j, E(X; Y) = ∏_i,j (1 + x_i y_j). We can write Schur polynomials as the matrix coefficients (see, e.g., <cit.>) ⟨0|Γ_+(X) |λ⟩ = s_λ(X) by using Wick's theorem and the Jacobi–Trudi formula. From the Clifford algebra anti-involution (or duality), we also have ⟨λ|Γ_-(X) |0⟩ = s_λ(X). There is an involution on symmetric functions defined by ω s_λ(Y) = s_λ'(Y), which sends p_ℓ(Y) ⟷ (-1)^ℓ-1 p_ℓ(Y). If we formally apply ω to the half-vertex operator Γ_±(Y), we obtain the half-vertex operator Γ'_±(Y) := exp(∑_ℓ=1^∞(-1)^ℓ-1 p_ℓ(Y) α_±ℓ) = Γ^-1_±(-Y). By the dual Jacobi–Trudi formua and Wick's theorem, we have ⟨0|Γ'_+(Y) |λ⟩ = s_λ'(Y). Note that (<ref>) with (<ref>) implies the nontrivial relation Γ'_+(Y) |λ⟩ = Γ_+(Y) |λ'⟩. Next we have the dual Cauchy identity by ∑_λ s_λ(X) s_λ'(Y) = ∑_λ⟨0|Γ_+(X) |λ⟩·⟨λ|Γ'_-(Y) |0⟩ =⟨0|Γ_+(X) Γ'_-(Y) |0⟩ = ⟨0|Γ_+(X) Γ_-^-1(-Y) |0⟩ = H(X; -Y)^-1⟨0|Γ_-^-1(-Y) Γ_+(X) |0⟩ = E(X; Y) ⟨0|0⟩ = E(X; Y). The correlation kernel is then obtained by commuting Γ'_- and fermionic operators ψ_m in the correlator ⟨0|Γ_+(t) ψ_mψ^†_m'Γ'_-(t') |0⟩ as (m,m') = ∮∮_w < z/2π z/2π wK(z)/K(w) z^-mw^m'√(zw)/z-w, where the contour for z contains -y_j for all j and does not contain x_i^-1 for all i, and the contour for w encircles zero and K(z) = ∏_i=1^n1/1-x_iz∏_j=1^k1/1+y_j/z. Indeed, we first compute ⟨0|Γ_+(X) ψ(z) ψ^†(w) Γ'_-(Y) |0⟩ = ⟨0|Γ_+(X) ψ(z) ψ^†(w) Γ_-^-1(-Y) |0⟩ = H(X; -Y)^-1H(X; z) H(-Y; z^-1)/H(X; w) H(-Y; w^-1)⟨0|ψ(z) ψ^†(w) |0⟩ = H(X; -Y)^-1H(X; z) H(-Y; z^-1)/H(X; w) H(-Y; w^-1)∑_ℓ=0^∞ z^-1/2-ℓ w^1/2+ℓ = H(X; -Y)^-1H(X; z) H(-Y; z^-1)/H(X; w) H(-Y; w^-1)√(zw)/z - w. To obtain the kernel, we use the above computation with the residue theorem: (m,m') = E(X; Y)^-1⟨0|Γ_+(t) ψ_mψ^†_m'Γ'_-(t') |0⟩ = E(X; Y)^-1∮∮_w < z/2π z/2π w z^-m w^m'⟨0|Γ_+(t) ψ(z) ψ^†(w) Γ'_-(t') |0⟩, where we note that H(X; -Y)^-1 = E(X; Y) is our normalization factor. We thus complete the proof of Theorem <ref>. § ASYMPTOTICS §.§ Bulk and frozen regions In this subsection we prove Theorem <ref>, Theorem <ref> and Corollary <ref>. Assume that the parameters x_i,y_j are given by x_i=f(i/n), y_j=g(j/k), with f,g [0, 1] →_≥ 0 being piecewise ^1 nonnegative functions f(s) > 0, g(s) ≥ 0. Consider the limit n,k→∞ such that limk/n=c. 
Take m=⌊ nt⌋+l, m'=⌊ nt⌋+l' and denote by S_n(z,t) the following expression S_n(z,t)=-1/n∑_i=1^nln(1-f(i/n) z)-1/n∑_j=1^kln(z+g(j/k))+k-⌊ nt⌋/nln z. The correlation kernel is then written as (m,m') =∮∮_w<zdz/2π zdw/2π w e^n(S_n(z,t)-S_n(w,t))√(zw)/z-wz^-lw^l'. Define the action as the limit of S_n(z,t) as n→∞, which is given by the (Riemann) integral S(z)=-∫_0^1ln(1-f(s)z)-c∫_0^1ln(1+g(s)/z) - t ln z. We choose our branch cuts of the logarithms such that the derivative of S(z) has branch cuts [-max g,-min g] and [(max f)^-1, (min f)^-1]. Then 1/nln K(z)-tln z=S(z)+𝒪(1/n) and the limit shape of Young diagrams with respect to the distribution (<ref>) is determined by the limit density ρ(t)=lim_n,k→∞(nt,nt) = lim_n,k→∞∮∮_w<zdz/2π zdw/2π w e^n(S(z)-S(w))√(zw)/z-w(1+o(1)). Analysis as in <cit.> demonstrates that only critical points of the action (the solutions of ∂ S(z)=0) contribute to the integral (<ref>). Let us demonstrate that the equation z∂_zS(z) = ∫_0^1f(s)z/1 - f(s)z + c ∫_0^1g(s)/z+g(s)-t = 0, has either two complex-conjugate roots z_1,z_2 with z_2=z̅_1, in which case the contours should be deformed as in Fig. <ref>(left), or only real roots. Consider the approximation to Equation (<ref>) for finite n, which from (<ref>) reads z∂_zS_n(z,t)=1/n∑_i=1^n1/1-f(i/n) z+1/n∑_j=1^kg(j/k)/z+g(j/k)-n+⌊ nt⌋/n=0. Equation (<ref>) has at most n+k real roots. We want to consider the (real-valued) function T_n(z)=1/n∑_i=1^n1/1-f(i/n) z+1/n∑_j=1^kg(j/k)/z+g(j/k). The function T_n(z) has poles at {f(i/n)^-1}_i=1^n and {-g(j/k)}_j=1^k. If z→ f(i/n)^-1 from the right (resp. left), then T(z)→-∞ (resp. T_n(z)→+∞). So the horizontal line y=⌊ nt⌋/n+1 intersects the graph of T_n(z) at least once on each interval (f((i+1)/n)^-1,f(i/n)^-1), and similarly for the intervals (-g((j+1)/k),-g(j/k)). Therefore Equation (<ref>) has at least n+k-2 real roots and at most one pair of complex conjugate roots. We now show the same pattern holds in the limit when n,k→∞. Fix some t ∈ [-1, c]. Set T_n(z) = t+1 - T_n(z). If T_n(z)=0 has a pair of complex conjugate roots z_0^(n),z̅_0^(n), we can represent it in the form T_n(z) = (t+1)(-1)^nf(1)∏_i=1^n-1f(i/n) (z-z_i) ∏_j=1^k-1 (z-w_j)(z-z_0^(n))(z-z̅_0^(n))∏_i=1^n(1-f(i/n) z)∏_j=1^k(z+g(j/k)) , where z_i lies between 1/f(i/n) and 1/f(i+1/n), w_j lies between -g(j/k) and -g((j+1)/k) and z_0^(n) > 0. Fix ϵ>0 and set z > ϵ, z^-1<-ϵ . First consider the contributions with the function g, which is bounded, since g is piecewise ^1. Let us estimate the absolute value of the corresponding contribution when k is large enough: ln|∏_j=1^k-1z-w_j/z+g(j/k)| = ∑_j=1^k-1ln| 1 - w_j + g(j/k)/z+g(j/k)|≥∑_j=1^k-1ln| 1 - |g((j+1)/k) - g(j/k)/z+g(j/k)||. Now use that ln(1-x)> -(1+ϵ)x for small x<δ, take k such that |g((j+1)/k) - g(j/k)/z+g(j/k)| < δ and use that z+g(j/k)>ϵ to obtain the estimate ln|∏_j=1^k-1z-w_j/z+g(j/k)|≥ -(1+ϵ) ∑_j=1^k-1ϵ^-1|g(j+1/k)-g(j/k)| ≥ -(ϵ^-1+1) TV(g), where TV(g) < ∞ is the total variation of g on [0, 1]. To have a similar estimate for f, note that z_i^-1∈( f(i/n),f((i+1)/n) ), and write the contribution corresponding to f as (f(1/n)∏_i=1^n-1f((i+1)/n) z_i)·∏_i=1^n-1(z^-1-z_i^-1)/∏_i=1^n(z^-1-f(i/n)). The absolute value of first term is estimated by f(0), and the second term is estimated in the same way as the contribution of the function g. Also T_n(z) tends to zero when n→∞ for any root z of (<ref>). 
Therefore if there is such a root z_0^∞ with z_0^∞ > 0, then for any n sufficiently large we should have precisely one pair of complex conjugate roots for T_n(z)=0 and lim_n→∞ z_0^(n) = z_0^∞. Therefore T_n(z) is uniformly separated from zero if z is not sufficiently close either to the pair {z_0^(n), z̅_0^(n)} or to the real line. Note that this argument uses f(s)>0 for all s, but we can consider functions that go to zero as s goes to zero; that is, f(s) 0. Then more careful analysis is needed. In Section <ref>, we consider the power functions f(s) = s^m, m∈_≥0 and Equation (<ref>) becomes transcendental, but numerically we can see that we still have at most one complex conjugate pair of roots. Take a support interval [t_-,t_+]⊂ [-1,c] and denote by z_1 and z_2 = z̅_1 two complex conjugate roots of Equation (<ref>), then the correlation kernel converges to the discrete sine kernel lim_n,k→∞(nt+l,nt+l')=sinπρ(t)(l-l')/π(l-l') with ρ(t) = 1/π z_1 = 1/πarccos z_1/√(( z_1)^2+( z_1)^2) for z_1 = z_1 + z_1. The support of the density ρ contains the union of the support intervals. Any support interval [t_-,t_+] is determined by nonzero real roots z_±≠ 0 of the equation (z∂_z)^2S(z) = ∫_0^1f(s)z/1-f(s)z + ∫_0^1(f(s)z)^2/(1-f(s)z)^2 - c ∫_0^1g(s) z/(z+g(s))^2 = ∫_0^1f(s)z/(1-f(s)z)^2 - c ∫_0^1g(s) z/(z+g(s))^2 = 0. We substitute z_± into Equation (<ref>) and obtain t_±. If t tends to t_- or t_+, then the complex conjugate roots z_1,z_2 tend to z_- or z_+. When t is outside of the union of the support intervals, Equation (<ref>) has only real roots and in order for the formula in Theorem <ref> to hold we need to extend ρ(t) to either be 0 or 1. Near a support interval [t_-,t_+] we choose this extension depending on the values of z_- if t < t_- and z_+ if t > t_+. Specifically, we set ρ(t) = 1 if z_- < 0 (resp. z_+ < 0) and ρ(t) = 0 if z_- > 0 (resp. z_+ > 0). To establish the pointwise convergence in probability for F_n, we denote by N(m)=#{ℓ∈|λ⟩|ℓ>m}, the number of particles lying to the right of the position m. Then the expectation and variance of N(m) is expressed in terms of the correlation kernel as [N(m)] = _(m,∞), [N(m)] = _(m,∞)(-^2). Since the kernel is real and symmetric, the trace of its square is non-negative ^2≥ 0 and [N(m)]≤[N(m)]. In the asymptotic regime a_i=nu_i, setting N(nu)=1/nN(nu) we obtain [N(nu)] = 1/n^2[N(nu)] ≤1/n[N(nu)]. For the upper boundary of the rescaled rotated diagram, we have F_n(u) = u+1/n·#{ℓ∈λ|ℓ > nu} for u∈1/n(+1/2). Since we have the limit shape Ω(u) = lim_n→∞[F_n(u)] = u + lim_n→∞[N(nu)], we obtain [F_n(u)]≤1/n[N(nu)] ⟶ 0 as n→∞. Hence we have pointwise convergence in probability for F_n(u) and conclude the proof of the Theorem <ref>. To prove the uniform convergence of Corollary <ref>, we note that for the bounded interval I=[-1,c]⊂, we denote I_ε=I∩ε and by 1-Lipschitz property of F_n we have for each ε>0 (sup_u∈ IF_n(u)-Ω(u) >ε) ≤(sup_u∈ I_εF_n(u)-Ω(u) > ε/2). On the right hand side the supremum is computed over the finite set so the convergence to zero at each point u∈ I_ε implies the convergence of the supremum to zero. This leads to the convergence of the supremum norm over I to zero in probability. §.§ Edge boundary To study the boundary asymptotics and prove Theorem <ref>, we need to consider a critical value t_+ (or t_-) where the roots z_1,z_2 coincide. The integration contours then look like in Fig. <ref>(center). Denote the corresponding value of z by . Then near the double critical point we have S(z)=S()+1/6S”'()(z - )^3+𝒪((z - )^4). 
We assume that S”'()≠ 0, ≠ 0 and denote by σ a normalization constant σ=(2/S”'())^1/31/. We change the variables z,w to ζ,ν such that z = e^σζ n^-1/3≈ (1+σζ n^-1/3+⋯), w= e^σν n^-1/3≈ (1+σν n^-1/3+⋯) as n→∞ and consider the asymptotic regime m≈ t_+n+ξ n^1/3σ^-1, m'≈ t_+n+η n^1/3σ^-1. The correlation kernel is then expressed as (m,m')≈σ n^-1/3^m'-m∬/(2π i)^2exp(ζ^3/3-ζξ)/exp(ν^3/3-νη)1/ζ-ν, since n( S(z)-S(w) ) ≈1/6σ^3^3 S”'()(ζ^3-ν^3) = 1/6σ^3(.(z∂_z)^3S(z)|_z=)(ζ^3-ν^3) = 1/3(ζ^3-ν^3). Therefore the correlation kernel (<ref>) after multiplication by n^1/3σ^-1^m-m' converges to the Airy kernel lim_n→∞ n^1/3σ^-1^m-m'𝒦(m,m')=𝒦_Airy(ξ,η) = ∬/(2π i)^2exp(ζ^3/3-ζξ)/exp(ν^3/3-νη)1/ζ-ν. We are interested in the value of (z∂_z)^3S(z)|_z=, which is given by the integral (z∂_z)^3S(z)|_z==∫_0^1(2 f^2(s) ^2/(1-f(s) )^3+2c g(s) ^2/(+g(s))^3). If the integral in (<ref>) is zero, the normalization constant σ becomes infinite. For σ < ∞, there are two cases: * we have λ_1<k and then the limit shape Ω is convex near t_+; * we have multiple fully filled rows of length k and Ω is concave near t_+. We conclude that in the first case the normalized fluctuations of the first row of the random Young diagram λ_1-t_+n/σ^-1n^1/3 are described by the Tracy–Widom distribution with β=2. Similarly, in the second case when Ω is concave near t_+ the same distribution describes the normalized fluctuations of the first column of the diagram λ, the complement to λ inside n× k rectangle. Alternatively, we are looking at n - #{i |λ_i=k}. The computation around the other double critical point t_-, which is measuring the fluctuations of the first column of λ (or first row of λ), is similar. Informally, if σ = ∞, then the result corresponds to an infinitely-thin Tracy–Widom distribution; in practice, it means that the fluctuations are described another distribution. We have two possibilities, either S”'()=0 or =0. The former leads to the Pearcey kernel <cit.> or to higher order Airy kernels <cit.> if higher derivatives are also zero. The latter corresponds to the discrete Hermite kernel defined in <cit.>. We discuss both cases in the sequel. §.§ Two touching support intervals To establish Theorem <ref>, assume that we have two support intervals (t_-, t_d) and (t_d, t_+) with corresponding to t_d. Then we have ∂_z^3S(z,t_d)|_z==0 as otherwise on the left (resp. right) of t_d, we have the convergence of the kernel to 𝒦_Airy(ξ,η), (resp. 𝒦_Airy(-ξ,-η)), but the Airy kernel is not symmetric. Next, we proceed similarly to the Airy case. We consider the asymptotic regime m ≈ t_d n+ξ n^1/4σ^-1, m' ≈ t_d n+η n^1/4σ^-1 for some constant σ as n→∞. As previously discussed, the first three derivatives of the action with respect to z at t=t_d and z= are zero. Hence, for nonzero fourth derivative ∂_z^4S(z,t_d)|_z=≠ 0 we set σ=(6/∂_z^4S(z,t_d)|_z=)^1/41/ and change the variables z= e^σζ n^-1/4≈(1+σζ n^-1/4+…), w= e^σν n^-1/4≈ (1+σν n^-1/4+…). Then in our asymptotic regime the normalized correlation kernel converges to the Pearcey kernel (<ref>) since n(S(z)-S(w))≈1/24^4σ^4(∂_z^4S(z,t_d)|_z=)(ζ^4-ν^4)=1/4(ζ^4-ν^4). §.§ Near the corner To obtain the asymptotic behavior near the corner of the rectangle (Theorem <ref>), we again use the integral representation of the correlation kernel (<ref>). In this case, there is an obstruction to our asymptotic analysis of the generic case as the double critical point z_+ of the action under consideration is z_+ = 0. As a result, the constant σ = ∞ and the integral in (<ref>) is equal to zero. 
The behavior near the corner is described by the discrete Hermite kernel introduced in <cit.>, as demonstrated for the constant specialization in <cit.>. Here we derive this result from the integral representation for the correlation kernel (<ref>) and generalize the derivation to arbitrary specializations. Consider the asymptotic regime n→∞ with k = cn+s/τ√(n) + o(1), where s is a parameter and τ is a normalization constant that we will choose later. Take m = ⌊ cn+s/τ√(n)⌋-l+1/2 and m' = ⌊ cn+s/τ√(n)⌋-l'+1/2 so that l,l'∈. We assume that the condition given by Equation (<ref>) is satisfied. Alternatively, these conditions can be written as lim_n,k→∞k/n=c, ∑_i=0^n-1f(i/n)=c∑_j=0^k-1g(j/k)^-1+s/τ√(n). Under these assumptions, we have ln K(z)≈ n[-∫_0^1ln(1-f(t)z)-c∫_0^1ln(1+g(t)/z)+1/√(n)s/τln z -1/√(n)s/τ∫_0^1ln(z+g(t))], where K(z) is the function from (<ref>). Setting t_+=c in (<ref>) transforms the action into S(z)= -∫_0^1ln(1-f(t)z)-c∫_0^1ln(z+g(t)). Taking into account the cancellation of the terms s/τ√(n)ln z in ln K(z) and in z^-m, we can write the correlation kernel as (l,l') = ∮∮_w<zdz/2π zdw/2π w e^ϕ(z,w)z^lw^-l'+1/z-w, ϕ(z,w) := n(S(z)-S(w))-√(n)s/τ[∫_0^1ln(z+g(t))-∫_0^1ln(w+g(t))]. We are interested in the vicinity of the critical point z=0 as demonstrated in Fig. <ref>(right). Thus, we consider the change of variables z=τ/√(n)ζ, w=τ/√(n)ν. For finite z and w, we then can approximate the logarithms under the integral in the exponent as ln(z+g(t))≈ln(g(t))+τ/√(n) g(t)ζ+𝒪(1/n). The first derivative of the action is S'(0)=0 due to condition (<ref>); therefore we can use the approximation S(z) ≈ S(0)+S”(0)/2z^2 = S(0)+S”(0)τ^2/2nζ^2. The constant contributions are cancelled. We now specify τ=1/√(S”(0)) and note that S”(0)=∫_0^1 f^2(t) + c∫_0^1 g^-2(t) = f_2^2+c g^-1_2^2. Next, define s := s∫_0^1/g(t), and hence the correlation kernel takes the form _s(l,l') := (τ/√(n))^l-l'∮∮_ν<ζ/2π/2π e^ζ^2/2-ζ s+ν s-ν^2/2ζ^l-1ν^-l'/ζ-ν. For l≠ l', the integrals over ζ and ν can be decoupled by integrating by parts and using ( ζ∂/∂ζ+ν∂/∂ν+1)ζ^l-1ν^-l' = (l-l')ζ^l-1ν^-l', (ζ∂/∂ζ+ν∂/∂ν+1)e^ζ^2/2-ζ s+ν s-ν^2/2/ζ-ν = (ζ+ν-s)e^ζ^2/2-ζ s+ν s-ν^2/2. Hence, the kernel can be written as _s(l,l')=(τ/√(n))^l-l'∮∮_ν<ζ/2π/2π e^ζ^2/2-ζ s+ν s-ν^2/2ζ^l-1ν^-l'(ζ+ν-s)/l-l', where the integration contour over ν can be considered as an arbitrary small counterclockwise circle around 0, while the integration contour over ζ includes the ν-contour and can be extended along the imaginary axis from -∞ to ∞. Then integral over ν is the standard contour integral representation of Hermite polynomial _l'-1(s)=(l'-1)!/2π∮ e^ν s -ν^2/2ν^-l'. The integral over ζ can be brought to a real integral representation for the Hermite polynomials _l(s) = 1/√(2π)∫_-∞^∞ (s+ y)^le^-y^2/2 = 1/√(2π)∫_-∞^∞dy (y)^le ^-y^2/2-ys+s^2/2 by the change of variable ζ=y and multiplication by e^s^2/2: ∫_-∞^∞ e^ζ^2/2-ζ sζ^l = √(2π)e^-s^2/2_l(s). Using the recurrence relation for the Hermite polynomials s _l(s)=_l+1(s)+l _l-1(s), we can write the kernel as _s(l,l')=(τ/√(n))^l-l'1/√(2π) e^-s^2/21/(l'-1)!_l(s) _l'-1(s) - _l-1(s) _l'(s)/l-l'. This kernel coincides with the discrete Hermite kernel introduced in <cit.> as ^_s(l,l')=1/√(2π (l-1)! (l'-1)!) e^-s^2/2_l(s) _l'-1(s) - _l-1(s) _l'(s)/l-l' up to a factor (τ/√(n))^l-l'√((l'-1)!/(l-1)!), which is cancelled in the determinant [δ_i,j-𝒦(i,j)]_i,j and therefore does not contribute to the correlation functions. 
The case l=l' can be obtained by taking the limit l'→ l and using l'Hôpital's rule _s(l,l)= 1/√(2π) e^-s^2/21/(l-1)!([d/dl_l(s)] _l-1(s) - [d/dl_l-1(s)] _l(s)). As we have e^-s^2/2/√(2)_l(s) _l'-1(s) - _l-1(s) _l'(s)/l-l'=∫_s^∞ e^-t^2/2_l-1(t)_l'-1(t) by <cit.>, taking the limit l' → l we see that the correlation kernel for l=l' can be written in the form: _s(l,l)= 1/√(2π)1/(l-1)!∫_s^∞ e^-t^2/2_l-1^2(t). Then the probability of the first row to have length no more than k-Δ is given by the gap probability formula lim_n,k→∞ (λ_1 - k ≤ -Δ) = [δ_ij-_s(i,j)]_i,j=0^Δ-1. In Fig. <ref>, we present the comparison of this discrete distribution for various values of s to samplings by the dual Robinson–Schensted–Knuth algorithm. §.§ At the corner The case when the limit shape ends exactly at the corner corresponds to s = 0. This was considered in <cit.> for the constant specialization f(s) = c and g(s) = 1, where they called this regime “critical” as it represented a phase transition. Their derivation relies on the results of Borodin and Okounkov <cit.> and essentially differs from the previous section only by expanding 1/z-w into series and integrating by terms instead of integration by parts. In particular, the assumption (<ref>) holds automatically. Their result <cit.> in the notation of present paper is lim_n →∞ (λ_1 - n c ≤ -Δ) = _0 ≤ i, j ≤Δ-1[ δ_i, j - ^Δ_ crit(i,j) ] with the entries of the matrix given by ^Δ_ crit (i, j) = ∑_ℓ = 0^(Δ - j - 1)/21/2 π1/ℓ!sinπ (j-i)/2Γ(ℓ + j-i/2) if ℓ + j-i/2∉_≤ 0, 1/2(-1)^ℓ/ℓ! (i-j/2 - ℓ)! if ℓ + j-i/2∈_≤ 0. We wish to compare (<ref>) with (<ref>) and show the determinants give the same gap probability formula. We will show this by essentially identifying the matrices, which we make precise as follows. For all j - i/2∉ and Δ > 0, we have 2^(i-j)/2^Δ_ crit(Δ - 1 - i, Δ - 1 - j) = _0(i, j). We remark that Theorem <ref> implies the determinants are equal as the factor 2^(i-j)/2 will not contribute to the determinant and we will show the diagonal entries of both matrices are all 1/2. We begin by analyzing the kernel ^Δ_ crit in (<ref>). If j-i/2∈_>0, then we clearly get ^Δ_ crit(i,j) = 0. For the diagonal entries i = j, the only term that is nonzero is the ℓ = 0 term, and so ^Δ_ crit (i, j) = 1/2, which also equals the diagonal entries of the matrix [δ_ij - _ crit^Δ(i,j)]_i,j. Next we consider when j-i/2∈_<0, and for simplicity we define A := i-j/2. By the binomial theorem, we have ^Δ_ crit (i, j) = 1/2∑_ℓ=0^A (-1)^ℓ/ℓ! (A - ℓ)! = 1/2A!∑_ℓ=0^A (-1)^ℓ A!/ℓ! (A - ℓ)! = (1 + (-1))^A/2A! = 0. Next for B := j-i/2∉, set B' := B - 1/2 = j-i-1/2, and by well-known properties of the Gamma function, we have sin(π B) Γ(ℓ+B)/πℓ! = (-1)^B'/√(π)ℓ!(2(ℓ+B)-2)!!/2^ℓ+B' if ℓ + B > 0, (-2)^-ℓ-B'/(-2(ℓ+B))!! if ℓ + B < 0, where the double factorial is defined as n!! := ∏_k=0^⌊ n/2 ⌋ (n - 2k) and by convention (-1)!! = (0)!! = 1. On the other hand, we want to examine the (modified) discrete Hermite kernel (<ref>). Note that we shift the indices to [0, Δ - 1] instead of in [1, Δ] to more closely match the above determinant formula. We first simplify the diagonal entries (<ref>) (with replacing i ↦ i+1 for the change in indexing convention), which when s = 0 becomes _0(i, i) = 1/√(2π)1/i!·_i(t)_i(t)_/2 = 1/√(2π)1/i!·i! √(2π)/2 = 1/2, where ··_ denotes the inner product in which the Hermite polynomials are orthogonal. 
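Before turning to the proof, the comparison can be sanity-checked numerically. The Python sketch below is illustrative only and tests the gap-probability determinants (which is what enters the distribution of λ_1) rather than the entrywise identification, since diagonal factors cancel in the determinant; both matrices are built directly from the formulas above for small Δ.

import math
import numpy as np

def double_factorial(m):
    return 1.0 if m <= 0 else m * double_factorial(m - 2)

def hermite_at_zero(l):
    # probabilist's Hermite polynomial He_l evaluated at 0
    return 0.0 if l % 2 else (-1.0) ** (l // 2) * double_factorial(l - 1)

def k_hermite(i, j):
    # discrete Hermite kernel at s = 0, indices shifted to start at 0
    if i == j:
        return 0.5
    num = hermite_at_zero(i + 1) * hermite_at_zero(j) - hermite_at_zero(i) * hermite_at_zero(j + 1)
    return num / (math.sqrt(2.0 * math.pi) * math.factorial(j) * (i - j))

def k_crit(i, j, delta):
    total = 0.0
    for l in range((delta - j - 1) // 2 + 1):
        t = l + (j - i) / 2.0
        if t <= 0 and t == int(t):
            total += 0.5 * (-1) ** l / (math.factorial(l) * math.factorial(int(-t)))
        else:
            total += math.sin(math.pi * (j - i) / 2.0) * math.gamma(t) / (2.0 * math.pi * math.factorial(l))
    return total

for delta in range(1, 7):
    a = np.array([[(i == j) - k_crit(i, j, delta) for j in range(delta)] for i in range(delta)])
    b = np.array([[(i == j) - k_hermite(i, j) for j in range(delta)] for i in range(delta)])
    print(delta, np.linalg.det(a), np.linalg.det(b))   # the two determinants agree for every delta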
Now we assume i ≠ j, and so we want to compute _0(i, j) = 1/√(2 π) j!_i+1(0) _j(0) - _i(0) _j+1(0)/i-j, When i and j have the same parity, that is j - i/2∈, then _0(i, j) = 0 since H_ℓ(0) = 0 when ℓ is odd. Now we assume i and j have different parity, so j - i/2∉, and using _ℓ(0) = (-1)^ℓ/2(ℓ-1)!! for ℓ even, we compute _0(i, j) = (-1)^(j-i+1)/2/√(2 π) j! (i-j) i!! (j-1)!! if i is odd, (i-1)!! j!! otherwise. Summarizing the above, remains to show for all i and j such that j-i/2∉ that 2^(i-j-1)/2/2π√(2) (-1)^(i-j-1)/2∑_ℓ = 0^⌊ j/2 ⌋Γ(ℓ + i-j2)/ℓ! = (-1)^(j-i+1)/2/√(2 π) j! (i-j) i!! (j-1)!! if i is odd, (i-1)!! j!! otherwise. We can multiply both sides by (-1)^(j-i+1)/2√(2π) and note that (-1)^i-j+1 = 1 by our parity restriction. Next we split the problem into when i > j and i < j. Hence, Equation (<ref>) for i > j is equivalent to ∑_ℓ = 0^⌊ j/2 ⌋(2ℓ+i-j-2)!!/2^ℓℓ! = 1/j! (i-j) i!! (j-1)!! if i is odd, (i-1)!! j!! otherwise, and for i < j, we need to split the sum into two parts, where setting A := j-i+1/2, we will show below (Lemma <ref>) that ∑_ℓ = 0^A-1(-1)^ℓ+A/2^ℓℓ! (j-i-2ℓ)!! + ∑_ℓ=A^⌊ j/2 ⌋(2ℓ+i-j-2)!!/2^ℓℓ! = 1/j! (i-j) i!! (j-1)!! if i is odd, (i-1)!! j!! otherwise. In both (<ref>) and (<ref>), the right hand side becomes i!! (j-1)!!/j! (i-j) = i!!/j!! (i-j) (i odd), (i-1)!! j!!/j! (i-j) = (i-1)!!/(j-1)!! (i-j) (i even). We show the double factorial identities below, which completes the proof. For i > j ≥ 0 with i + j ≡ 1 2, we have ∑_ℓ = 0^j/2(2ℓ+i-j-2)!! j!! (i-j)/(2ℓ)!! = i!!, if i odd and j even, ∑_ℓ = 0^(j-1)/2(2ℓ+i-j-2)!! (j-1)!! (i-j)/(2ℓ)!! = (i-1)!!, if i even and j odd. We prove Equation (<ref>) by using simultaneous induction on i ↦ i + 2 and j ↦ j + 2 by ∑_ℓ = 0^j/2(2ℓ+i-j-2)!! j!! (i-j)/(2ℓ)!! = (i-j)(i-2)!! + j∑_ℓ = 0^(j-2)/2(2ℓ+i-j-2)!! (j-2)!! (i-j)/(2ℓ)!! = (i-j)(i-2)!! + j(i-2)!! = i!!, where the base case of j = 1 an arbitrary i has a single term with (i -2)!! · 1 · i = i!!. Equation (<ref>) is proved analogously, or one can note that it is the same as (<ref>) by adding 1 to i and j. Two noteworthy aspects of (<ref>) is that the right hand sides are independent of j and each term is easily seen to be a positive integer since 2ℓ≤ j. It would be interesting to have a bijective proof of these identities. Equation (<ref>) holds. The proof for the induction step is analogous to the proof of (<ref>) as A and the negative sum does not change. Therefore, we only need to show the base cases. We show the case for i = 1 and arbitrary (even) j as the other case is similar. Hence, setting J := j/2 ∈, Equation (<ref>) becomes ∑_ℓ = 0^J-1(-1)^ℓ+J/2^ℓℓ! (2J-2ℓ-1)!! + ∑_ℓ=J^J (2ℓ-2J-1)!!/2^ℓℓ! = 1/(2J)!! (1-2J) Next, we rewrite (<ref>) by multiplying both sides by (2J-1)!! and bringing the ℓ = J term to the right hand side to the equivalent identity ∑_ℓ = 0^J-1(-1)^ℓ+J/(2ℓ)!! (2(J-ℓ)-1)!! = -(2J-3)!!/(2J-2)!!, since 2^J J! = (2J)!!. It is convenient to introduce double factorial analogs of binomial coefficients, NK := N!!/K!! (N-K)!! with the convention that (NK) = 0 for all K < -1 and K > N + 1. In general, these are not integers, and we give the first few rows in the triangle in Table <ref>. Hence (<ref>) in our new notation becomes ∑_ℓ = 0^J-1 (-1)^ℓ+J2J-12ℓ = -(2J-3)!!/(2J-2)!!. One can immediately verify that an analog of the usual recurrence relation for binomial coefficients holds for all N > 0 and 0 ≤ K ≤ N: NK = N-2K-2 + N-2K. 
Therefore, by induction on J we have ∑_ℓ = 0^J-1 (-1)^ℓ+J2J-12ℓ = (-1)^J + ∑_ℓ = 1^J-2 (-1)^ℓ+J[ 2J-32(ℓ-1) + 2J-32ℓ] - 2J-12J-2 = (-1)^J + (-1)^J+1 + 2J-32J-4 - 2J-12J-2 = (2J-3)!!/(2J-4)!! - (2J-1)!!/(2J-2)!! = -(2J-3)!!/(2J-2)!!, where the second equality is by telescoping and the base case of J = 1 is simply -1 = -1. § EXAMPLES In this section, we present a number of specific examples of our results. The measure (<ref>) is invariant under the (simultaneous) rescaling x_i ↦ x_i/β, y_j ↦β y_j for any fixed β∈_>0. In particular, if y_1 > 0, we can normalize our measure so that y_1 = 1. §.§ Constant functions and Krawtchouk polynomials As the first example, we consider x_i = α and y_j = 1 for some fixed α∈_>0. This case was studied in <cit.> using saddle point analysis. Then the measure (<ref>) becomes μ_n,k(λ | α)=α^λ V__n(λ) V__k(λ')/(1 + α)^nk, Using the Weyl dimension formula, we can write the measure as μ_n,k(λ|α)=1/Z_n,k(α)∏_i<j(a_i-a_j)^2∏_l=1^nn+k-1a_lα^a_l, where Z_n,k(α) = (1+α)^nk. Setting α = p/1-p, we obtain the weight for the Krawtchouk polynomials: W_n,k(b) = √(Z_n,k(α)^-1)n+k-1b p^b (1-p)^n+k-1-b. By Remark <ref>, if we take y_j=β then rescale α↦α/β, we arrive again at the measure (<ref>). Next, under this specialization Equation (<ref>) becomes α z/1 - α z + c/z + 1 - t = 0, and Equation (<ref>) evaluates as α z/(1 - α z)^2 - c z/(z + 1)^2 = 0. Solving Equation (<ref>) for z, we obtain the roots z_0 = 0, z_± = α (c+1) ± (α + 1) √(α c)/α ( α c - 1). Substituting z_± into (<ref>), we obtain t_± = α (c-1) ± 2√(α c)/α + 1 as the end points for the limit shape, and we compute t_+ - t_- = 4√(α c)/α + 1. as the total length of the interval containing the limit shape. To compute the limit shape function, we solve (<ref>) for z, where we generically have two roots z_1,2 = α (c-1)+t(1-α) ∓√(4α(t+1)(t-c)+(α(c-1)+t(1-α))^2)/2α(t+1). For t ∈ (t_-,t_+), we have a pair of complex conjugate roots. Hence, we have z_1,2 = ±√(4α(t+1)(t-c)+(α(c-1)+t(1-α))^2)/2α(t+1), and therefore ρ(t) = 1/π z = 1/πarccos( α (c-1) + t(1-α)/2√(α(c-t)(t+1))) (t∈ [t_-,t_+]). The earliest paper known to the authors that contains this asymptotic result is <cit.>, where it was first obtained from the study of Krawtchouk polynomials. Next we want to compute the edge asymptotics. We substitute t_+ from (<ref>) to (<ref>) and obtain = α(c+1)-(α+1)√(α c)/α (α c-1), with S”'() = -2α^2(1+√(α c))^5/(α+1)^3(√(α)-√(c))√(c). For α, c > 0 we have σ=(α+1)c^1/6/α^1/6(√(c)-√(α))^2/3(1+√(α c))^2/3. Furthermore, we note that ρ(t_+) = 1/πarccos(√(c)-√(α)/√(c)-√(α)), and hence Ω'(t_+) = c-α/c-α by (<ref>). From this, it is easy to see that the limit shape ends on the upper (resp. lower) right boundary of the (tilted) rectangle if and only if c > a (resp. c < a) if and only if it is (locally) strictly concave (resp. convex). A similar analysis holds for the limit shape around t_-. We give an example of the distribution of the longest row in Fig. <ref>. Note that as α→ c then , z_- → 0, z_+ →2(c+1)/c^2-1, t_+→ c, Ω'(t_+) jumps to 0, and σ diverges. In this case, we have a different asymptotic description as the fluctuations depend on n very weakly and as n→∞ are described by the discrete distribution given by (<ref>). Hence, the probability of λ_1=n is 1/2, of λ_1=n-1 is 1/4+1/2π, of λ_1=n-2 is 1/8-1/8π and is much smaller for smaller values of λ_1 (cf. <cit.>). Now let us look at what happens if we assume t_+ = c. From (<ref>), this means that we must have α + c - 2√(α c) = (√(α) - √(c))^2 = 0, which only occurs if α = c. 
On the other hand, (<ref>) implies we have z_- = 0 (recall z_- dictates the value of t_+) if and only if √(α)/α+1 = √(c)/c+1 (strictly speaking, we should assume α c - 1 ≠ 0, but it is easy to see this is a removable singularity for z_-). This implies that α = c, c^-1, but α = c^-1 also sends the denominator to 0. By evaluating lim_α→ c^-1 z_- = 1/2(c-1) using l'Hôpital's rule, we conclude that z_- = 0 if and only if α = c again. Moreover, above we saw the limit shape Ω(t) is flat at t_+ if and only if c = α. Hence, we have proven Conjecture <ref> for this specialization. Next, to directly compare with <cit.>, we need the following translation of notation: p ⟷α/1 + α, p_c ⟷c/1 + c, in particular, we can now view α as the transition rate yielding the probability p. The critical regime of <cit.> is when p → p_c, which in our notation becomes α→ c. In terms of our limit shapes, the critical regime is precisely when λ_1 ∼ k but we are allowing λ_2 < k. Thus, we are able to have some fluctuations in λ_1. If instead we consider the deterministic regime with p > p_c, which in our notation is c > α, then the right boundary of the limit shape is at t_+ < c and then has ρ(t) = 1 for t ∈ [t_+, c]. Therefore, we have a large number of rows of λ equal to k almost surely for n ≫ 1; in particular, we have λ _1 = k almost surely. §.§ Piecewise-constant specialization and multi-interval support of limit shape For this example, we are essentially doing the estimation used in the proof of Theorem <ref> in Section <ref>, but extending it for a macroscopic share of parameters. More precisely, we use x_i=α_1 for i=1,…,⌊ A_1n⌋, x_i=α_2 for i=⌈ A_1n⌉,…,⌊ (A_1+A_2)n⌋ and so on with constants α_1,…, α_u and shares A_1,…, A_u, where ∑_i=1^uA_i=1. Similarly denoting the constants for y_j by β_1,…, β_v and shares by B_1,…,B_v such that ∑_j=1^vB_j=c, we get S(z,t)=-∑_i=1^u A_iln(1-α_iz)-∑_j=1^vB_jln(z+β_j)+(c-t)ln z. The critical points of the action are determined from Equation (<ref>), which in this case takes the form t+1=∑_i=1^uA_i/1-α_iz+∑_j=1^vB_jβ_j/β_j+z. To find the support of the limit density we need to solve Equation (<ref>), which reads z(∑_i=1^uA_iα_i∏_j=1^v(z+β_j)^2∏_k≠ i(1-α_kz)^2-∑_j=1^vB_jβ_j∏_i=1^u(1-α_iz)^2∏_k≠ j(z+β_k)^2)/∏_i=1^u(1-α_iz)^2∏_j=1^v(z+β_j)^2=0, for real roots z in the interval [-1,c]. As we have already seen in Section <ref>, for u=v=1 we get the quadratic equation in the parentheses if αβ c≠ 1 (and without loss of generality by Remark <ref>, we can take β = 1). The case αβ c=1 corresponds to one of the roots being z=0 and the support of the density ending in the corner of the rectangle, as discussed above. For u,v>1 this is a difficult problem in general, so we discuss only fourth order equation here. As such, we take u=2, v=1 or u=1, v=2 to obtain (<ref>) as a fourth order equation (times z), but these two cases are equivalent under a change n ↔ k. Hence, without loss of generality, take u=1, v=2, the fourth order equation is α(z+β_1)^2(z+β_2)^2-B_1β_1(1-α z)^2(z+β_2)^2-(c-B_1)β_2(1-α z)^2(z+β_1)^2 = 0. The analysis in generic case involve very cumbersome expressions (but it can be computed explicitly), so we assume for simplicity that α=1, B=c/2, β_1 = β, β_2 = 1/β. We remark that the latter choice corresponds to the skew Howe duality for symplectic or orthogonal groups, which we discuss in more detail in Section <ref> below. 
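The transition between a one-interval and a two-interval support can already be observed numerically from this quartic. The sketch below is a rough illustration (the choice c=1 is arbitrary, for which the threshold 2c+1+2√(c(c+1)) is approximately 5.83): it expands the polynomial and counts its real roots for one value of β on each side of the threshold.

import numpy as np

def quartic(beta, c):
    # (z+beta)^2 (z+1/beta)^2 - (c/2) beta (1-z)^2 (z+1/beta)^2 - (c/2) beta^{-1} (1-z)^2 (z+beta)^2
    p1 = np.poly1d([1.0, beta]) ** 2 * np.poly1d([1.0, 1.0 / beta]) ** 2
    p2 = np.poly1d([-1.0, 1.0]) ** 2 * np.poly1d([1.0, 1.0 / beta]) ** 2
    p3 = np.poly1d([-1.0, 1.0]) ** 2 * np.poly1d([1.0, beta]) ** 2
    return p1 - (c / 2.0) * beta * p2 - (c / 2.0) / beta * p3

c = 1.0
for beta in [4.0, 7.0]:   # below and above the threshold 2c + 1 + 2 sqrt(c(c+1))
    roots = quartic(beta, c).roots
    n_real = sum(abs(r.imag) < 1e-8 for r in roots)
    print(beta, n_real, sorted(r.real for r in roots if abs(r.imag) < 1e-8))
# beta = 4 gives two real roots (a single support interval), beta = 7 gives four (two intervals)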
The condition for a two interval support then corresponds to the condition of symplectic Young diagram to start away from the corner of the corresponding rectangle. Equation (<ref>) can have four real roots only if discriminant is positive, which reads 1/β^10(β-1)^2 (β+1)^12 c^2 ((β-1)^2-4 β c) (β (2 β+c-4)+2)^2 > 0. This holds for β>2c+1+2√(c(c+1)) and for β<2c+1-2√(c(c+1)). For the roots to be real we also need P = -16 c β ^7-4 (-c^2+12 c+4) β ^6-4 (4 c^2+4 c) β ^5-4 (10 c^2-8c-8) β ^4 -4 (4 c^2+4 c) β^3-4 (-c^2+12 c+4) β ^2-16 c β<0 and D=-β ^2 16 c (4 c β^12+ (32 c+16) β^11+ (-c^3+8 c^2+48 c+32) β ^10. . +(-8 c^3+64 c^2-16) β^9+(-12 c^3+128 c^2-20 c-64) β ^8. . +(8 c^3+64 c^2+32 c) β^7+(26 c^3-16 c^2+64 c+64) β^6. . +(8 c^3+64 c^2+32 c) β^5+(-12 c^3+128 c^2-20 c-64) β^4. . +(-8 c^3+64 c^2-16) β^3+(-c^3+8 c^2+48 c+32) β^2+(32 c+16) β +4 c)<0. For positive c it is possible to check that these polynomials do not have roots greater than 2c+1+2√(c(c+1)), so for all β>2c+1+2√(c(c+1)) we have four real roots in (<ref>) and two-interval support of the density ρ(t). The graph of the corresponding function T(z) from Equation (<ref>) is presented in Fig. <ref>. We present the examples of random Young diagrams with two-interval supports of the density in Fig. <ref>. For mutually inverse values of β=2c+1± 2√(c(c+1)), the two intervals join in one point. This leads to the behavior described by Pearcey kernel as per Theorem <ref>. In more detail, for β=2c+1+ 2√(c(c+1)) , Equation (<ref>) becomes -2 (c+1) (8 c (c+√(c (c+1))+1)+4√(c (c+1))+1)(z+1)^2((2 c-1) z^2-(8 c+2) z+2 c-1) = 0, and we see that z=-1 is its multiple root. Substituting β and z=-1 to (<ref>) and solving for x we get x=c-1/2 as the point where two intervals of support are touching. The first three derivatives of the action with respect to z at x=c-1/2 and z=-1 are zero. For the fourth derivative we have ∂_z^4S(z,(c-1)/2)|_z=-1=3(c+1)/8c, and similarly to the Airy kernel case we set σ=-(6/∂_z^4S(z,(c-1)/2)|_z=-1)^1/4=-2√(c/c+1) and change the variables z=-e^σζ n^-1/4≈ -(1+σζ n^-1/4+⋯), w=-e^σν n^-1/4≈ -(1+σν n^-1/4+⋯). Hence, according to Theorem <ref> in the asymptotic regime m≈c-1/2n+ξ n^1/4σ^-1, m'≈c-1/2n+η n^1/4σ^-1 as n,k→∞ the correlation kernel converges to the Pearcey kernel (<ref>). This regime is illustrated in Fig. <ref>, where we present a random lozenge tiling of a skew hexagon. §.§ Monomial functions We consider f(s) = α s^ℓ and g(s) = s^m for some fixed nonnegative real numbers ℓ, m ∈_≥ 0. Then the integrals in Equation (<ref>) and Equation (<ref>) are instances of Chebyshev's differential binomial integral (see, e.g., <cit.>): ∫ s^ (κ + ν s^)^ = κ^++1/ν^-+1// B(-ν s^/κ; 1+/, +1) = κ^ s^+1/1+_2F_1( +1/, -; 1 + + /; - ν s^/κ), where B(y; , ) = ∫_0^y s^-1 (1-s)^-1 is the (lower) incomplete Beta function, and _2F_1(_1, _2; _1; x) is the basic hypergeometric function. Thus, Equations (<ref>) and (<ref>) for ℓ > -1 and m > -1 become, respectively, α z/ℓ+1_2F_1( 1, 1 + 1/ℓ; 2 + 1/ℓ; α z ) + c/(m+1)z_2F_1( 1, 1 + 1/m; 2 + 1/m; -1/z) - t = 0, α z/ℓ+1_2F_1( 2, 1 + 1/ℓ; 2 + 1/ℓ; α z ) - c/(m+1) z_2F_1( 2, 1 + 1/m; 2 + 1/m; -1/z) = 0. We note that there are particular formulas for ℓ, m ∈ that simplify (<ref>), where the integrals can be computed in terms of more elementary functions. For example, if we take ℓ = m = 1, then -1/α zln(1 - α z) - 1 + cz ln(z/1+z) + c - t = 0, (α z - 1) ln(1 - α z) - α z/α z (α z - 1) + c ( zln(z/z+1) + z/z + 1)= 0. 
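These hypergeometric expressions can be evaluated directly, so the critical points z_± and the support can be located numerically. A minimal sketch (assuming SciPy is available; the scanning window and the sample parameters ℓ = m = 1, α = 1, c = 2 are choices of ours, to be adjusted by inspecting a plot of the critical-point equation):

import numpy as np
from scipy.special import hyp2f1
from scipy.optimize import brentq

alpha, l, m, c = 1.0, 1.0, 1.0, 2.0

def T(z):
    # left-hand side of the first equation; equals t at a critical point
    return (alpha*z/(l + 1)*hyp2f1(1, 1 + 1/l, 2 + 1/l, alpha*z)
            + c/((m + 1)*z)*hyp2f1(1, 1 + 1/m, 2 + 1/m, -1/z))

def dT(z):
    # the second equation, whose real zeros give z_+ and z_-
    return (alpha*z/(l + 1)*hyp2f1(2, 1 + 1/l, 2 + 1/l, alpha*z)
            - c/((m + 1)*z)*hyp2f1(2, 1 + 1/m, 2 + 1/m, -1/z))

grid = np.linspace(1e-3, 1/alpha - 1e-3, 400)      # window chosen by inspection
vals = np.array([dT(z) for z in grid])
roots = [brentq(dT, a, b) for a, b, va, vb
         in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if va*vb < 0]
print(roots, [T(z) for z in roots])                # candidate endpoints of [t_-, t_+]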
However, except for ℓ = m = 0, we are unable to solve (<ref>) (for z) explicitly, but it is possible to compute the limit shape numerically. In more detail, we compute the two roots z_± numerically, where an example for ℓ = m = 1 and α =1 is illustrated in Fig. <ref>. Then the support of the density [t_-,t_+] is computed numerically by substituting the roots into Equation (<ref>), and finally the limit shape can then be obtained numerically from Equation (<ref>). Examples of random diagrams sampled using dual RSK and the limit shapes computed from the solution numerically are presented in Fig. <ref>. §.§ Alternating weights and symplectic Young diagrams As another example consider the groups _2n×_2k and interlacing specializations x_2i-1=f(i/n), x_2i=f(i/n)^-1 and y_2j-1=g(j/k), y_2j=g(j/k)^-1. As the order of specialization parameters is not important, we can take x_i = f(i/n) for i≤ n and x_i=f(i/n)^-1 for i>n and similarly for y_j, but the interlacing order is more natural in comparison to the symplectic groups. This particular case is a bit more general than Theorem <ref>. It is interesting in relation to the skew Howe duality between symplectic groups _2n and _2k. The exterior algebra ⋀(^n⊗^2k) admits a multiplicity-free action of the direct product _2n×_2k. Therefore we have a decomposition into the irreducible representations ⋀(^n⊗^2k)=⊕_λ⊆ k^n V__2n(λ)⊗ V__2k(λ'), where λ' is a conjugate to a diagram that complements λ inside of the n× k rectangle. Writing this decomposition in terms of the characters, we can introduce the probability measure μ_n,k^(λ|x,y)=sp_λ(x_1,…,x_n) sp_λ'(y_1,…,y_k)/∏_i=1^n∏_j=1^k(x_i+x_i^-1+y_j+y_j^-1), where we have denoted by sp_λ character of irreducible representation of symplectic group _2n. On the other hand, the exterior algebra ⋀(^n⊗^2k) can be seen as the spinor representation ⋀(^n) of _2n, raised to the tensor power 2k, (⋀^n)^⊗ 2k. This tensor power can be implemented by the Berele insertion algorithm <cit.>; see also <cit.> for the proof of the (_2n, _2k) duality with the insertion algorithm. This algorithm can be used to sample random diagrams with respect to the probability measure (<ref>). In the paper <cit.>, we have demonstrated that for the trivial specialization {x_i=1, y_j=1}, that is when the measure is given by the formula μ_n,k^(λ)= V__2n(λ) V__2k(λ')/2^2nk, the limit shape of the symplectic Young diagrams when n,k→∞ such that k/n = c + 𝒪(1/n) is half of the limit shape of Young diagrams for _2n×_2k. Using the Berele sampling algorithm we conjecture that the limit shape of symplectic Young diagrams with respect to the measure (<ref>) for x_i=f(i/n), y_j=g(j/k) is half of the limit shape of Young diagrams for _2n×_2k, derived below. It would be interesting to prove our conjecture by using a free fermionic (or vertex operator) construction for the symplectic characters; see, e.g., <cit.>, but it is beyond of scope of the present paper. See also Question (<ref>) in Section <ref>. See Fig. <ref> for a random symplectic diagram and the limit shape for f(s)=e^-γ s, g(s)=e^γ c s. To derive the limit shape we first separate variables with odd and even indices: K(z)=∏_i=1^2n1/1-x_iz∏_j=1^2k1/1+y_j/z =∏_i=1^n1/1-f(i/n)z1/1-z/f(i/n)∏_j=1^k1/1+g(j/k)/z1/1+1/g(j/k)z. Then for the action we have S(z) = 1/2nln K(z)-tln z ≈1/2∫_0^1[ - ln(1-f(s)z) - ln(1-z/f(s)) - c ln(1+g(s)/z) - c ln(1+1/g(s)z) ] - t ln z. 
Taking the derivative we get the equations (z∂_z)S(z) = 1/2∫_0^1[ f(s)z/1-f(s)z+ z/f(s)-z- cz/z+g(s)- cg(s)z/1+g(s)z]-(x-c)=0, (z∂_z )^2S(z) = ∫_0^1[f(s)z/(1-f(s)z)^2+f(s)z/(f(s)-z)^2-cg(s)z/(1+g(s)z)^2-cg(s)z/(z+g(s))^2]=0. The piecewise-constant specialization with x_i=1, i=1,…, n and y_2j=β, y_2j-1=β^-1, j=1,…,k for _2n×_2k gives f(s)=1, g(s)=β and was considered in Section <ref>. It corresponds to the constant specialization x_i=1, y_j=β for symplectic groups _2n×_2k. Here single-interval support of the limit density for general linear group means that the diagram for the symplectic group starts at the left corner of the n× k rectangle. The two-interval regime leads to the limit shape touching to the boundary of the rectangle before getting to the corner, which is at the center of diagram in Fig. <ref>. The fluctuations in _2n×_2k case in this regime are described by the Pearcey kernel as demonstrated in Theorem <ref> and discussed in Section <ref>. The behavior in _2n×_2k is unknown, but we expect it to be described by the Pearcey kernel, symmetric Pearcey kernel <cit.>, or the related Pearcey-like kernel discussed in <cit.>. Another specialization that admits the explicit solution is exponential. In the case using the specialization f(s)=e^-γ s and g(s)=e^cγ s, solving the first equation we get z_1,2=-(e^γ-e^cγ)(e^cγ+e^γ(1+2t))±√((e^γ-e^cγ)^2(e^cγ+e^γ(1+2t))^2-4e^(c+1)γ(e^2cγ-e^2tγ)(e^2γ(t+1)-1))/2e^cγ(e^2γ(t+1)-1). Solving the second equation we again get the equation for the critical values of z: (e^γ(c+1)+1)z((e^γ-e^cγ)z^2+2(e^γ(c+1)-1)z-e^cγ+e^γ)/γ(e^γ-z)(e^cγ+z)(e^γz-1)(e^cγz+1)=0. The solutions are z_±=e^2γ(c+1)-1±√((e^2γ-1)(e^2γ c-1)(e^γ(c+1)+1)^2)/-e^γ+e^γ c-e^γ(c+2)+e^γ(2c+1). Substituting into (<ref>) we get the support of the density: t_±=1/2γln(2e^2γ(c+1)+2e^3γ(c+1)-e^γ(c+3)-e^γ (3c+1)∓√(e^2γ(c+1)(e^2γ-1)(e^2cγ-1)(e^γ(c+1)+1)^2)/(e^γ(c-1)+1)^2). Using (<ref>) we get the limit density for t∈[t_-,t_+]: ρ(t)=1/πarccos(e^2cγ-e^γ(c+1)-e^2γ(t+1)+e^γ(2t+c+1)/2√((e^γ(2c+1)-e^γ(2t+1))(e^γ(2t+c+2)-e^cγ))) An example of this limit shape is presented in Fig. <ref>. This case can be seen as a particular case of the exponential specialization x_i=α e^-γi-1/n, y_j=β e^-δj-1/k discussed in the next section. To see that we need to rearrange the interlacing parameters {x_2i-1,x_2i}_i=1^n, {y_2j-1,y_2j}_j=1^k in the increasing order to get the sequences {e^-γn-1/n,…, e^-γ1/n,1,1,e^γ1/n,…, e^γn-1/n}, {e^-cγk-1/k,…, e^-cγ1/k,1,1,e^cγ1/k,…, e^cγk-1/k}. Extracting the constants α=e^γ, β=e^cγ and ignoring the single duplicated value we get to the exponential specialization. § PRINCIPAL SPECIALIZATIONS AND Q-KRAWTCHOUK ENSEMBLE Another natural choice of specialization parameters is to take x_i = α q^i-1, y_j = q^j-1 or x_i = α q^i-1, y_j=q^1-j for some positive q. Both of these cases can be interpreted from the point of view of q-Krawtchouk polynomial ensemble <cit.> as follows. 
If we substitute these specializations into the probability measure (<ref>), as was demonstrated in <cit.>, the measures then take the form μ_n,k(λ|α, q, q) = α^λ q^λ_q(V__n(λ))· q^λ̅'_q(V__k(λ̅'))/∏_i=1^n∏_j=1^k(q^i-1+q^j-1), μ_n,k(λ|α, q,q^-1) = α^λ q^λ_q(V__n(λ))· q^-λ̅'_1/q(V__k(λ̅'))/∏_i=1^n∏_j=1^k(q^i-1+q^1-j), for each specialization respectively, where λ = ∑_i=1^n(i-1)λ_i and q-dimension of the irreducible _n representation is defined as the principal gradation (see <cit.>) that is the weighted sum of the dimensions of weight subspaces: _q(V__n(λ))=∑_(u_1,…,u_n-1)∈ℤ^n-1_≥ 0q^∑_i=1^n-1u_i V(λ)_λ-∑_i=1^n-1u_iα_i, where α_1,…,α_n-1 are the simple roots of _n and we identify the diagram λ with the dominant _n weight λ in the usual way. We use the standard notation for (combinatorial) q-analogs: [n]_q = q^n-1/q-1 = 1 + ⋯ + q^n-1, [n]_q! = [1]_q … [n]_q, nkq = [n]_q!/[k]_q! [n-k]_q!. Using q-analogues of the Weyl dimension formula, the Lindström–Gessel–Viennot lemma, and Dodgson condensation, the measure (<ref>) is then rewritten explicitly as μ_n,k({a_i}| α, q, q) = C^+_n,k,q∏_i<j(q^-a_i-n+1/2-q^-a_j-n+1/2)^2∏_i=1^n W^+_n,k(a_i+n-1/2| α), where W^+_n,k(a|α) = α^a q^a2+a(n-k)n+k-1aq and C^+_n,k,q = α^-n(n-1)/2q^k n/2(n+k-2)∏_i=1^n∏_j=1^k(q^i-1+q^j-1)∏_i=1^n[k+i-1]_q![i-1]_q! [n+k-1]_q!1/(1-q)^n(n-1)/2. The measure (<ref>) similarly takes the form μ_n,k(λ|α,q,q^-1)=C^-_n,k,q∏_i<j(q^-a_i-n+1/2-q^-a_j-n+1/2)^2∏_i=1^nW^-_n,k(a_i+n-1/2 | α), where W^-_n,k(a | α) = α^a q^a2+a(n-1)n+k-1aq and C^-_n,k,q= α^-n(n-1)/2q^n/2(n-1)(n+2k-2)∏_i=1^n∏_j=1^k(q^i-1+q^1-j)∏_i=1^n[k+i-1]_q![i-1]_q! [n+k-1]_q!1/(1-q)^n(n-1)/2. The q-Krawtchouk polynomials K_l^q(q^-a;p,N;q) are defined on the multiplicative lattice {q^-a}_a=0^N and are orthogonal with respect to the weight Naq p^-a q^a2-aN <cit.>. Therefore the weights W^+_n,k(a|α) and W^-_n,k(a|α) are weights of q-Krawtchouk polynomials K_l^q(q^-a; q^1-2n/α, n+k-1; q) and K_l^q(q^-a;q^2-2n-k/α, n+k-1; q). As described in <cit.>, q-difference equations for q-Krawtchouk polynomials give a way to derive the limit shapes in the regime when n,k→∞, while q→ 1 in such a way that c=limk/n and q = 1-γ/n with c,γ being constants. Moreover, recurrence relations for the orthogonal polynomials can be used to study global fluctuations around the limit shape and to prove the convergence to the limit shape. Below we present the derivation of the same limit shapes and study the local fluctuations in this regime from the point of view of our general framework. Generalizing the problem a bit, we take x_i = α q^i-1, y_j = t^j-1 and take the limits n,k→∞ and q,t→ 1 such that q = 1-γ/n and t = 1-δ/n. We will assume γ, δ≠ 0 and α > 0. As such, we instead set x_i=α e^-γi-1/n, y_j = e^-δj-1/n, so that we need to solve Equation (<ref>) for f(s) = α e^-γ s, g(s) = e^-δ c s: t = 1/γln(1 - α e^-γ z/1 - α z) + 1/δln(z + 1/z + e^-δ c). We can exponentiate (<ref>) to get e^γ t = 1 - α e^-γz/1-α z·( z+1/z+ e^-δ c)^γ/δ, but we should check that its roots satisfy (<ref>). For this specialization, Equation (<ref>) becomes 0 = (A z^2 + B z + C ) z/(z + e^-c δ) (1 + z) (e^γ - α z) (α z - 1) with γδ A = αδ(1 - e^γ)+α^2γ (1-e^-cδ), γδ B = αδ(1-e^γ+e^-cδ-e^cδ+γ)+αγ (e^-cδ-1-e^γ+e^-cδ+γ), γδ C = γ (1 - e^-c δ) e^γ + αδ (1-e^γ ) e^-c δ). Hence, the roots of (<ref>) are given by z_0 = 0, z_± = -B ±√(B^2 - 4AC)/2A. Hence, we can compute our endpoints t_± by substituting z_± into (<ref>). Next, we examine (<ref>) in more detail. 
We see that if γ / δ∈, then Equation (<ref>) becomes a polynomial equation in z. Now if we take γ = δ, then for real t we have t = 1/γln( (1 + z)(1 -α e^-γ z)/(1 -α z)(z + e^-γ c)) ⟺ e^γ t = (1 - α e^-γz)(z+1)/(1-α z)(z+ e^-γ c), which is a quadratic equation for z in terms of e^γ t: α (e^γ t - e^-γ) z^2 + (α (e^γ (t-c) - e^-γ) - e^γ t + 1) z + (1 - e^γ (t - c)) = 0. The solutions are z_1,2 =-α (e^γ (t-c) - e^-γ) + e^γ t - 1/2 α(e^γ t - e^-γ) ±√((α (e^γ (t-c) - e^-γ) - e^γ t + 1)^2 - 4 ·α (e^γ t - e^-γ) · (1 - e^γ (t - c)))/2 α(e^γ t - e^-γ). To determine the domain [t_-,t_+] we need to solve (<ref>), which in this case reads 1/γ( 1/1-α z - 1/1-α e^-γz + z/z+1 - z/z+ e^-γ c) = 0. Transforming to the common denominator we obtain -z/γ·α (α-e^γ(c+1) + (α + 1) e^γ c) z^2 + 2 α (1-e^γ (c+1)) z + α (1-e^γ)+e^γ (c+1) - e^γ/(1-α z)(1+z)(α z-e^γ)(1+e^γ cz) = 0, and the roots are z_± = α (1 - e^γ (c+1)) ∓√(α(1 + α))√((e^γ-1) (e^γ c-1) (e^γ (c+1)+α))/α(e^γ(c+1) -α- (α + 1) e^γ c), so t_±=1/γln(e^γ(c-1)(2α-α e^γ+e^γ(c+1)+2α e^γ(c+1)± 2√(α((1+α)e^γ-1)((e^γ c-1) e^γ(c+1)-α)))/(α+e^γ c)^2). Finally, the limit density is given by the formula (<ref>), that reads ρ(t)= 1/πarccos( α e^cγ - e^(c+1)γ - α e^γ(t+1) + e^γ(c+t+1)/2 √(α (e^γ (c+1) - e^γ (t+1)) (e^γ (c+t+1) - e^γ c))). If we take α=1 and shit the parameter t→ t+1 this formula recovers the limit shape presented in <cit.>. Taking x_i=α e^-γi-1/n, y_j = e^γj-1/n, so that we need to solve Equation (<ref>) for f(s) = α e^-γ s, g(s) = e^γ c s, we similarly obtain the limit shape. Solving Equation (<ref>) we obtain the roots z_1,2=α e^γ c-(α-1)e^γ(t+1)-e^γ/2α(e^γ(t+1)-1)±√(((α-1)e^γ(t+1)-α e^γ c+e^γ)^2 -4α e^γ(e^γ(t+1)-1)(e^γ c-e^γ t))/2α(e^γ(t+1)-1). We obtain the support [t_-,t_+] by solving (<ref>): t_± = - 1 - 2/γln(1+α) + 1/γln(e^γ +2α -α e^γ +α (2 e^γ+α -1) e^γ c± 2√(α(e^γ-1)(α+e^γ)(e^γ c-1)(1+α e^γ c))). By using (<ref>) we obtain the density ρ(t)=1/πarccos(α e^γ c-(α-1)e^γ(t+1)-e^γ/2√(((α-1)e^γ(t+1)-α e^γ c+e^γ)^2 -4α e^γ(e^γ(t+1)-1)(e^γ c-e^γ t))). Taking the limit α→ 1, the density (<ref>) simplifies to ρ(t)=1/πarccos(sign(-γ)×e^γ-γ t/2/21-e^γ(c-1)√((1-e^γ t)(1-e^γ(c+1-t)))), which coincides with the result in <cit.> after the shift of t. Limit shapes for various values of γ are presented in Fig. <ref>. Next, we consider the edge asymptotics governed by the Airy kernel. To make formulas less cumbersome we take α=1, but we can still explicitly do the computations below for generic α. First for f(s)=e^-γ s, g(s)=e^-γ c s we have critical value = 1-e^(c+1)γ+√(2)√((e^γ-1) (e^c γ-1) (e^(c+1)γ+1))/e^(c+1)γ-2 e^c γ+1. Denoting by Δ=√(2)√((e^γ-1) (e^c γ-1) (e^(c+1)γ+1)), for the .(z∂_z)^3S(z)|_z= we have .(z∂_z)^3S(z)|_z==-(-2 e^c γ+e^c γ+γ+1)^2 ×[(Δ(2 e^c γ-3 e^2 (c+1) γ+2 e^(c+2) γ-2 e^c γ+γ+2 e^2 c γ+γ+2 e^γ-3))/γ/2Δ^2(-2 Δ+2 e^c γ+e^(c+2) γ-e^c γ+γ+e^γ-3) (e^c γ(1-2 Δ)-2 e^2 c γ-e^c γ+γ+3 e^2 c γ+γ-1). +.+4 e^c γ+4 e^2 (c+1) γ+4 e^3 (c+1) γ-4 e^(2 c+3) γ-4 e^(3 c+2) γ-4 e^c γ+γ+4 e^γ-4/γ/2Δ^2(-2 Δ+2 e^c γ+e^(c+2) γ-e^c γ+γ+e^γ-3) (e^c γ(1-2 Δ)-2 e^2 c γ-e^c γ+γ+3 e^2 c γ+γ-1)]. By Equation (<ref>), the normalization constant σ for the Airy kernel is given by (<ref>) raised to the power -1/3 and multiplied by 2^1/3. We present a plot of σ as a function of c for various values of γ in Fig. <ref>. Note that for all values of γ, we have a divergence at various values of c ≤ 1. 
These degenerate cases are described by the discrete Hermite kernel, as stated in Theorem <ref>, discussed in Section <ref> and illustrated in the example of constant specialization in Section <ref> for c →α. In Fig. <ref>(left), we present a histogram of first row lengths and on the right is one randomly sampled diagram. We see that this discrete distribution appears when n t_+ = k, which means that the boundary of the support of ρ(t) is exactly in the corner of the n × k box and first row can not fluctuate freely. Similarly, for f(s)=e^-γ s, g(s)=e^γ c s we use (<ref>) to obtain = e^γ-e^γ c/1-e^γ(c+1)-√((e^2γ-1)(e^2γ c-1)). Denoting the square root by Δ=√((e^2γ-1)(e^2γ c-1)) and computing (z∂_z)^3S(z) using (<ref>), we obtain .(z∂_z)^3S(z)|_z= =1/γ(e^γ-e^γ c)^2 ((e^γ(c+1)-1)(e^γ(c+1)-1+Δ)/(-1+e^2γ+e^2γ c-e^2γ(c+1)+Δ(1-e^γ(c+1)))^2.- .-1/(e^γ c- e^γ(c+2)-e^γΔ)^2 -1/(e^γ- e^γ (c+2)-e^γ cΔ)^2). By formula (<ref>), the normalization constant σ is given by (<ref>) raised to the power -1/3 and multiplied by 2^1/3. Here c=1 is the case when we have the discrete Hermite kernel for all values of γ, as demonstrated in Fig. <ref>(left), whereas one randomly sampled diagram is given in Fig. <ref>(right). §.§ Uniform convergence In this subsection, we will give another proof of Corollary <ref> for the q-Krawtchouk ensemble by using different method, emphasizing the exponential decay of the measure when the diagrams are far from the limit shape. Starting with formulas (<ref>), (<ref>) for the probability measure, introducing the variables s_i=q^-a_i, and recalling that q=e^-γ/n, we rewrite the measure in the exponential form as μ_n,k(λ|q,q^± 1)=exp[∑_i≠ jln|s_i-s_j| +∑_i=1^nln W^±_n,k(s_i|α)]. We consider the limit n,k →∞, q → 1 such that q=1-γ/n, k/n = c + 𝒪(1/n) and substitute the leading approximation for q-factorials ln [a]_q!=∑_i=1^aln1-q^i/1-q≈ n∫_0^a/nln(1-e^-γ y) +aln (n/γ)=n/γ[_2(e^-γ a/n)-_2(1)]+aln (n/γ), into ln W^±_n,k(a|α), written as ln W^±_n,k(a|α)=ln [n+k-1]_q!-ln[a]_q!-ln [n+k-1-a]_q!-γ a(a-1)/2n+alnα -aγ+aγ/n(k+1/2±k-1/2), to obtain, after the substitution a=n/γln s: ln W^±_n,k(s|α)≈ ≈n/γ[π^2/6+_2(e^-γ(c+1))-_2(s^-1)-_2(e^-γ(c+1)s)-(ln s)^2/2+(γ(c/2-1±c/2)+lnα)ln s]. The derivative of ln W^±_n,k(s|α) can be used to obtain the limit shape by solving the variational problem, as discussed below: d/ln W^±_n,k(s|α) ≈n/γ s[-ln(1-1/s)+ln(1-e^-γ(c+1)s)-ln s+γ(c/2-1±c/2)] = n/γ s[ln(e^γ(c+1)-s)-ln(s-1)+γ(-c/2-2±c/2)]. If we consider the upper boundary of the rotated and scaled diagram as the function f_n(s), we have f_n'(a_i/n)=± 1 and 1-f_n'(a_i/n)/2=1 if there is a particle at the midpoint of the interval a_i as shown in Fig. <ref> or 0 otherwise. Therefore we can consider the sum ∑_i≠ jln|s_i-s_j| in (<ref>) as an approximation to an integral ∑_i≠ jln|s_i-s_j|≈n^2/γ^2∫_1^e^γ(c+1)/t∫_1^e^γ(c+1)/s1/4(1-f_n'(n/γln s))(1-f_n'(n/γln t))lns-t. If we substitute Taylor expansion of lns-t at the points s=e^γ i/n, t=e^γ j/n for i,j=0,…, n+k-1, and use ∫_exp(γ i/n)^exp(γ (i+1)/n)/s=γ/n, we recover the double sum of the logarithms. Expanding the brackets in the integral (<ref>) and considering f_n as a function of s, we obtain the quadratic functional Q[f_n]=B[f_n,f_n] where B[f_1,f_2] = ∫_1^e^γ(c+1)∫_1^e^γ(c+1) 1/4 f'_1(s) f'_2(t) lns-t^-1 , a linear term L[f_n]=-1/2∫_1^e^γ(c+1)∫_1^e^γ(c+1)/t f'_n(s) lns-t, and a constant term C_n=n^2/γ^2∫_1^e^γ(c+1)∫_1^e^γ(c+1)lns-t/t/s: n^2/γ^2∫_1^e^γ(c+1)/t∫_1^e^γ(c+1)/s1/4(1-f_n'(n/γln s))(1-f_n'(n/γln t))lns-t=Q[f_n]+L[f_n]+C_n. 
Similarly to the sum ∑_i≠ jln|s_i-s_j|, the sum ∑_i=1^nln W^±_n,k(s_i|α) can be approximated by an integral ∑_i=1^nln W^±_n,k(s_i|α)≈n^2/γ^2∫_1^e^γ(c+1)/2(1/s - f'_n(s) ) ×[π^2/6+_2(e^-γ(c+1))-_2(s^-1)-_2(e^-γ(c+1)s)-(ln s)^2/2+(γ(c/2-1±c/2)+lnα)ln s] . Integrating linear term (<ref>) over t, we obtain L[f_n]=-1/2∫_1^e^γ(c+1) f_n'(s)[_2(s^-1)+_2(e^-γ(c+1)s)+(ln s)^2/2-π^2/3+γ^2(c+1)^2/2]. If we combine this expression with the terms linear in f_n'(s) in (<ref>) and use ∫_1^e^γ(c+1) f_n'(s) = f_n(c+1)-f_n(0)=c-1 to get rid of the s-independent contributions, we get the final form of the linear term in the exponential: L^±[f_n]=-1/2∫_1^e^γ(c+1) f_n'(s)(γ(c/2-1±c/2)+lnα)ln s Combining all s-independent contributions with C_n we get the final normalization constant C_n, so we can write the probability of the diagram λ as an exponent of a functional J[f_n] = Q[f_n]+L^±[f_n]+C_n of the upper boundary f_n: μ_n,k(λ|q,q^± 1) = exp(-n^2/γ^2J[f_n]+𝒪(nln n)), where the correction term 𝒪(nln n) comes from the estimates of next orders in Stirling approximation and difference between sums and integrals. (For more detailed computations of this sort, see <cit.>.) Analysis of the functional J[f_n] is similar to <cit.>. The first step is to show that the limit density ρ(t) related to the limit shape Ω by the formula (<ref>), can be recovered as a solution to the limiting minimization problem ∬ lns-t^-1ρ(s) ρ(t) + ∫ρ(s) ln W^±(s|α), where ln W^±(s|α)=lim_n→∞1/nln W^±_n,k(s|α). By <cit.> (see also <cit.>) there exists a unique solution of this problem. In <cit.>, we have proven the weak convergence of the measures μ_n,k(λ|q,q^± 1) to the equilibrium measure ρ(t). The minimizing property of the limit density ρ(t) follows from this convergence. We were not able to find this kind of statement (for weights depending on n) in the existing sources, therefore we present the sketch of the proof below. The key point of the proof is the following “large deviation” estimate (see <cit.>, <cit.> for the case ln W^±_n,k(s|α) ≡ nln W^±(s|α)), with the proof essentially the same for the varying weights. Denote by E the minimal value of (<ref>), and for any n∈, k=⌊ cn⌋, η > 0 set A_n,η = {λ | ∑_i≠ jln|s_i-s_j| +∑_i=1^nln W^±_n,⌊ cn⌋(s_i|α) ≤ n^2 (E + η) .}. The following estimate holds for the complement of A_n,η. For any number a>0 there exists N∈, which depends on η but not on a, such that μ_n,⌊ cn⌋(A_n,η+a^c|q,q^± 1) ≤ e^-an^2 for all n ≥ N. Denote by μ^(1)_n the first correlation measure for μ_n,⌊ cn⌋, restricted to the sets A_n, η=1/n and normalized, i.e., for any compactly supported bounded function h we have ∫ h(s) dμ^(1)_n(s) := (n μ_n,⌊ cn⌋(A_n,η=1/n|q,q^± 1))^-1·∫_A_n,η=1/n∑_ih(s_i) dμ_n,⌊ cn⌋(λ|q,q^± 1). From the weak convergence of μ_n,[cn](λ|q,q^± 1), combined with the estimate from Proposition <ref>, it follows that the correlation measures μ^(1)_n converge weakly to ρ(t)dx. In addition, from the definition of A_n,η it follows that μ^(1)_n should converge to the minimizer, hence the minimizer is given by the density ρ(t) due to uniqueness. Alternatively, one can use Plemelj formula to solve a scalar Riemann–Hilbert problem to obtain the solution for the variational problem (<ref>). As ρ(t) is the solution to the variational problem, by <cit.> there exists a constant ℓ such that ∫lns-t^-1ρ(t) +ln W^±(s|α)=ℓ, if s∈(ρ), ρ(s)<1; ∫lns-t^-1ρ(t) +ln W^±(s|α)≥ℓ, if s∉(ρ); ∫lns-t^-1ρ(t) +ln W^±(s|α)≤ℓ, if ρ(s)=1. We add the third case as there is an additional restriction ρ(s)≤ 1. 
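For numerical work with these functionals, the dilogarithm approximation of the q-factorials introduced earlier in this subsection is the main ingredient, and its accuracy is easy to check directly. A short sketch (assuming SciPy, whose spence function gives Li_2(x) = spence(1 - x); the sample values of n, γ, a are ours):

import numpy as np
from scipy.special import spence

def Li2(x):
    return spence(1.0 - x)            # dilogarithm

def log_qfact_exact(a, q):
    # ln [a]_q! = sum_{i=1}^a ln((1 - q^i)/(1 - q))
    i = np.arange(1, a + 1)
    return np.sum(np.log((1.0 - q**i) / (1.0 - q)))

def log_qfact_dilog(a, n, gamma):
    # the leading approximation used above
    return (n/gamma)*(Li2(np.exp(-gamma*a/n)) - Li2(1.0)) + a*np.log(n/gamma)

n, gamma = 400, 1.3
a = int(0.7*n)
print(log_qfact_exact(a, np.exp(-gamma/n)), log_qfact_dilog(a, n, gamma))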
To prove the uniform convergence we need to demonstrate that the probability of a diagram λ with the upper boundary f_n can be estimated by the difference of f_n and limit shape Ω. We note that J[Ω] is non-negative, since we can approximate Ω by an upper boundary of a diagram with any precision for n large enough, but J[f_n] is non-negative as it is equal to minus the logarithm of a probability. Then we note that the functional Q[f_n] is positive-definite on Lipschitz functions with compact support (see <cit.>). Therefore we can use it to introduce the norm on the compactly supported Lipshitz functions ·_Q = Q[·]^1/2. By <cit.>, this norm can be used to bound the supremum norm for a compactly supported Lipshitz function h from above: h_∞=sup_sh≤ C Q[h]^1/4. Now write f_n as a sum of the limit shape Ω and a compactly supported Lipshitz function δ f_n: f_n(s)=Ω(s)+δ f_n(s). Then the probability of the corresponding diagram λ is given by μ(λ|q,q^± 1)=1/Z_nexp(-n^2/γ^2( J[Ω]+Q[δ f_n]+2B[Ω,δ f_n]+L^±[δ f_n] ) ) We estimate the linear term 2B[Ω,δ f_n]+L^±[δ f_n] in the same way as in the proof of <cit.> and use (<ref>) to demonstrate that it is non-negative. It is essentially another way to state that Ω is a minimizer. Therefore for a Young diagram λ with the upper boundary f_n such that Q[δ f_n]=ε^2 we have μ(λ|q,q^± 1) ≤ C_1exp(-n^2/γ^2ε^2+𝒪(nln n)). The number of Young diagrams inside the n× k box is equal to n+kn≤exp(c_2 n). Therefore, the probability of large fluctuations from the limit shape goes to zero: ℙ(f_n-Ω_Q > ε) < C_1exp(-n^2/γ^2ε^2+𝒪(nln n))exp(c_2 n) 0. Finally, we use (<ref>) to conclude that the convergence to the limit shape is uniform. §.§ Constant q If q=const, we still have q-Krawtchouk ensemble, but the limit shape degenerates. It can be derived by taking the limits in Equations (<ref>) and (<ref>) as described below. Assume that x_i=α q^i-1, y_j=q^b(j-1), then we have γ=-nln q and δ=-bnln q. Denote the corresponding probability measure on Young diagrams by μ_n,k^b(λ|q). Substituting to (<ref>), we get for real t t=-1/nln q(ln(1-α q^nz)-ln(1-α z))-1/nbln q(ln(z+1)-ln(z+q^nbc)). It is more convenient to exponentiate this equation: q^-n(t+1)=q^-n-α z/1- α z(z+ q^nbc/z+1)^-1/b Assuming q>1, c>1, b<0 we look for a pair of complex conjugate asymptotic solutions in the form z_±=q^une^±φ with u<0. Such pair is not real only if the leading term on the right-hand side is of the form -z^1-1/b=q^un(1-1/b)· e^π±φ(1-1/b), therefore max(-1, bc) ≤ u ≤ 0. On the left hand side we have q^-n(t+1), and comparing these terms we get u=-b(t+1)/b-1, φ=bπ/b-1. Thus we get constant density ρ(t)=φ/π=b/b-1 for -1≤ t≤min(-1/b, c-1-bc). For q>1, b>0 the same argument shows that z should grow at least as fast as q^nbc, therefore u≥ bc>0. However in this case the absolute value of the right-hand side tends to 1 and the limit shape degenerates. Another way to derive the support of the limit shape is to do the same substitution in Equation (<ref>), find the roots z_±, substitute to (<ref>), and only then take the limit n→∞, assuming q>1. Thus Equation (<ref>) takes the form z/nbln q(α b/α z-1-α b/α z-q^-n-1/z+1+1/z+q^nbc)=0. Taking the terms to the common denominator we obtain a cubic equation in the numerator with one of the roots z_0=0. As before, denote the other two roots by z_±. For the degenerate case b>0 the actual values for these roots are not important, as we have the degeneration t_± = 0 for q<1 and t_± = c-1 for q>1 for any finite nonzero values z_±. 
Thus the diagram is empty for q<1, b>0 and is completely filling the rectangle for q>1, b>0. For other cases we should take into account the asymptotic behaviour of the roots. For example, for the case b<0, q>1, bc<-1 we have z_- = α1-b/α+b, z_+ = (q^-n), and we recover the endpoints t_-=-1, t_+=-1/b. This allows us to conjecture the following asymptotic behavior of the diagrams. Consider the specializations x_i = q^i-1 and y_j = q^b(j-1). * For b < 0, the limit density of the measure μ_n,k^b is ρ(t) = b / (b-1). Moreover, the limit shape Ω(t) is a straight line with slope 1+b/1-b with the left (resp. right) support endpoint being 1 for q > 1 (resp. c for q < 1); explicitly Ω(t)=1+b/1-bt+2/1-b (resp. Ω(t)=1+b/1-bt-2bc/1-b). * For b > 0, the limit shape for μ_n,k^b is the empty (resp. full) diagram for q < 1 (resp. q > 1). * The correlation kernel for t in the support interval converges to the discrete sine kernel lim_n,k→∞𝒦(nt+l,nt+l') = sin( πρ(t) · (l-l') )π(l-l') if l ≠ l', ρ(t) if l = l'. In particular, if b = -1/c, then the limit shape for q > 1 ends at the right corner. We conjecture that fluctuations in the corner are described by a q-analogue of the discrete Hermite kernel. This kernel should be close to the discrete Hermite kernel for q close to 1. In Fig. <ref>, we present the sample distributions of the first row length for the cases q=1.2, k>n and q=0.8, n<k and we see that they are close to the discrete Hermite distributions. Similar to the constant case in Section <ref>, we can extend the analysis in this section to the case of multiple parameters. We can take x_i=q^α_1i for i=1,…,⌊ A_1n⌋, x_i=q^α_2i for i=⌈ A_1n⌉,…,⌊ (A_1+A_2)n⌋ and so on with constants α_1,…, α_u and shares A_1,…,A_u, where ∑_i=1^uA_i=1. Similarly denoting the constants for y_j by β_1,…, β_v and shares by B_1,…,B_v such that ∑_j=1^vB_j=c, we get a higher order polynomial equation instead of (<ref>) that can be solved asymptotically in the same way. We then conjecture the convergence of the correlation kernel to the sine kernels with constant densities on multiple intervals as demonstrated in Fig. <ref>. § CONCLUSION AND OPEN PROBLEMS We have proven the asymptotics of determinantal ensembles in various regimes for general specializations. In particular, we have considered limit shapes for single-interval and multi-interval support, as well as bulk, edge, Pearcey and corner fluctuations. Nevertheless some questions remain open, and we list them below. * In the analysis of bulk asymptotics we have used that f(s)>0, g(s)≥ 0 to demonstrate convergence to the discrete sine kernel. We expect this result to hold for f(s)≥ 0. One might need to estimate the number of complex roots for transcendental equations in this case. * We have demonstrated that fluctuations near the corner are described by the discrete Hermite kernel. It remains an open problem to prove that conditions for the limit shape to end in the corner stated in Conjecture <ref> are equivalent. * In the multi-interval case it would be interesting to check for the existence of the higher-order Airy-like (or Pearcey-like) behavior. * The case of principal specialization with q=const considered in Section <ref> is not included in the general setup of the present paper. The use of asymptotic solutions allows us to conjecture the limit shape in this case, and we have numerical evidence for the fluctuations. Yet the proofs require another technique. 
For b>0, the limit shape degenerates to empty or fully-filled diagram, but it is possible that there is non-trivial behavior in the corner after suitable rescaling. * Similar analysis for other classical dual pairs of Lie groups is another promising direction of research. In general, limit shape for the measures, given by skew Howe duality for pairs (_2n+1, _2k), (_2n, _2k), and (_2n, _k) are known only for the case when all specialization parameters are equal to 1 and the probability of a diagram is proportional to the dimension of an irreducible component <cit.>. The fluctuations in this case can be studied using semiclassical orthogonal polynomials as demonstrated in <cit.>, but detailed analysis of various asymptotic regimes have not been carried out. In Section <ref> we have formulated a hypothesis for the limit shape of symplectic diagrams for a general specialization which can be stated for other pairs as well. A free fermionic approach might be used to establish this result, but it requires additional techniques. On the other hand, (_n,_2k) Howe duality can be treated using the free fermion formalism as demonstrated in <cit.> or a linear algebraic approach via the Eynard–Mehta theorem (as discussed in <cit.>) as demonstrated in <cit.>. * Study of transition probabilities for the presented limit shapes can be also of interest. For example, for Schur–Weyl duality the transition probability converges to Marchenko–Pastur law as demonstrated in <cit.>. We expect to see the Marchenko–Pastur distribution for the single-interval support of the limit shape. The multi-interval case is an open question. * Another reasonable problem is to compute the entropy of the measures considered in the present paper, similar to what was done for the Plancherel measure <cit.> and Schur–Weyl measure <cit.>. * Moreover, one can study limit shapes of the lozenge or domino tilings, presented in Fig. <ref> and Fig. <ref> by using well-known techniques of <cit.>. In these cases we can consider limit shapes to be surfaces in three-dimensional space. Note, that rearrangement of the specialization parameters x_i, y_j drastically changes the picture here, as demonstrated in Fig. <ref>; contrast this with Fig. <ref>. In the case of q=const, we expect the limit surface to be piecewise flat, as can be seen in the right panel of Fig. <ref>. § SAMPLING CODE We provide some code to generate the samples using SageMath <cit.>. However, we have used specialized code to generate our figures. def sample(n, k, f, g, **kwds): M = matrix([[0 if random() <= 1 / (1 + f(i/n) * g(j/k)) else 1 for j in range(1,k+1)] for i in range(1,n+1)]) P,Q = RSK(M, insertion=RSK.rules.dualRSK) data = list(P.shape()) data += [0] * (n - len(data)) data = (((val-i)/n, (val+i)/n) for i,val in enumerate(data)) P = polygon2d([(-1,1), (0,0), (k/n,k/n), (k/n-1,k/n+1)], color='black', fill=False) P += line(data, thickness=2, **kwds) P.set_aspect_ratio(1) return P We use this as sage: sample(200, 300, lambda x: x^.2, lambda y: y^5).show(figsize=20) alpha

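For instance, an exponential specialization of the kind discussed above can be sampled in the same way (the values γ = 1.5 and c = 2 below are illustrative choices):

sage: sample(300, 600, lambda s: exp(-1.5*s), lambda s: exp(3.0*s)).show(figsize=20)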
http://arxiv.org/abs/2408.11562v2
20240821121634
A Joint Noise Disentanglement and Adversarial Training Framework for Robust Speaker Verification
[ "Xujiang Xing", "Mingxing Xu", "Thomas Fang Zheng" ]
cs.SD
[ "cs.SD", "eess.AS" ]
§ ABSTRACT Automatic Speaker Verification (ASV) suffers from performance degradation in noisy conditions. To address this issue, we propose a novel adversarial learning framework that incorporates noise disentanglement to establish a noise-independent, speaker-invariant embedding space. Specifically, the disentanglement module includes two encoders for separating speaker-related and speaker-irrelevant information, respectively. The reconstruction module serves as a regularization term to constrain the noise. A feature-robust loss is also used to supervise the speaker encoder to learn noise-independent speaker embeddings without losing speaker information. In addition, adversarial training is introduced to discourage the speaker encoder from encoding acoustic condition information, thereby achieving a speaker-invariant embedding space. Experiments on VoxCeleb1 indicate that the proposed method improves the performance of the speaker verification system under both clean and noisy conditions. § INTRODUCTION Automatic speaker verification (ASV) aims to verify the identity of a speaker using their voice <cit.>. The most advanced speaker recognition systems <cit.> can achieve remarkable performance under controlled acoustic conditions. However, in real environments, the degradation of speech signals caused by background noise can significantly reduce the performance of speaker recognition systems <cit.>. This is because noise can disrupt the voiceprint characteristics of clean speech and cause a distribution mismatch between test speech and training speech, which is typically devoid of noise. In recent years, extensive research has been conducted on reducing the adverse effects of noise on speaker recognition systems <cit.>. One method is to extract noise-robust speaker embeddings by reducing the embedding distance between noisy/clean pairs. MohammadAmini et al. <cit.> proposed an optimal training strategy to make the x-vector extracted in noisy environments close to the corresponding x-vector in clean environments. Traditional speech enhancement (SE), which aims to improve speech quality by suppressing noise, may be detrimental to speaker verification <cit.>. Unlike traditional SE, joint training of speaker recognition systems and front-end enhancement modules is a novel approach <cit.>. Han et al. <cit.> utilized the combined model of SE and speaker verification as a pre-trained model to extract noise-robust embeddings. Other methods for extracting robust speaker embeddings have also been explored. Yu et al. <cit.> presented context-aware masking to extract robust speaker embeddings by enabling the neural network to focus on the speaker of interest and blur irrelevant noise. Data augmentation is also one of the most commonly used methods to improve the robustness of speaker recognition systems. Wang et al. <cit.> proposed a novel difficulty-aware semantic augmentation approach for generating diverse training samples at the speaker embedding level. Joint training of speaker recognition systems using clean and noisy data often yields satisfactory results <cit.>, but the performance of the SV system degrades sharply when facing unseen noises. To address this challenge, a common approach is to treat noisy speech and clean speech as different domains and obtain an invariant speaker embedding space through adversarial training <cit.>.
Another approach is to learn feature representations that are independent of noise through disentanglement learning <cit.>. However, we find that noise disentangling can lead to the loss of some speaker-related information under clean conditions, resulting in poor performance of the SV system. In addition, few studies simultaneously consider extracting noise-independent speaker embeddings and establishing speaker invariant embedding spaces. Inspired by this, we propose a noise disentanglement network architecture based on adversarial training to extract robust speaker embedding. Firstly, the disentanglement module includes a speaker encoder and a speaker-irrelevant encoder for decoupling speaker-relevant embedding and speaker-irrelevant embedding, respectively. The reconstruction component functions as a regularization constraint on the noise factor. And a feature-robust loss function guides the speaker encoder to learn noise-independent embeddings while preserving speaker information. In addition, adversarial training prevents speaker encoder from encoding various noisy information to promote model learning for more general representations. Experimental results confirm that our proposed method can achieve optimal performance under all conditions. § RELATED WORK §.§ TDNN for deep speaker embedding The most commonly used deep neural networks for extracting speaker embeddings are residual neural networks (ResNet) <cit.>, time-delayed neural networks (TDNN) <cit.>, or convolutional neural networks (CNN) <cit.>. In this study, we used ECAPA-TDNN <cit.> to extract speaker embedding. In addition to applying statistics pooling to project variable-length utterances into fixed-length speaker embeddings, ECAPA-TDNN proposes further architecture enhancements to both the TDNN architecture and statistics pooling layer. Additional skip connections are introduced to propagate and aggregate channels throughout the system, and channel attention using global context is added to the frame layers and statistics pooling layer. Finally, the speaker embedding is extracted through a fully connected layer. §.§ NDML-based method Our method is related to the recently proposed Noise-Disentanglement Metric Learning (NDML) method <cit.>, which is a SV system based on noise-disentanglement with metric learning to tackle the challenge of noise robustness under noisy environments. Inspired by NDML, we propose a novel noise-disentanglement network architecture based on multi-task adversarial training to achieve noise robustness. We will discuss their differences and emphasize the advantages of our approach in Section 3. § PROPOSED METHODS The proposed noise-disentanglement based on adversarial training architecture consists of three modules: a backbone B, a disentanglement module and an adversarial training module, as illustrated in Figure <ref>. The disentanglement module includes a speaker encoder E_s, a speaker-irrelevant encoder E_i and a reconstruction module D. And the adversarial training module, which includes a binary domain classifier with a gradient reversal layer, is used to discourage E_s from encoding acoustic condition information. The parameters of the backbone, speaker encoder, speaker-irrelevant encoder and decoder are accordingly denoted as θ, ϕ _s, ϕ _i and ϕ _d, respectively. Finally, the reconstruction loss, feature-robust loss, classification loss and adversarial loss are used jointly to optimize the speaker encoder and backbone network. 
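The gradient reversal layer used in the adversarial training module can be implemented as a custom autograd function that acts as the identity in the forward pass and flips (and scales) the gradient in the backward pass. A minimal PyTorch sketch (class names, the hidden size, and the embedding dimension here are our own illustrative choices):

import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)                      # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None    # reversed, scaled gradient

class DomainClassifier(torch.nn.Module):
    def __init__(self, dim=192, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 128), torch.nn.ReLU(), torch.nn.Linear(128, 2))

    def forward(self, emb):                      # emb: speaker embeddings S_c or S_s
        return self.net(GradReverse.apply(emb, self.lambd))

With such a layer in place, the adversarial term enters the overall objective as the cross-entropy of the binary domain classifier, and the reversed gradient discourages the backbone and speaker encoder from encoding augmentation information.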
§.§ Noise disentanglement Noise can disrupt the voiceprint features of clean speech. To alleviate this, we propose the noise disentanglement for purifying clean speaker information from corrupted speech at the speaker embedding level. Different from the NDML <cit.> that focuses on the disentanglement of feature level, which is susceptible to noise interference, better results can be achieved at the deep speaker embedding level decoupling. Speaker encoder E_s and speaker-irrelevant encoder E_i are designed to capture speaker representation S_s and speaker-irrelevant representation S_i from the noisy speaker embedding S_n, respectively. The reconstruction module serves as a constraint term to promote decoupling. Then, the concatenation of S_s and S_i serves as input to decoder D to reconstruct noisy speaker embedding S_n. MSE loss is used to minimize the distance between S_n and Ŝ_n, as follows: L_rec = 1/N∑_i=1^N ( S_n^i - Ŝ_n^i )^2 where N is the batch size. A feature-robust loss between clean speaker embedding S_c and decoupled noisy embedding S_s is optimized to supervise the speaker encoder E_s to generate a noise-independent speaker embedding without losing speaker information, as follows: L_fr = 1/N∑_i=1^N ( S_c^i - S_s^i )^2 Then, S_c and S_s are fed into the speaker classifier simultaneously, to calculate the classification loss using AAM-Softmax: L_cls = - 1/2N∑_i=1^2N loge^s· cos(θ _y_i + m)/e^s· cos(θ _y_i + m) + ∑_j=1,j y_i^C e^s· cos(θ _j ) where y_i represents the speaker label of the i-th utterance, s and m are two hyperparameters for AAM-Softmax. §.§ Adversarial training However, noise-disentanglement does not fully separate speaker information from speaker-irrelevant information. To increase the degree of disentanglement and establish a speaker-invariant space, we propose adversarial training to discourage E_s from encoding acoustic condition information. In order to utilize adversarial training in this case, we use augmentation labels (raw/augmented) instead of acoustic condition labels. Therefore, the domain classifier is designed as a binary classifier to maximize the correct prediction of augmentation labels for speaker embedding S_c and S_s. And during backpropagation, the gradient reversal layer is used to force the backbone B and speaker encoder E_s to generate speaker embeddings independent of noise, making it impossible for the domain classifier to distinguish, thereby achieving a minimax game. The adversarial cost function L_adv is defined as the cross-entropy, L_adv = - 1/2N∑_i=1^2N a_i· log(Softmax(F(S_a^i))) where a_i is the augmentation label of the i-th utternace, F is the domain classifier, and S_a is the set of S_c and S_s. Through adversarial training, the backbone B and speaker encoder E_s can be maximally motivated to learn noise-independent speaker embeddings and achieve speaker invariant embedding space. The whole cost function L is formulated below: L = L_rec + L_fr + L_cls - λ L_adv where λ is a positive gradient reversal coefficient that controls the trade-off between multiple objectives during training process. For each step, ϕ _s^t is updated to the value of ϕ _s^t+1 using reconstruction loss L_rec, feature-robust loss L_fr, classification loss L_cls and adversarial loss L_adv, as follows: ϕ _s^t+1 = ϕ _s^t - α▽ _ϕ _s^t ( L_rec + L_fr + L_cls - λ L_adv) where α is the learning rate. § EXPERIMENTS §.§ Datasets Following the common experiment settings <cit.>, experiments are conducted on the VoxCeleb1 <cit.> dataset. 
The development set contains 148642 utterances from 1211 speakers. And the test set contains 4874 utterances from 40 speakers, which constructs 37720 test trials. Since the dataset is collected in the wild, the speech segments are corrupted with real-world noise. But we assume the raw data to be a clean dataset and generate noisy data based on this raw data. The MUSAN <cit.> dataset is used as the source of noise, which contains 60 hours of speech, 42 hours of music and 6 hours assorted noise. The MUSAN dataset is divided into two non-overlapping subsets for generating noisy training and testing utterances respectively. At the training stage, for each clean utterance, one noisy utterance is generated at the random SNR level from 0dB to 20dB with a random noise type. At the testing stage, we evaluate the performance of the SV systems under seen and unseen noisy environments. For the seen noisy environments, the noise data is sampled from the remaining half of the MUSAN dataset. For the unseen noisy environments, we use NoiseX-92 <cit.> dataset and Nonspeech dataset as another noise source to generate noisy testing utterances. The NoiseX-92 dataset includes 15 kinds of noise, such as White Noise and Pink Noise. The nonspeech dataset consists of 100 types of noise, which is collected in various life scenarios. §.§ Implementation details The input features are 80-dimensional log mel spectrogram features from a 25 ms window with a 10 ms frame shift, which is normalized through cepstral mean subtraction and no voice activity detection is applied. During the training stage, 3s segments are randomly selected from each original utterance. Additionally, SpecAugment <cit.> is applied on the log mel spectrogram of the samples, where 0 to 10 channels in the frequency domain and 0 to 5 frames in the time domain are randomly masked. One clean and one noisy utterance per 150 randomly selected speakers, totaling 300 utterances, are grouped as one batch and fed into the systems. All systems are trained using Additive Angular Margin Softmax (AAM-softmax) with a margin of 0.2 and a scaling factor of 30, except that the loss function for the domain classifier is defined as cross-entropy. For optimization, the Adam optimizer with an initial learning rate of 0.001, a learning rate decay of 0.97 and the weight decay of 2e-5 is used to train the whole network. ECAPA-TDNN network is used as the speaker embedding extractor for its simplicity, with 1024 channels in the convolutional frame layers. After training, the 192-dimensional speaker embeddings are extracted through the backbone and speaker encoder. The whole utterance is used to extract speaker embeddings during the test stage. The cosine similarity is used for scoring. And the equal error rate (EER) is used as the performance metric. The speaker encoder and speaker-irrelevant are 2-layer AutoEncoders with hidden size of 1024. The decoder is almost the same as the encoders. §.§ Results Table <ref> and <ref> show the performance under the seen and unseen noisy conditions, respectively. To observe the embedding distribution, we selected 40 speakers from the VoxCeleb1 test set, and randomly sampled 20 utterances from each speaker to generate speaker embeddings. The t-SNE visualization of speaker embeddings in visible and invisible noise conditions are plotted in Figure <ref>. Clean means the baseline is trained on the original dataset. Joint means the baseline is trained on the original dataset and noisy dataset. 
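As a reference for the metric, the EER of a trial list can be computed from the cosine scores and the ground-truth trial labels, for example with scikit-learn's ROC routine (a sketch of ours, not the exact evaluation script used here):

import numpy as np
from sklearn.metrics import roc_curve

def compute_eer(labels, scores):
    # labels: 1 for target trials, 0 for non-target; scores: cosine similarities
    fpr, tpr, _ = roc_curve(labels, scores)
    fnr = 1.0 - tpr
    idx = np.nanargmin(np.abs(fnr - fpr))        # operating point where FAR equals FRR
    return 0.5 * (fpr[idx] + fnr[idx])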
As anticipated, the performance of the baseline, trained on the original dataset, markedly degrades in noisy environments. Data augmentation enhances the robustness of the model to noise. While Joint training surpasses clean training in effectiveness, the extent of this improvement is constrained. The model trained with noise-disentanglement metric learning (NDML) <cit.> is used to compare. Table <ref> illustrates that our method can generally achieve the best results under the clean and seen noisy conditions. In the average of overall conditions, the proposed NDAL achieves 33.56% relative reduction in the terms of EER compared to the baseline joint training model. For clean scenarios, NDAL outperforms baseline 27.75% in the EER. And our method has achieved better performance compared to NDML <cit.>. Experimental results reveal that our method yields greater improvement in noisy environments, which is attributed to the robustness of our method to noise. In addition, optimizing feature-robust loss can effectively ensure that speaker related information is not lost under clean conditions while generating a noise independent speaker embedding, which is an essential part. Table <ref> shows that our proposed method outperforms the baseline under the unseen noisy environments. Although Nonspeech dataset contains a wider variety of noise types compared to NoiseX-92 dataset, the performance of our model in these two unseen environments is essentially similar. This further demonstrates that our method can be robust to unseen noise. Due to the lack of prior knowledge of noise distribution, noise problems become more difficult for invisible environments <cit.>. However, our model performs well in unseen environments, exhibiting strong generalization ability. On average, compared to the baseline, NDAL achieves 32.38% relative reduction in EER. This performance improvement is attributed to our enhanced-disentanglement module and adversarial training approach, which enables the model to learn a speaker-invariant embedding space that is noise-independent. As shown in Figure <ref>, in visible noise environments, our method can achieve better speaker embedding distribution compared to the baseline, which is more significant in invisible noise environments. §.§ Ablation studies The ablation study is conducted to evaluate the effect of the individual components in Section 3. NDAL (w/o AL) means that we only keep the disentanglement module. NDAL (w/o Dis) signifies that we train the baseline with adversarial training. NDAL (w/o AL) achieves 28.77% and 28.57% relative reduction in EER compared to baseline on average under both seen and unseen scenarios, respectively. And NDAL (w/o Dis) obtains 28.43% and 27.35% relative reduction in EER compared to baseline on average under both seen and unseen scenarios, respectively. It can be observed that disentanglement module and adversarial training play a crucial role in improving the system performance. Furthermore, the synergistic combination of these two approaches yields the most optimal performance. § CONCLUSION In this work, we proposed a novel speaker verification system based on noise-disentanglement adversarial training to address the challenge of noise robustness under noisy environments. Specifically, the disentanglement module is used to capture noise-robust speaker embeddings. Adversarial training is used to discourage speaker encoder from encoding acoustic information, generating a speaker-invariant embedding space. 
Experimental results indicate that our method can enhance the robustness of the SV system under both seen and unseen noisy conditions.
http://arxiv.org/abs/2408.11662v1
20240821143750
Optimizing Federated Graph Learning with Inherent Structural Knowledge and Dual-Densely Connected GNNs
[ "Longwen Wang", "Jianchun Liu", "Zhi Liu", "Jinyang Huang" ]
cs.LG
[ "cs.LG" ]
§ ABSTRACT Federated Graph Learning (FGL) is an emerging technology that enables clients to collaboratively train powerful Graph Neural Networks (GNNs) in a distributed manner without exposing their private data. Nevertheless, FGL still faces the challenge of the severe non-Independent and Identically Distributed (non-IID) nature of graphs, which possess diverse node and edge structures, especially across varied domains. Thus, exploring the knowledge inherent in these structures becomes particularly crucial. Existing methods, however, either overlook the inherent structural knowledge in graph data or capture it at the cost of significantly increased resource demands (e.g., FLOPs and communication bandwidth), which can be detrimental to distributed paradigms. Inspired by this, we propose FedDense, a novel FGL framework that optimizes the utilization efficiency of inherent structural knowledge. To better acquire knowledge of diverse and underexploited structures, FedDense first explicitly encodes the structural knowledge inherent within graph data itself alongside node features. Moreover, FedDense introduces a Dual-Densely Connected (DDC) GNN architecture that exploits the multi-scale (i.e., one-hop to multi-hop) feature and structure insights embedded in the aggregated feature maps at each layer. In addition to the exploitation of inherent structures, we consider resource limitations in FGL, devising exceedingly narrow layers atop the DDC architecture and adopting a selective parameter sharing strategy to reduce resource costs substantially. We conduct extensive experiments using 15 datasets across 4 different domains, demonstrating that FedDense consistently surpasses baselines by a large margin in training performance, while demanding minimal resources. § INTRODUCTION The rising interest in Graph Neural Networks (GNNs) is fueled by the extensive availability of graph data across various domains, such as chemical molecules <cit.>, bioinformatics <cit.>, social networks <cit.>, and computer vision <cit.>. Traditional GNNs require graph data to be centralized for processing and analysis. However, escalating privacy concerns and the increased need for cross-domain collaboration have made addressing privacy breaches and data silos crucial. To this end, Federated Graph Learning (FGL) <cit.>, which integrates Federated Learning (FL) <cit.> into the training process of GNNs, has been proposed. FGL effectively addresses both privacy breaches and data silos through a distributed paradigm, allowing clients to collaboratively train GNNs without disclosing their private data, thereby unlocking the full potential of GNNs. Existing FGL methods predominantly rely on traditional GNNs that employ a feature-based message-passing mechanism, where each graph node representation is iteratively updated by aggregating features from its one-hop to multi-hop neighbors. However, this feature-based mechanism overlooks the unique structural information inherent in graph data. The structures of nodes and edges are not merely supplementary but are fundamental characteristics of graphs, embodying knowledge distinct from features <cit.>. As shown in Figure <ref>, feature and structure heterogeneity across different domains exhibit completely disparate patterns. For feature information, even data from the same domain (e.g., DD (BIO) and ENZYMES (BIO)) reveal significant variation, while their structure remains quite similar.
Conversely, data from different domains (, Bioinformatics and Social Networks) exhibit very small feature heterogeneity but substantial differences in structural information. These findings indicate that structural knowledge plays a unique role alongside features in graph data, particularly across domains. Thus, a local model on each client that relies solely on features may fail to capture structural insights, potentially resulting in misaligned updates across clients and hindering overall model performance <cit.>. To this end, an emerging area of research in FGL focuses on structural knowledge utilization. Among these studies, FedStar <cit.> achieves state-of-the-art accuracy across various datasets by employing a feature-structure decoupled dual-channel GNN architecture. This approach is promising as it isolates structural knowledge learning from features and facilitates its sharing in FGL. However, FedStar's basic dual-channel design struggles with efficiency. While the additional channel enhances the learning of structural knowledge, it also significantly increases resource demands. Moreover, the simple decoupling design lacks interaction between the two channels, which may result in insufficient utilization of the knowledge each channel learns individually. This limitation can hinder their ability to accommodate different data distributions (, non-IID) across clients, leading to persistently inconsistent updates and ultimately degrading the performance of the global model. We then conduct preliminary experiments to better illustrate the above. As shown in Table <ref>, in a cross-domain non-IID setting, FedStar shows only a slight performance improvement (, 0.42%) over the local training with single-channel GNNs while incurring approximately an additional 10G FLOPs per client per round. This significant local computational demand poses a critical challenge for FGL, especially considering that many client devices (, smartphones and tablets) have limited computational resources and participation time. Such demands may prevent clients from completing tasks, thereby severely impairing both local and global model performance. Furthermore, FedStar lacks parameter efficiency. The basic dual-channel architecture results in a local model size that is more than double that of single-channel networks, potentially leading to heavy network bandwidth and communication delays during model deployment, especially when the participating clients are enormous. These limitations indicate that the basic decoupling design is inefficient, and to address the non-IID data issues, the inherent structural knowledge across different domains still requires further exploration. To overcome the aforementioned limitations, we introduce FedDense, an FGL framework that optimizes the efficiency of structural knowledge utilization with dual-densely connected GNNs. To achieve comprehensive learning on diverse and underexploited structures, FedDense first introduces a structural vector that explicitly encodes the structural knowledge inherent within the graph itself. Furthermore, FedDense advances knowledge acquisition on top of the basic decoupled GNNs. Specifically, we introduce a Dual-Densely Connected (DDC) architecture, where the stacked GNN layers in both decoupled channels are densely connected with their feature maps, facilitating the multi-scale (, one-hop to multi-hop) insights embedded in feature maps of both channels to be comprehensively tapped. 
Finally, considering the resource constraint in FGL, we propose a very narrow layer design and implement a selective parameter sharing strategy within the DDC architecture to achieve high efficiency. As a result, each client in FedDense is capable of performing tasks with minimal resource demands. There are three key contributions of our work:

∙ In FedDense, we optimize the structural knowledge utilization within the graph data itself and the feature maps of each GNN layer, thereby mitigating non-IID data issues caused by diverse and underexploited graph structures.

∙ By designing narrow layers and a selective parameter sharing strategy, FedDense ensures excellent performance while significantly reducing the resource demands of model training, effectively addressing the efficiency concerns in FGL.

∙ We conduct extensive experiments in four non-IID settings with 15 datasets across 4 different domains, demonstrating that FedDense consistently outperforms baselines by a large margin in terms of test accuracy and convergence speed while requiring minimal computational demand and communication cost.

§ RELATED WORK

§.§.§ GNNs with Inherent Structure Knowledge. Graph Neural Networks (GNNs) are a class of neural networks specifically designed to process and analyze graph-structured data <cit.>. A fundamental characteristic of most GNNs is the message-passing mechanism, where each node representation is updated by aggregating information from its neighboring nodes' features <cit.>. This mechanism enables GNNs to excel in tasks such as node classification, link prediction, and graph classification <cit.>. However, recent studies increasingly recognize the fact that the feature-based message-passing mechanism falls short in differentiating and capturing the inherent structural knowledge of graph data <cit.>. Most existing solutions aim to extract structural/positional encodings to explicitly represent structural information <cit.>. Nevertheless, these approaches often overlook the additional insights that structural representations can provide at the feature map level, highlighting the need for further research in this area.

§.§.§ Federated Graph Learning (FGL). FGL is an emerging field that allows GNNs to train on distributed graph data, thereby enhancing the potential of GNNs <cit.>. A major challenge in FGL is the severe non-IID nature of graph data. Unlike typical Euclidean data, such as images, graphs are inherently more heterogeneous <cit.>. To address this challenge, <cit.> introduce a dynamic client clustering framework that reduces structural and feature heterogeneity within clusters, thereby improving learning efficiency and performance. <cit.> incorporate meta-learning techniques to manage non-IID graph data while maintaining generalizability. Additionally, <cit.> propose a feature-structure decoupled framework to extract and share structural information among graphs, enhancing the ability to capture structure-based domain-invariant knowledge. However, while existing FGL methods have made breakthroughs in addressing non-IID issues, they often neglect considerations of resource consumption (e.g., communication bandwidth), which are essential for distributed paradigms. Therefore, ensuring the efficiency of FGL while addressing the non-IID problem remains a challenging and underdeveloped area in FGL research.

§ PRELIMINARIES

§.§ Graph Neural Networks (GNNs)

A typical graph G = (V, E) consists of a set of nodes V and a set of edges E, where each node v ∈ V is associated with a feature vector 𝐱_v.
We denote the representation of node v as 𝐡_v, and it can be iteratively updated by aggregating the representations of its one-hop neighbors 𝒩(v) as: 𝐦_v^(ℓ)=AGGREGATE({𝐡_u^(ℓ-1)|u∈𝒩(v)}), 𝐡_v^(ℓ)=UPDATE(𝐡_v^(ℓ-1),𝐦_v^(ℓ)), where 𝐡_v^(ℓ) is the updated representation of the node v at the ℓ-th layer. Different AGGREGATE and UPDATE functions allow for the implementation of various types of GNNs with distinct focuses <cit.>. GNNs can be applied to various tasks, such as node classification, link prediction, and graph classification. In this paper, we focus primarily on graph classification, where GNNs combine the representations of all nodes to form a graph-level representation 𝐡_G. This is typically achieved through pooling methods, such as average pooling, sum pooling, and max pooling.

§.§ Federated Graph Learning (FGL)

A typical FGL system consists of a Parameter Server (PS) and a set of N clients that collaboratively train a global GNN model. Each client i holds a private graph dataset d_i, and the total samples across all clients are denoted as D. The training process of FGL is divided into T rounds. At the start of each training round t ∈{1, …, T}, the PS distributes the global model parameters w̅^(t) to all clients. Upon receiving w̅^(t), each client i performs local training on its private graph data d_i and uploads the updated model parameters w_i^(t) back to the PS. At the end of round t, the PS aggregates these updates for the next round. The typical aggregation method used in FGL is FedAvg <cit.>, which averages the model updates from all clients by: w̅^(t+1)=∑_i=1^N|d_i|/|D| w^(t)_i, where |d_i| denotes the size of data samples of client i and |D| represents the total size of samples over all clients. The global model optimization in FGL aims to minimize the overall loss across all participating clients, denoted as: min_(w_1,w_2,⋯,w_N)1/N∑_i=1^N ℒ_i(w_i), where ℒ_i(·) and w_i are the loss function and model parameters of client i, respectively. However, due to the prevalence of non-IID data in real-life graph datasets, the performance of FGL is often suboptimal.

§ METHODOLOGY

In this section, we detail our proposed FedDense framework, which is illustrated in Figure <ref>. FedDense focuses on better exploiting the inherent structural knowledge at two levels: the data level and the feature map level. At the data level, FedDense introduces a structural vector alongside node features (①) to explicitly capture the unique structural patterns inherent in graph data itself. At the feature map level, we propose a Dual-Densely Connected GNN architecture. Specifically, we first employ dual-channel (② and ③) GNNs to separately learn feature and structural knowledge with the decoupled vectors. Additionally, FedDense establishes dense connections between the dual-channel GNNs by integrating their feature maps at each layer in the feature channel (③) and aggregates the collective knowledge within all hidden layers throughout the whole network to generate the final graph-level representation (④). Furthermore, we design an efficient parameter sharing scheme that shares only the selected part of model parameters to achieve high communication efficiency (⑤). Finally, we analyze the resource consumption of our framework and demonstrate that FedDense is highly resource-efficient with a significantly narrow layer design, achieving fewer model parameters, lower computational demand, and reduced communication costs.
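To fix ideas before describing each component, the following is a minimal sketch of a single FedDense communication round. The names (client_update, fgl_round, a generic train_locally routine) are illustrative placeholders rather than the actual implementation: each client receives the global structural parameters, trains its dual-channel model locally, and returns only the structural part, which the server averages with the sample-size weights |d_i|/|D| used by FedAvg.

```python
# Minimal sketch of one FedDense communication round (placeholder names, not the
# released code): only the structural parameter group is exchanged with the server.

def client_update(model, global_structural, data, train_locally):
    model["structural"] = dict(global_structural)   # receive the global structural parameters
    train_locally(model, data)                      # local training updates both channels
    return model["structural"], len(data)           # upload only the structural part and |d_i|

def server_aggregate(updates):
    # Weighted average of the uploaded structural parameters, weights |d_i| / |D|.
    total = float(sum(n for _, n in updates))
    keys = updates[0][0].keys()
    return {k: sum(w[k] * n / total for w, n in updates) for k in keys}

def fgl_round(global_structural, clients, train_locally):
    updates = [client_update(m, global_structural, d, train_locally) for m, d in clients]
    return server_aggregate(updates)

# Toy usage with scalar "parameters" and a no-op local trainer.
clients = [({"structural": {}, "feature": {"u": 0.0}}, [1, 2, 3]),
           ({"structural": {}, "feature": {"u": 0.0}}, [4, 5])]
new_global = fgl_round({"w": 1.0}, clients, train_locally=lambda m, d: None)
```

In this sketch only the structural dictionary ever leaves a client; the Selective Federated Sharing subsection below formalizes this choice and its consequences.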
§.§ Structural Patterns Decoupling Existing GNNs primarily rely on feature information for message-passing. However, although GNNs implicitly incorporate structural knowledge through the message-passing mechanism by iteratively aggregating one-hop neighboring node features, this approach diminishes the direct learning of unique topological structures (, node degree) in graph data. As demonstrated in Figure <ref>, inherent structural patterns also carry significant and distinctive information, especially in cross-domain scenarios. Therefore, it is crucial to explicitly leverage structural knowledge and learn from both feature and structural information. To this end, inspired by <cit.>, we introduce a structural vector into each graph node. Specifically, in addition to the node's feature vector 𝐱_v, we extract and convert the node's underlying topological information into a structural vector 𝐬_v, defined as: 𝐬_v=f([s_1,s_2,s_3,…,s_n]), where [s_1,s_2,s_3,…,s_n] are structural information encodings. To capture the local/global structural patterns of graph data, potential options include one-hot degree vectors, random walk transition matrices, and positional embeddings <cit.>. The function f(·) serves as a fusion function, which can be implemented using techniques such as concatenation, fully connected layers, or pooling <cit.>. Notably, both the structural encodings [s_1,s_2,s_3,…,s_n] and the function f(·) can be tailored according to the specific task, highlighting the versatility and adaptability of this approach. Incorporating the structural vector adds an additional dimension of structural knowledge to each node, allowing the unique structural patterns embedded in the graph to be fully exploited. Consequently, the node representations become more robust and informative, combining both feature and structural information, denoted as {𝐱_v, 𝐬_v}. §.§ Dual-Densely Connected Architecture <cit.> introduce a feature-structure decoupled dual-channel GNN architecture in FGL. This approach provides a solid starting point by allowing independent capture and processing of feature and structural information in graph data. However, simple decoupling has its limitations, as it overlooks internal interactions within the decoupled networks and the potential and informative multi-scale insights across their feature maps. A traditional GNN layer can be viewed as an aggregation from one-hop neighbors. Therefore, each layer of stacked GNNs is able to fetch different scales of local or global feature and structure insights within multi-hops, indicating that the feature maps in GNNs are highly informative, especially in a decoupled dual-channel architecture. Inspired by this, we propose a novel Dual-Densely Connected (DDC) architecture. Specifically, we first employ dual-channel GNNs to separately learn feature and structural information with decoupled feature vector 𝐱_v and structural vector 𝐬_v at the data level. Additionally, we establish dense connections between the dual channels at the feature map level. Each layer in the feature channel receives additional inputs from the outputs of all preceding layers in both channels. This dual-dense connectivity allows the multi-scale insights of feature maps in both channels to be collectively leveraged. §.§.§ Initialization. The DDC architecture employs two parallel GNNs: one for feature information and one for structural information. Both channels start with a linear initialization layer. 
In the feature channel, the feature vector 𝐱_v of each node v is processed by a linear layer, transforming it into a hidden representation 𝐱_v^(0). Simultaneously, in the structural channel, the corresponding structural vector 𝐬_v is passed through a separate linear layer, producing the representation 𝐬_v^(0) with the same dimension as 𝐱_v^(0). §.§.§ Dual-Dense Connectivity. After initialization, both channels follow L stacked GNN layers. To maintain simplicity and preserve the integrity of structural information, the input to the ℓ-th GNN layer in the structural channel is directly derived from the output of the previous layer 𝐬_v^(ℓ-1) in this channel, where ℓ∈{1,⋯, L}. Meanwhile, to enhance internal interactions within the decoupled networks and leverage multi-scale information from their stacked hidden layers, each layer in the feature channel receives the feature maps of all preceding layers from both channels as input: 𝐜_v^(ℓ)=Concat[α_v^(ℓ),β_v^(ℓ)] , where Concat[·] denotes the concatenation operation. The α_v^(ℓ) and β_v^(ℓ) can be denoted as: α_v^(ℓ)=H_𝐱(Concat[𝐱_v^(0),𝐱_v^(1),⋯,𝐱_v^(ℓ - 1)]), β_v^(ℓ)=H_𝐬(Concat[𝐬_v^(0),𝐬_v^(1),⋯,𝐬_v^(ℓ - 1)]), where 𝐱_v^(0), 𝐱_v^(1), ⋯, 𝐱_v^(ℓ - 1) and 𝐬_v^(0), 𝐬_v^(1), ⋯, 𝐬_v^(ℓ - 1) refer to the feature maps produced in layers 0 through ℓ-1 from the feature and structural channels, respectively. We define H_𝐱(·) and H_𝐬(·) as non-linear transformations between hidden GNN layers in the feature channel, consisting of a composite function of operations such as Batch Normalization (BN), Dropout, rectified linear units (ReLU), and Pooling. For the final graph-level embedding 𝐡_G, rather than relying solely on the output of the final layer, we consider and concatenate the feature maps of all the hidden layer outputs generated across both channels. The concatenated representation is then transformed into the graph-level embedding via a readout function. With the DDC architecture, FedDense not only ensures the independent learning of structural knowledge with two parallel channels but also guarantees the different insights they learn individually are fully integrated and leveraged with dual-dense Connectivity. Additionally, by combining all feature maps throughout the network to generate the final graph embeddings, FedDense considers the collective knowledge across both channels, thus achieving robust and comprehensive learning of graph data. §.§ Selective Federated Sharing Unlike traditional FGL methods, where all model parameters are shared among clients, FedDense restricts parameter sharing to the structural parameters only, specifically the learnable parameters of each layer in the structural channel. Hence, Eq. (<ref>) can be reformulated as follows: w̅^(t+1)_s=∑_i=1^N|d_i|/|D| w^(t)_s,i where w̅^(t+1)_s represents the aggregated structural parameters at the PS, and w^(t)_s,i denotes the updated structural parameters of client i in round t. The feature parameters are neither shared nor updated through federated learning but are instead optimized locally within each client. This approach is adopted for three key reasons: (1) Communication Efficiency. Transmitting the full set of parameters from both the structure and feature channels can lead to significant communication delays and increased data transfer costs, especially in environments with constrained bandwidth. Therefore, selecting and transmitting only a subset of parameters is essential for maintaining efficiency. (2) The Significance of Structural Information. 
Structural information is crucial as it provides deep insights into the underlying topology of a graph. Elements such as node degree, connectivity, and overall graph structure encapsulate essential characteristics that are often more informative than feature attributes alone, especially in heterogeneous or cross-domain graphs. In scenarios where communication bandwidth is limited, prioritizing the transmission of structural parameters becomes advantageous. (3) Synergy enhancements via dual-dense connectivity. The dense integration of feature and structural channels at the feature map level enables synergistic interaction between these two types of information in the feature channel. Even though feature training is conducted locally, the structural knowledge shared through federated learning can significantly benefit feature learning. This dual-dense connectivity in the feature channel ensures that local feature updates are informed by the global structural context, leading to more robust and comprehensive training. Considering these factors, FedDense shares only the structural parameters to achieve reduced communication costs while preserving essential information. §.§ Analysis We assume that the primary resource bottlenecks of FedDense arise from the GNN layers within dual-densely connected channels and take GCN <cit.> as an analysis example. For a given graph G = (V, E), we denote |V| and |E| as the total number of nodes and edges. The dimensions of each input and output feature map for each GCN layer are denoted by a and b. We denote the time complexity as Θ and the number of parameters as |θ|. Generally, the layer parameters are predominantly determined by its weight matrix 𝐖∈ℝ^a× b. Therefore, for each GCN layer, we can derive that Θ =O(|E|a + |V|ab) and |θ| = a× b <cit.>, respectively. In FedDense, if the output size for all layers is set to the same value, denoted by the hyperparameter r, it follows that the ℓ-th GNN layer has 2×ℓ× r and r input feature maps in feature and structural channels, respectively. In this way, Θ and |θ| of the ℓ-th layer can be represented as O(|E|2ℓ r + |V| 2ℓ r^2) and 2ℓ r^2 for feature channel, and O(|E|r + |V|r^2) and r^2 for structural channel. It is noteworthy that GNNs often achieve optimal performance with shallow architectures, where ℓ in GNNs is relatively small, typically between 2 and 3 <cit.>, and thus can be treated as a constant when analysis. Therefore, Θ and |θ| of the ℓ-th GCN layer in both channels can be summed to O(|E|r + |V|r^2) and 2ℓ r^2+r^2, respectively. It is evident that for a given graph G, both Θ and |θ| of FedDense are significantly reduced as r decreases. Additionally, since the shared parameters in FedDense are limited to the structural channel only, the communication cost per round remains minimal due to the reduced model parameters. According to the above analysis, it is evident that for a given graph dataset, limiting the local model in FedDense to a narrow layer design (, r = 16 or r = 10) significantly reduces the resource demand in both local training and parameter sharing. Furthermore, thanks to our DDC architecture, even with a very narrow layer design, FedDense maintains efficient and comprehensive acquisition of both local and global knowledge. As shown in the next section, with a significantly small r compared to all baselines, FedDense achieves excellent results on the test datasets while minimizing computational demands and communication costs. § EXPERIMENTS §.§ Datasets and Experimental Setup §.§.§ Datasets. 
We utilize a total of 15 datasets <cit.> across 4 different domains: seven Molecules datasets (MUTAG, BZR, COX2, DHFR, PTC-MR, AIDS, NCI1), two Bioinformatics datasets (ENZYMES, DD), three Social Networks datasets (COLLAB, IMDB-BINARY, IMDB-MULTI), and three Computer Vision datasets (Letter-low, Letter-high, Letter-med). To simulate data heterogeneity in FGL, we establish four different non-IID settings: (1) a single-domain setting (, Single) using only the Molecules datasets; (2) a cross-domain setting (, Cross-Sim) using datasets from similar domains (Molecules and Bioinformatics); (3) another cross-domain setting (, Cross-Diff) utilizing datasets from completely different domains (Bioinformatics, Social Networks, and Computer Vision); and (4) a multi-domain setting (, Multi) incorporating datasets from all four domains. In each setting, the graph data for each client is derived from one of the corresponding datasets and is randomly split into a ratio of 8:1:1 for training, validation, and testing. §.§.§ Baselines. We employ five baselines in our experiments: (1) Local, where each client conducts model training locally without any communication with others; (2) FedAvg <cit.>, a standard FGL approach that aggregates client models by averaging their local updates; (3) FedProx <cit.>, where a regularization term in the loss function was proposed to handle system and statistical heterogeneity; (4) GCFL <cit.>, which tackles non-IID graph data through a dynamic clustering technique based on GNN gradients to group clients with similar data distributions; and (5) FedStar <cit.>, a state-of-the-art FGL framework that decouples structural and feature learning and sharing across diverse graph domains. §.§.§ Implementation Details. To construct the structural vector, we align with the settings used in FedStar <cit.>. Specifically, we concatenate two types of structural encodings: a degree-based embedding representing vertex degrees with one-hot encoding and a random walk-based positional embedding which is computed based on the random walk diffusion process <cit.>, both with dimensions of 16. The non-linear transformations in FedDense, consistent with other methods, apply ReLU followed by Dropout. We utilize a 3-layer GIN <cit.> in the feature channel for all methods and a 3-layer GCN <cit.> in the structural channel for FedDense and Fedstar. We set the hidden size to 64 for all baselines. For FedDense, the hidden size of each layer is controlled by the hyperparameter r. We use a batch size of 128 and the Adam optimizer <cit.> with a learning rate of 0.001 and a weight decay of 5×10^-4. The local epoch is set to 1, and the number of communication rounds is 200 for all FGL methods. All experiments are conducted on one NVIDIA GeForce RTX 4090 GPU and run for five random repetitions. More implementation details can be found in the Appendix. §.§ Experimental Results. §.§.§ Accuracy Performance. As shown in Table <ref>, FedDense surpasses all competing baselines in four non-IID settings. In Cross-Diff and Multi settings, where the data across clients is more heterogeneous, all baselines exhibit severe performance degradation, with most methods failing to surpass the Local baseline. However, under these highly heterogeneous conditions, FedDense (r = 32) achieves impressive average accuracy gains of 5.38% and 4.19%, respectively, significantly outperforming the existing state-of-the-art FedStar by notable margins of 4.98% and 1.61%. 
Remarkably, even with a small r (,r = 16), our framework still achieves excellent performance gain(, 4.60% and 3.23%) and continues to surpass FedStar (, 4.18% and 0.65%). The superior performance of FedDense can be attributed to its structural vector and DDC architecture. The additional dimension of structural knowledge and integration of both feature and structural insights significantly improve the knowledge acquisition across clients, thereby enhancing FedDense to model complex and diverse structural patterns in both local and cross-domain graphs. §.§.§ Convergence Analysis. Figure <ref> illustrates the average test accuracy with standard deviation curves during training across five random runs for all methods. In the Cross-Diff and Multi settings, where client data exhibits higher heterogeneity, FedDense consistently outperforms other methods in terms of average test accuracy and convergence speed. For instance, in the Cross-Diff setting, FedDense reaches 65% test accuracy by round 36 (r = 32) and 38 (r = 16), while FedAvg, FedProx, GCFL, and FedStar require 180, 165, 189, and 88 rounds, respectively, to reach 65% test accuracy. The significant improvement, especially over FedStar, demonstrates that our DDC architecture greatly reinforces the federated structure knowledge acquisition at each training round, thus greatly leveraging the advantages of knowledge sharing in FGL and speeding up the convergence. §.§.§ Communication Cost. Table <ref> presents the communication cost for FedAvg, FedStar, and FedDense. We divide communication cost into two parts: the parameter sharing payload per client per round during the FGL process and the size of the distributed local model during the deployment. Obviously, FedDense with r=16 only takes 14.7% of the payload relative to the standard FGL paradigm, FedAvg, while maintaining a nearly equivalent model size. Furthermore, FedDense significantly reduces the communication payload by approximately 74.7% compared to FedStar, with the model size being nearly 50.6% smaller. Considering the outstanding performance of FedDense in terms of accuracy and convergence speed, these results indicate that the narrow layer design and selective parameter sharing in FedDense significantly reduce communication costs while successfully guaranteeing that essential and informative knowledge is effectively learned both locally and globally, thereby endowing FedDense with high communication efficiency throughout the entire FGL training process. §.§.§ Computational Efficiency. One of the primary advantages of our proposed framework is its remarkable computational efficiency. As illustrated in Figure <ref>, FedDense significantly outperforms FedAvg and FedStar in terms of minimum average accuracy across five random repetitions while requiring minimal local computation. Although FedStar achieves better accuracy compared to FedAvg, its basic dual-channel GNN architecture introduces significant additional local computation demands, making it less suitable for distributed paradigms like FGL, especially given the often limited computational resources and participation time of each client. In contrast, FedDense stands out for its exceptional efficiency, achieving the best accuracy performance while demanding minimal computational resources in all non-IID settings. 
Notably, in the highly heterogeneous Cross-Diff setting, FedDense (r = 10) surpasses FedStar (k = 64) by a large margin in accuracy while requiring 27.8 times lower FLOPs per client per round, demonstrating that extracting knowledge from feature maps in each GNN layer significantly enriches both local and global knowledge acquisition in FGL. Despite constraints on computational resources and limited client participation time, FedDense delivers outstanding performance. § CONCLUSION This paper proposes an effective framework, FedDense, to optimize federated graph learning with inherent structural knowledge and dual-densely connected GNNs. To better exploit diverse structures, we decouple structural patterns at the data level and employ a dual-densely connected architecture at the feature map level. Moreover, we design narrow layers and adopt a selective parameter sharing strategy for high resource efficiency. The extensive experimental results demonstrate that FedDense can achieve state-of-the-art performance with minimum resource demands. § APPENDIX §.§ Experimental Details In this appendix, we provide detailed descriptions of the experimental setups and specific configurations that were not fully elaborated in the main text due to space limitations. §.§.§ Data Splitting Details. In this section, we detail the configuration of the data splitting across the four non-IID settings described in the main text. Specifically, For each non-IID setting—Single, Cross-Sim, Cross-Diff, and Multi—we systematically assigned datasets to clients and implemented a random split of 8:1:1 for training, validation, and testing. The precise configurations for each non-IID scenario are summarized in the tables below. §.§.§ Details of the Baseline Methods. We compare FedDense with five baselines. The details of these baselines are provided as follows. ∙ Local, where each client conducts model training locally without any communication with others. ∙ FedAvg, a standard FGL approach that aggregates client models by averaging their local updates. ∙ FedProx, where a regularization term in the loss function was proposed to handle system and statistical heterogeneity. The regularization term with importance weight μ is set to 0.01 in our experiments. ∙ GCFL, which tackles non-IID graph data through a dynamic clustering technique based on GNN gradients to group clients with similar data distributions. Two hyper-parameters are determining the clustering results, i.e., ϵ_1 and ϵ_2. To guarantee the performance of GCFL, we use the same values in the original study where ϵ_1 = 0.05 and ϵ_2 = 0.1. ∙ FedStar, a state-of-the-art FGL framework that decouples structural and feature learning and sharing across diverse graph domains. The structural embeddings for the structural channel are consistent with the original study. The concatenation of a degree-based embedding representing vertex degrees with one-hot encoding and a random walk-based positional embedding which is computed based on the random walk diffusion process, both with dimensions of 16.
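As a concrete illustration of how such a structural vector can be assembled, the following NumPy sketch builds a 16-dimensional one-hot degree embedding and a 16-dimensional random-walk embedding from an adjacency matrix and concatenates them. The random-walk part uses k-step return probabilities diag((D^-1A)^k), which is one common realization of a random-walk diffusion encoding; the function names are illustrative placeholders rather than the code used in the experiments.

```python
import numpy as np

def degree_one_hot(adj, dim=16):
    # One-hot vertex-degree embedding; degrees >= dim fall into the last bucket.
    deg = adj.sum(axis=1).astype(int)
    enc = np.zeros((adj.shape[0], dim))
    enc[np.arange(adj.shape[0]), np.minimum(deg, dim - 1)] = 1.0
    return enc

def random_walk_pe(adj, dim=16):
    # k-step return probabilities diag((D^-1 A)^k), k = 1..dim, as the
    # random-walk-diffusion positional embedding (one common choice).
    deg = adj.sum(axis=1, keepdims=True)
    rw = adj / np.maximum(deg, 1e-12)
    pe, power = [], np.eye(adj.shape[0])
    for _ in range(dim):
        power = power @ rw
        pe.append(np.diag(power))
    return np.stack(pe, axis=1)

def structural_vector(adj):
    # s_v = f([s_1, ..., s_n]) with f chosen as plain concatenation.
    return np.concatenate([degree_one_hot(adj), random_walk_pe(adj)], axis=1)

# Toy usage: a triangle graph, giving a 32-dimensional s_v per node.
A = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
S = structural_vector(A)
assert S.shape == (3, 32)
```

For the triangle graph in the toy usage, every node receives the same 32-dimensional vector, reflecting that all three vertices play identical structural roles.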
http://arxiv.org/abs/2408.12371v1
20240822131449
Invariant graphs in Julia sets and decompositions of rational maps
[ "Guizhen Cui", "Yan Gao", "Jinsong Zeng" ]
math.DS
[ "math.DS" ]
§ ABSTRACT In this paper, we prove that for any rational map f on the Riemann sphere and for each sufficiently large integer n, there exists a finite and connected graph G in the Julia set of f, such that f^n(G)⊂ G, G contains all post-critical points in the Julia set and every component of G contains at most one post-critical point in the Fatou set. The proof is based on the cluster- decomposition of rational maps. [2010]Primary 37F20; Secondary 37F10 Orbits of Binary Stars: from Visual Measures to Speckle Interferometry Andrei Tokovinin August 26, 2024 ================================================================================ § INTRODUCTION Let f be a rational map on the Riemann sphere with f≥ 2. The Fatou set and Julia set of f are denoted by F_f and J_f, respectively. Their definitions and basic properties can be found in <cit.>. The set of post-critical points of f is defined by P_f=⋃_n>0{f^n(c):f^'(c)=0}. In particular, the map f is called post-critically finite, or simply PCF, if #P_f<∞. Generally, a marked rational map (f,P) is a PCF rational map f together with a finite set P⊂ such that P_f⊂ P and f(P)⊂ P. In complex dynamics, a fundamental problem is understanding the structure of Julia sets for rational maps. Significant progress has been made in this area for polynomials, largely due to the fact that the Julia set of a polynomial is the boundary of its basin of infinity. However, for a general rational map, it is not possible to observe the entire Julia set from just a single Fatou domain. Therefore, one needs to consider not only the boundary of each Fatou domain, but also the arrangement of distinct Fatou domains. An effective approach to this problem is to construct a suitable invariant graph. In this paper, graph refers to a finite and connected graph in . For PCF polynomials, the well-known Hubbard trees are invariant and completely characterize the dynamics of the polynomials <cit.>. Invariant graphs for Newton maps and critically fixed rational maps are studied by several groups <cit.>. The first breakthrough in the general situation was made by Cannon-Floyd-Parry <cit.> and Bonk-Meyer <cit.> independently. They proved that [<cit.>,Theorem 3.1] Any marked rational map (f,P) with J_f= admits an f^n-invariant Jordan curve passing through all points of P for each sufficiently large integer n. The same conclusion was obtained for marked rational maps, i.e., rational maps with carpet Julia sets, by Meyer, Haïssinsky and the last two authors of this paper <cit.>. The following theorem is an enhanced version of <cit.>. [<cit.>,Theorem 1.2] Let (f,P) be a marked rational map such that no points of P lie in the boundaries of Fatou domains. Then for each sufficiently large integer n, there exists an f^n-invariant Jordan curve passing though all points of P, such that its intersection with the closure of any Fatou domain is either empty or the union of two closed internal rays. Recently, by developing Bonk-Meyer's method in <cit.>, the authors of this paper demonstrated that every PCF rational map f admits an f^n-invariant graph that contains P_f for each sufficiently large integer n. See <cit.>. Nevertheless, not every invariant graph can adequately capture the complexity of the Julia set. For instance, for a PCF polynomial without bounded Fatou domains, the union of external rays landing at the post-critical points forms an invariant graph. Unlike the Hubbard tree, this graph provides limited information about the Julia set. 
Therefore, to address such issues, we expect to confine the graphs within the Julia sets. The main result of this paper is as follows. Let (f, P) be a marked rational map. Then for each sufficiently large integer n, there exists a graph G⊂ J_f such that f^n(G)⊂ G, P∩ J_f⊂ G and each component of G contains at most one point of P. (1) Based on this theorem, we obtain an increasing sequence of invariant graphs {f^-kn(G)}_k≥ 1 which approximate the Julia set from inside. (2) Theorem <ref> is essentially known for PCF polynomials. Factually, let X be the union of P_f and the branched points of the Hubbard tree T. If f has no bounded Fatou domains, then T itself serves as the desired graph. Otherwise, for each bounded Fatou domain U that intersects T, if U∩ X≠∅, we substitute U∩ T with the Jordan curve ∂ U; if U∩ X=∅, we replace the segment U∩ T with a suitable choice of one of the two open arcs as the components of ∂ U T. The resulting graph fulfills the conditions in Theorem <ref>. (3) The proof of Theorem <ref> is compeletely independent of our earlier work <cit.> presented after Theorem B. Instead, it can be directly derived from Theorem <ref>. Indeed, we may mark one point on the boundary of each Fatou domain intersecting P_f, such that the union of these marked points together with P_f forms an f-invariant set, denoted by P. By applying Theorem <ref> to (f,P), we obtain an f^n-invariant graph G'⊂ J_f such that P∩ J_f⊂ G', for each sufficiently large integer n. Thus, the union G of G' and all internal rays landing at points of P is an f^n-invariant graph containing P_f. There are several ingredients to prove Theorem <ref>, as outlined in the schematic graph in Figure <ref> and summarized below. The first ingredient refers to the invariant graphs on the boundaries of Fatou domains, serving as a semi-local counterpart to Theorem <ref>. Let f be a PCF rational map and let U be a Fatou domain of f with f(U)=U. If f is a polynomial, then ∂ U admits an invariant graph by Remark <ref> (2). It is natural to inquire whether the conclusion holds true in general. The answer to this question is negative, as illustrated by a counterexample in Theorem <ref>. On a positive note, we can construct an invariant graph related to ∂ U within a larger invariant set, namely the Fatou chain generated by U, which is defined to be ⋃_k≥ 0 E_k, where E_k is the component of f^-k(U) that contains U. Let (f,P) be a marked rational map and let U be a fixed Fatou domain of f. Then there exists a graph G⊂ J_f in the Fatou chain generated by U, such that f(G)⊂ G and G is isotopic rel P to a graph G_0⊂∂ U which satisfies G_0∩ P=∂ U∩ P and two points of P lie in distinct components of ∖ G_0 provided that they belong to distinct components of ∖∂ U. Theorem <ref> is proved in Section <ref>, based on an explicit study of the dynamics on ∂ U. We aim to extend the invariant graph in Theorem <ref> to a broader range. Inspired by the Fatou chain generated by a single Fatou domain, we introduce the concept of general Fatou chains. The second ingredient involves constructing invariant graphs within these Fatou chains. A continuum is a connected and compact subset of containing more than one point. Let f be a rational map with J_f≠. A level-0 Fatou chain of f is defined as the closure of a Fatou domain of f. A continuum K⊂ is a level-1 Fatou chain of f if there is a sequence of continua {E_k}_k≥ 0, each of which is the union of finitely many level-0 Fatou chains, such that E_k⊂ E_k+1 and K=⋃_k≥ 0E_k. 
Inductively, a continuum K⊂ is a level-(n+1) Fatou chain if there is a sequence of continua {E_k}, each of which is the union of finitely many level-n Fatou chains such that E_k⊂ E_k+1 and K=⋃_k≥ 0E_k. A Fatou chain K is maximal if any Fatou chain intersecting K is contained in K. By definition, a level-n Fatou chain is a level-m Fatou chain if n<m, and the Fatou chain generated by a fixed Fatou domain is a level-1 Fatou chain. Moreover, for rational maps, any maximal Fatou chain is just the closure of a Fatou domain; while for polynomials or Newton maps, the whole sphere is a maximal Fatou chain. Let f be a rational map with J_f≠. Then each Fatou domain of f is contained in a maximal Fatou chain. Moreover, the image and components of the pre-image of a maximal Fatou chain under f are still maximal Fatou chains. The proof of Theorem <ref> is presented in Section <ref>. In Section <ref>, we revisit maximal Fatou chains, exploring their combinatorial and topological properties. With these preparations, the following main result of the second ingredient will be proved in Section <ref>. Let (f,P) be a marked rational map with J_f≠. Let K be the intersection of J_f with an f-invariant maximal Fatou chain. Then there exists a graph G⊂ K such that f(G)⊂ G, G∩ P=K∩ P, and two points of P lie in distinct components of ∖ G provided that they belong to distinct components of ∖ K. If a PCF rational map has a maximal Fatou chain equal to , then Theorem <ref> follows directly from Theorem <ref> because every Fatou domain has at most one marked point. From the perspective of Julia set configurations, such a map can be regarded as a generalization of polynomials and Newton maps. We refer to it as a cluster rational map. The third ingredient is about the decomposition of a marked rational map. According to Theorem <ref>, in order to obtain a global invariant graph, it is necessary to investigate the dynamics outside the union of marked maximal Fatou chains. This approach leads to a decomposition of marked rational maps by maximal Fatou chains. We present it in a generalized form. Let f be a rational map, and let be a union of finitely many pairwise disjoint continua. We call a stable set of f if f()⊂ and each component of f^-1() is either a component of or disjoint from . According to Theorem <ref>, the union of all periodic maximal Fatou chains is a specific example of stable sets. By definition, each component of a stable set is eventually periodic. Thus, the following result describes the dynamics of a stable set. Let f be a PCF rational map and let K≠ be a connected stable set of f. Then f is renormalizable on K, i.e., there exists a rational map g and a quasiconformal map ϕ of such that J_g=ϕ(∂ K) and ϕ∘ f=g∘ϕ on K. Moreover, the rational map g can be taken to be PCF, and it is unique up to conformal conjugacy. We call g the renormalization of f on K. Next, we consider the dynamics outside a stable set. Let (f,P) be a marked rational map. Let _1⊂ be open sets with ∂⊂ J_f such that each component of ∂ contains more than one point. We say f:_1→ is an exact sub-system of (f,P) if (1) has finitely many components, and each of them is finitely connected; (2) _1 is the union of some components of f^-1(); (3) each component of ∖_1 is a continuum disjoint from P. By definition, each component of contains a unique component of _1. Consequently, there exists a self-map f_# on the collection of components of defined by f_#(V):=f(V_1), where V_1 is the unique component of _1 contained in V. 
Since has finitely many components, every component of is eventually f_#-periodic. Therefore, the dynamics of an exact sub-system is characterized by the following theorem. Let (f,P) be a marked rational map. Suppose that f: V_1→ V is an exact sub-system of (f,P) such that V is connected. Denote V_n=(f|_V_1)^-n(V) and E=⋂_n>0V_n. Then there exists a marked rational map (g,Q_g), a continuum K_g⊃ J_g with g^-1(K_g)=K_g and a continuous onto map π:→ such that * components of ∖ K_g are all Jordan domains with pairwise disjoint closures; * E=π(K_g) and f∘π=π∘ g on K_g; * for any point z∈⋂_n>0 V_n, the fiber π^-1(z) is a singleton; * for any component B_n of ∖ V_n, the set π^-1(B_n) is the closure of a component of K_g; * a point x∈ Q_g if and only if either π(x)∈ P∩ V, or x is the center under the Böttcher coordinate of a component D of ∖ K_g such that π(D)∩ P≠∅. Moreover, the marked rational map (g,Q_g) is unique up to conformal conjugacy. The marked rational map (g,Q_g) is called the blow-up of the exact sub-system f:V_1→ V of (f, P). Generally, if f:_1→ is an exact sub-system of (f,P), and V is an f_#-periodic component of with period p, then the blow-up of the exact sub-system f^p:V_p→ V of (f^p,P) is regarded as a blow-up of f:_1→ (associated with V). Here, V_p denotes the component of (f|__1)^-p(V) contained in V. The primary result of the third ingredient is the decomposition theorem below. A connected open or closed set E is called simple-type (rel P) if there is a simply connected domain D⊂ such that E⊂ D and #(D∩ P)≤ 1; or annular-type if E is not simple-type and there is an annulus A⊂∖ P such that E⊂ A; or complex-type otherwise. Let (f,P) be a marked rational map with J_f≠. Then there exists a stable set ⊂ J_f such that (i) for any periodic component K of with period p, the renormalization of f^p on K is a cluster rational map; (ii) either =∅ or f: _1→ is an exact sub-system of (f,P), where and _1 are the union of complex-type components of and f^-1(), respectively. Moreover, each blow-up of f: _1→ is a marked rational map. According to Theorem <ref>, the dynamics of (f,P) is essentially inherited by the sub-systems f:→ and f:_1→. In fact, the complement of ⊔ can be expressed as ⊔, where and 𝒮 denote the union of all annular-type and simple-type components of ∖, respectively. The set has finitely many components, each of which is an annulus (see Theorem <ref>). Let _1 be the union of all annular-type components of f^-1(). It follows that _1⊂ and f:_1→ forms an annular sub-system. The dynamics of an annular sub-system is straightforward and has been intensively studied in <cit.> by Peng, Tan and the first author of this paper. Additionally, the dynamics of f associated with 𝒮 is trivial by the Shrinking Lemma (see Lemma <ref>), since each component of contains at most one point of P_f. Theorem <ref> (i)-(ii) and Theorem <ref> are established in Section <ref>. Theorem <ref> is proved in Section <ref>, which immediately implies the remaining part of Theorem <ref>. Now, according to Theorem <ref>, any marked rational map with non-empty Fatou set can be decomposed into several marked cluster or rational maps. The invariant graphs for marked cluster rational maps are established in Theorem <ref>, while those for marked rational maps appear in Theorem B. In the fourth and final ingredient, we will connect the invariant graphs associated with these sub-systems together to derive a global invariant graph. 
This can be accomplished by identifying invariant arcs within the annular sub-system described in Remark <ref>. The process is encapsulated in the following proposition, which is proved in Section <ref>. A graph is called regulated for a PCF rational map if its intersection with the closure of any Fatou domain of the map is either empty or the union of finitely many closed internal rays. Let (f, P) be a marked rational map with J_f≠, and let ,,_1 represent the sets specified in Theorem <ref>. Suppose each blow-up (g, Q_g) of the exact sub-system f:_1→ admits a g-invariant regulated graph containing Q_g. Then there exists an f-invariant graph G⊂ J_f such that P∩ J_f⊂ G and each component of ∖ G contains at most one point of P. If J_f=, then Theorem <ref> follows immediately from Theorem A. Now suppose that J_f≠. Let , and _1 represent the sets specified in Theorem <ref>. Then for every n≥1, the stable set induces a cluster- decomposition of (f^n,P). In particular, f^n:_n→ is an exact sub-system of (f^n,P), where _n denotes the union of all complex-type components of f^-n(). We will compare the blow-ups of f:_1→ and those of f^n:_n→. Let V be any f_#-periodic component of with period p. Denote (g, Q_g) the blow-up of the exact sub-system f^p:V_p→ V, where V_p refers to the unique component of _p contained in V. Fix any integer n≥ 1. Let m=m(n, V) be the least common multiple of n and p. Then the period of V under (f^n)_# is m/n. Moreover, the blow-up of f^n:_n→ associated with V is the blow-up of the exact sub-system f^m:V_m→ V of (f^m,P), which is exactly (g^m/p,Q_g). Since m(n, V) tends to ∞ as n→∞, it follows from Theorem B that each blow-up (g^m/p, Q_g) of f^n:_n→ admits a g^m/p-invariant and regulated graph passing through Q_g for each sufficiently large integer n. Therefore, by applying Proposition <ref> to (f^n, P) and , we obtain an f^n-invariant graph G with all properties of Theorem <ref>. The last section of this paper is an appendix. The standard spherical metric is denoted by σ(z)|dz| with σ(z)=1/(1+|z|^2). Without emphasis, the distance, diameter, convergence, etc., are all considered under the spherical metric. So we use the simplified notations like dist(·,·), diam(·), etc., instead of dist_σ(·,·), diam_σ(·), etc. Another metric used in this paper is the orbifold metric ω with respect to a PCF rational map. Its definition and properties are given in Appendix <ref>. Under this metric, we usually use the homotopic length L_ω[·] and the homotopic diameter H-diam_ω(·), instead of the usual length and diameter, of a smooth curve and a connected set in ∖ P_f, respectively. See <ref> for their definitions and detailed discussions. In Appendix <ref>, we introduce an isotopy lifting lemma under rational maps and a well-known convergence result for a sequence of isotopies obtained by lifting. Appendix <ref> includes three topological results related to local connectivity. §.§ Related work The cluster rational maps are closely related to the crochet rational maps introduced in a recent work <cit.>. See also <cit.>. We learned about this matter only after the article was basically completed. In particular, these two types of maps coincide in the PCF case. Dylan Thurston proposed a question (<cit.>) to find a preferred “best” spine of ∖ P_f for a hyperbolic PCF cluster rational map. In this case, the invariant graph obtained in Theorem <ref> appears to be a good candidate. Recently, several interesting results about PCF cluster maps were announced. 
For example, this kind of map has a zero-entropy invariant graph containing P_f (<cit.>) and its Julia set has Ahlfors-regular conformal dimension one (<cit.>). In complex dynamics, a well-known method to decompose a PCF rational map involves utilizing stable multicurve, as elaborated upon by Pilgrim in <cit.>. Specifically, the periodic maximal Fatou chains of a PCF rational map naturally induce a stable multicurve by considering the boundary curves of their complement. From this perspective, Dudko, Hlushchanka and Schleicher recently achieved a similar result to Theorem <ref> (<cit.>), but employing a significantly different approach. Another relevant work can be found in <cit.>. The existence of invariant graphs has also been studied beyond the rational case. A Thurston map is a PCF branched covering on the 2-sphere. Bonk and Meyer <cit.> proved that any expanding Thurston map f admits an f^n-invariant Jordan curve passing through all post-criticl points for each sufficiently large integer n. More broadly, a Thurston map is Böttcher expanding if it has a certain “expansion property” near its Julia set (see <cit.>). The dynamics of such maps is investigated in a series of works, such as <cit.>. In particular, Floyd, Parry and Pilgrim <cit.> showed that a suitable iterate of a Böttcher expanding Thurston map admits an isotopy-invariant graph containing all post-critical points. Invariant graphs are extensively used in the study of the dynamics of PCF rational maps and Thurston maps. For instance, Meyer <cit.> investigates the unmating of PCF rational maps with empty Fatou sets by invariant Peano curves. Hlushchanka and Meyer use the invariant Jordan curves from Theorems A and B to calculate the growth of iterated monodromy groups for certain PCF rational maps. Additionally, based on Theorem A, Li established the thermodynamic formalism (<cit.>) and the prime orbit theorems (collaborate with Zheng, <cit.>) for expanding Thurston maps. §.§ Future directions Firstly, a natural question arises regarding whether the iterate is strictly necessary in Theorem <ref>. Addressing this question, we propose the following conjecture. For any marked rational map (f,P), Theorem <ref> holds with n=1. In other words, there is an f-invariant graph G⊂ J_f such that P∩ J_f⊂ G and each component of G contains at most one point of P. According to Proposition <ref>, this conjecture is true if one can confirm that any marked rational map (g,Q) with its Julia set equal to either the sphere or the carpet admits a g-invariant and regulated graph containing Q. Every PCF rational map with Julia set equal to is an expanding Thurston map. In addition, each PCF rational map f can descend to an expanding Thurston map F by collasping the closure of each Fatou domain to a point, and any graph in the F-plane can be lifted to a regulated graph for f; See <cit.>. Therefore, Conjecture <ref> is implicated by the following conjecture, which appeared in <cit.>. For any marked expanding Thurston map (F,Q), there exists an F-invariant graph that contains Q. Another direction concerns the renormalizability of a rational map on stable sets. A classical result by McMullen asserts that any rational map is renormlizable on each of its fixed Julia components <cit.>. It is worth noting that every fixed Julia component is a specific connected stable set. On the other hand, Theorem <ref> shows that if the rational map is PCF, then it is renormalizable on any connected stable set, due to the expansion property near the Julia set. 
Is every rational map renormalizable on any connected stable set or on any fixed maximal Fatou chain of the map? The next direction examines the invariant graphs derived from Theorem <ref> from the view of entropy. By Thurston, the core entropy of a polynomial is the topological entropy on its Hubbard tree, which is a very useful tool in studying the bifurcation locus of polynomials <cit.>. However, there is currently no definition for the core entropy of a rational map. Consider a marked rational map (f,P_f), and let denote the collection of all graphs obtained in Theorem <ref>. For polynomials, the topological entropy of f on the graphs in remains constant, which equals the maximum of the core entropy of f and log d_ U/p_ U for all periodic Fatou domains U, where p_ U denotes the period of U and d_ U is the degree of f^p_ U:U→ U. Based on this observation, a potential candidate for the core entropy of f is given by h(f)=inf_G∈{h_top(f^n|_G)/n:f^n(G)⊂ G,n≥1}, where h_top(f^n|_G) denotes the topological entropy of f^n:G→ G. Indeed, a motivation for us to construct invariant graphs within the Julia set is to define the core entropy of a rational map. Additionally, when f is a polynomial, the graphs in are isotopic relative to P_f by imposing some natural restrictions. But in the general case, the elements of are far from unique up to isotopy. Therefore, it is important to seek invariant graphs with canonical conditions. From the view of entropy, we may ask Is there a (unique) f^n-invariant graph G∈ such that h(f)=h_top(f^n|_G)/n ? The final direction is generalizing Theorem <ref> to the non-rational case, specifically to Böttcher expanding Thuston maps as mentioned in Section <ref>. These maps also have Julia and Fatou sets, and they share several similarities with PCF rational maps. Hence it is plausible to expect that Theorem <ref> applies to Böttcher expanding Thurston maps as well. Do (a part of) the theorems listed in the introduction still hold for Böttcher expanding Thurston maps after appropriate revisons? 0.3cm Acknowledgements. The authors are grateful for insightful discussions with Zhiqiang Li, Xiaoguang Wang, Yunping Jiang, Dylan Thurston and Luxian Yang. The first author is supported by National Key R&D Program of China no. 2021YFA1003203, and the NSFC Grants no. 12131016 and 12071303. The second author is supported by the NSFC Grant no. 12322104 and NSFGD Grant no. 2023A1515010058. The third author is supported by the NSFC Grant no. 12271115. § INVARIANT GRAPHS ASSOCIATED WITH FIXED FATOU DOMAINS In this section, we study the dynamics of a rational map f on the boundary of a fixed Fatou domain U of f. We start by examining the mapping behavior of f on ∂ U. Next, we construct an invariant continuum on ∂ U with nice topological properties (called circle-tree). Finally, we present the proof of Theorem <ref>. §.§ Circle-trees Let U⊂ be a simply connected domain such that T_0:=∂ U is a locally connected continuum. The next lemma is classical (refer to <cit.>). In this paper, a circle means a Jordan curve and a disk means a Jordan domain in . An arc is a continuous injective map from [0,1] into , and its restriction to (0,1) is called an open arc. The following statements hold. (a) Both T_0 and U are arcwise connected. (b) All components of U are disks, whose diameters converge to zero. (c) Each circle C⊂ T_0 is the boundary of a component of U. Let C⊂ T_0 be a circle. If E⊂ T_0 is a continuum, then C∩ E is connected. If C'≠ C is also a circle in T_0, then #(C∩ C')≤ 1. 
Suppose to the contrary that C∩ E is disconnected. Then C E has at least two components. Let x and y be two points contained in two distinct components of C E, respectively. Let D be the component of C disjoint from U. Then there are open arcs α⊂ U and β⊂ D such that both of them join the points x and y. Now α∪β∪{x,y} is a Jordan curve disjoint from E, and both of its two complementary components intersect E. This contradicts the connectivity of E. Suppose C'≠ C is also a circle in T_0. Then I=C∩ C' is connected by the above discussion. If I contains at least two points, then it contains an open arc γ. This implies that each point in is an exterior point of U, contradicting the fact that ⊂ C⊂∂ U. Motivated by the above results, we consider circles in T_0 as entire entities when discussing subsets of T_0. A continuum T⊂ T_0 is called a circle-tree of T_0 if for any circle C⊂ T_0, either C⊂ T or #(C∩ T)≤ 1. Let T be a circle-tree of T_0. A point x∈ T is a cut point of T if T{x} is disconnected. A circle C⊂ T is an end circle of T if C contains at most one cut point of T. A point x∈ T is an endpoint of T if it is neither contained in a circle in T nor a cut point. By an end we mean an endpoint or an end circle. We call T a finite circle tree if T has finitely many ends. In order to study circle-trees and their topology, one useful tool is the geodesic lamination introduced by Thurston. Let denote the unit disk. Then there is a conformal map ϕ: → U which can be extended continuously to the boundary. For each point x∈ T_0, denote by H_x the convex hull within of ϕ^-1(x) under the Poincarè metric on . The basic observation of the lamination theory is H_x∩ H_y=∅ if x≠ y. Note that ∂ H_x∩ consists of geodesics if it is non-empty. The lamination _ U induced by U is defined as the union of all such geodesics, which are called leaves. Then _ U is closed in and the closure of a component of _ U is a gap of _ U. Assume that U is not a disk. Then the following statements hold. (a) For each gap A of _ U, ϕ(A∩∂) is either a point or a circle. Conversely, for any circle C⊂ T_0, there is a unique gap A such that ϕ(A∩∂)=C. Moreover C is an end circle of T_0 if and only if A∩∂ is connected. (b) A point x∈ T_0 is an endpoint if and only if #ϕ^-1(x)=1 and there is a sequence of leaves {L_n} in _ U converging to ϕ^-1(x), such that L_n separates L_n-1 from L_n+1. (c) Let x∈ T_0 be a point and let I_0 be a component of ∂ϕ^-1(x). Then either ϕ(I_0) is an end circle or ϕ(I_0) contains an end. (d) Let C⊂ T_0 be a circle and let I_0 be a component of ∂ϕ^-1(C). Then either ϕ(I_0) is an end circle or ϕ(I_0) contains an end. (a) Note that ∂ A is a Jordan curve. Define a map ϕ_ A: ∂ A→ T_0 by ϕ_ A=ϕ on ∂ A∩∂ and ϕ_ A(L)=ϕ(L∩∂) for any leaf L⊂∂ A. Then ϕ_ A is continuous and ϕ_ A(∂ A)=ϕ(A∩∂). Thus ϕ_ A(∂ A)⊂ T_0 is either a point or a closed curve. In the latter case, the curve is not self-intersecting since ϕ_ A^-1(x) is connected for any x∈ϕ_ A(∂ A). Therefore it is a circle in T_0. Conversely, let C⊂ T_0 be a circle. For any point x∈ C, C{x} is connected. Thus ϕ^-1(C{x}) is contained in a component A_x of H_x and C⊂ϕ(A_x∩∂). Let A=⋂_x∈ CA_x. Then A is a gap and C⊂ϕ(A∩∂). By the discussion in the previous paragraph, ϕ(A∩∂) is either a point or a circle. So we have C=ϕ(A∩∂). If A'≠ A is another gap, then there is a leaf L⊂∂ A which separates the interior of A from A'. Thus ϕ(A∩∂)∩ϕ(A'∩∂) contains at most one point, and then ϕ(A'∩∂)≠C. 
If A∩∂ is connected, then ϕ is injective in the interior of A∩∂, whose image contains no cut points, and ϕ maps the two endpoints of A∩∂ to a cut point. Thus C is an end circle. Conversely, if C is an end circle, let x∈ C be the unique cut point. Then A∩∂=ϕ^-1(C{x}) is connected since ϕ^-1(y) is a point for y∈ C{x}. (b) Denote x_n=ϕ(L_n∩∂). Let B_n be the component of T_0{x_n} containing the point x, then B_n+1⊂ B_n and the diameter of B_n tends to 0 as n→∞. Thus x is an endpoint. Conversely, if x∈ T_0 is an endpoint, then ϕ^-1(x) consists of a single point t∈∂ and there are no leaves landing on t. For each leaf L, denote by |L|_t the length of the component of ∂ L containing the point t. Assume by contradiction that inf{|L|_t}>0. Then there is a leaf L_0 such that |L_0|_t=inf{|L|_t} since _ U is closed. Let D_0 be the component of L_0 whose boundary contains the point t. Then there are no leaves in D_0 separating L_0 from the point t. Thus there is a gap A which contains the point t and the leaf L_0. By statement (a), ϕ(A∩∂) is either a single point or a circle. Since x∈ϕ(A∩∂) is an endpoint, we obtain x=ϕ(A∩∂), which contradicts the condition that ϕ^-1(x) is a single point. (c) By statement (a), the two endpoints of I_0 are connected by a leaf in _ U. Denote by the collection of all open arcs I⊂ I_0 with I≠ I_0 such that the two endpoints of I are connected by a leaf in _ U. Then any two arcs in are either disjoint or nested since any two distinct leaves are disjoint. If is empty, then ϕ(I_0) is an end circle by (a). If |I|>|I_0|/2 for all I∈, then there exists a unique arc I^*∈ such that I^*⊂ I for all I∈. This implies that ϕ(I^*) is an end circle. Otherwise, there is an arc I_1∈ such that |I_1|≤ |I_0|/2. Continuing this process successively, we either have to stop at some step, which yields an end circle, or obtain an infinite sequence of arcs {I_n}, such that I_n+1⊂ I_n and |I_n+1|≤ |I_n|/2. By the definition of lamination, there are at most two leaves share a common endpoint. Thus t=⋂ I_n is a single point. By statement (b), ϕ(t) is an endpoint. The proof of (d) is similar as that of (c). The next result is a direct consequence of Lemma <ref> (c) and (d). Let x∈ T_0 be a point and let B be a component of T_0{x}. Then either B is an end circle or B contains an end of T_0. Let C⊂ T_0 be a circle and let B be a component of T_0 C. Then B∩ C is a singleton, and either B is an end circle or B contains an end of T_0. A circle-tree can be characterized by the lamination _ U. A continuum T⊂ T_0 is a circle-tree of T_0 if and only if each component of ∂ H_ T∂ is a leaf in _ U, where H_ T is the convex hull of ϕ^-1(T) within . For any circle C⊂ T_0, there is a unique gap A such that ϕ(A∩∂)=C by Lemma <ref> (a). Since each component of ∂ H_ T∂ is a leaf, either A is contained in H_ T, or A∩ H_ T=∅, or A∩ H_ T is a leaf, Thus either C⊂ T or #(T∩ C)≤ 1. Therefore T is a circle-tree of T_0. Conversely, assume that T is a circle-tree of T_0. Let I=(s,t) be a component of ∂ϕ^-1(T). Denote ϕ(s)=x and ϕ(t)=y. Then x,y∈ T. If x≠y, then H_x∩ H_y=∅. Note that there are no leaves of _ U in (H_x∪ H_y) separating H_x from H_y, since such a leaf would have an endpoint in I, which contradicts the connectivity of T. Thus there is a gap A such that s, t∈ A∩∂. By Lemma <ref> (a), ϕ(A∩∂) is a circle in T_0 which contains the points x,y∈ T. Thus it is contained in T since T is a circle-tree. So we have A⊂ H_ T. Hence I is a component of ∂ A. 
This implies that s,t are connected by a leaf in ∂ A, and hence x=y, a contradiction. Since x=y, either there is a leaf joining the points s and t, or H_x∩ I≠∅. The latter case cannot happen as I∩ H_ T=∅. Thus s and t are connected by a leaf in ℒ_U. Let T be a circle-tree of T_0. Then T is locally connected and there is a simply connected domain V⊂ℂ̂ such that ∂ V=T. Note that ∂ H_ T is a Jordan curve. By Lemma <ref>, each component of ∂ H_ T∖∂𝔻 is a leaf. Define a map ϕ_ T: ∂ H_ T→ T_0 by ϕ_ T=ϕ on ∂ H_ T∩∂𝔻 and ϕ_ T(L)=ϕ(L∩∂𝔻) for any leaf L⊂∂ H_ T. Then ϕ_ T is continuous and ϕ_ T(∂ H_ T)=T. Thus T is locally connected. Let V be the component of ℂ̂∖ T containing U. Then V is a simply connected domain and ∂ V⊂ T. On the other hand, T⊂∂ U, so T is contained in the closure of V. Thus T⊂∂ V. So we have ∂ V=T. The next result provides a basic tool for constructing circle-trees. Let x,y∈ T_0 be two distinct points. Then there is a unique circle-tree T[x,y] of T_0 such that any circle-tree of T_0 containing x and y contains T[x,y]. Moreover, each end of T[x,y] intersects {x, y}. We call T[x,y] the circle-tree spanned by {x,y}. By Lemma <ref> (a), there is an arc γ: [0,1]→ T_0 with γ(0)=x and γ(1)=y. Let T_1 be the union of γ and all circles C⊂ T_0 with #(C∩γ)≥ 2. By Lemma <ref> (b), T_1 is a continuum. We will show that T_1 is a circle-tree. By definition, it is enough to prove that for any circle C⊂ T_0 with #(C∩ T_1)≥ 2, it holds that #(C∩γ)≥ 2. Suppose to the contrary that #(C∩γ)≤ 1. Let x_1,x_2∈ C∩ T_1 be two distinct points, and let α be an arbitrary component of C∖{x_1,x_2}. If C∩γ=∅, then there exist two distinct circles C_1, C_2⊂ T_1 such that x_1=C∩ C_1 and x_2=C∩ C_2. By the definition of T_1, there is an arc γ_0⊂γ such that y_1:=γ_0(0)∈ C_1, y_2:=γ_0(1)∈ C_2, and γ_0(0,1) is disjoint from C_1∪ C_2. For i=1,2, let β_i be a component of C_i∖{x_i,y_i} such that β_1∩β_2=∅. Then α,β_1,β_2 and γ_0 are pairwise disjoint. It follows that α∪β_1∪β_2∪γ_0∪{x_1, x_2,y_1,y_2} is a circle in T_0, a contradiction to Lemma <ref> (c). If #(C∩γ)=1, we may assume x_1 to be this intersection point, and there exists a circle C_2⊂ T_1 with x_2=C∩ C_2. A similar argument as above also yields a contradiction to Lemma <ref> (c). Now we have proved that T_1 is a circle-tree. Let T_2 be a circle-tree containing the points x and y. Then there is an arc γ'⊂ T_2 joining x and y. For any component γ_1 of γ∖γ', we denote by γ_1' the sub-arc of γ' with the same endpoints as those of γ_1. Thus γ_1∪γ_1' is a circle in T_0. Since γ_1'⊂ T_2, it follows that γ_1∪γ_1'⊂ T_2, and hence γ⊂ T_2. By the definition of T_1, we have T_1⊂ T_2. This implies the uniqueness of T_1. By definition, any point of T_1 belongs to either γ or a circle in T_0. So an endpoint of T_1 must be x or y. If C is an end circle of T_1 disjoint from {x,y}, then T_1':=(T_1∖ C)∪{z}⊂ T_1 is a circle-tree containing x and y, where z is the unique cut point of T_1 on C. The uniqueness implies T_1'=T_1, a contradiction. Let T_1 and T_2 be circle-trees of T_0 such that T_1∩ T_2≠∅. * T_1∩ T_2 is either a singleton or a circle-tree of T_0. * T_1∪ T_2 is a circle-tree of T_0 and each end of T_1∪ T_2 is an end of T_1 or T_2. (1) For any two distinct points x,y∈ T_1∩ T_2, T[x,y]⊂ T_1∩ T_2 by Lemma <ref>. Thus T_1∩ T_2 is a continuum. For any circle C⊂ T_0 with #(C∩ T_1∩ T_2)≥ 2, we have #(C∩ T_1)≥ 2 and #(C∩ T_2)≥ 2. Thus C⊂ T_1∩ T_2. So T_1∩ T_2 is a circle-tree of T_0. (2) By Lemma <ref>, each component of ∂ H_ T_1∖∂𝔻 and ∂ H_ T_2∖∂𝔻 is a leaf in ℒ_U.
Since any two distinct leaves are disjoint in , each component of ∂ H_ T_1∪ T_2∂ is a leaf in _ U. Thus T_1∪ T_2 is a circle-tree of T_0. Let x∈ T_1∪ T_2 be a point disjoint from any circle in T_1∪ T_2. Assume x∈ T_1. If x is a cut point of T_1, then there is a Jordan curve in U∪{x} which separates T_1{x}. Thus x is a cut point of T_1∪ T_2. Therefore if x is an endpoint of T_1∪ T_2, then it is an endpoint of T_1 or T_2. Let C⊂ T_1∪ T_2 be an end circle. Then either C⊂ T_1 or C⊂ T_2. Assume C⊂ T_1. If C contains two distinct cut points x and y of T_1, then x and y are also cut points of T_1∪ T_2. This is a contradiction. Thus C is an end circle of T_1. For any finite set {x_1,…, x_n}⊂ T_0 with n≥ 2, denote T[x_1,…, x_n]=T[x_1, x_2]∪⋯∪ T[x_1, x_n]. Furthermore, let {x_1,…,x_n,C_1,…,C_m} be a collection of points x_i and circles C_j in T_0. Pick two distinct points y_j, z_j∈ C_j for each circle C_j. Denote T[x_1,…,x_n,C_1,…,C_m]=T[x_1,…,x_n,y_1,…,y_m,z_1,…,z_m]. By Lemmas <ref> and <ref>, T[x_1,…,x_n,C_1,…,C_m] is a finite circle-tree and it is the minimal circle-tree of T_0 that contains x_1,…,x_n,C_1,…,C_m. We call it the circle-tree spanned by {x_1,…,x_n,C_1,…,C_m}. Let T be a finite circle-tree of T_0, and let T_1 be the circle-tree spanned by the ends of T. Then T_1=T. By Lemma <ref>, T_1⊂ T. Assume that x∈ T T_1 is a point disjoint from all circles in T. Since x is not an endpoint of T, there is a component T' of T{x} disjoint from T_1. By Corollary <ref>, T' contains an end of T, a contradiction. Assume that C⊂ T is a circle such that C∩ T_1 contains at most one point. Then C is not an end circle of T. Thus T C has a component T' disjoint from T_1. By Corollary <ref>, T' contains an end of T, also a contradiction. Let T be a finite circle-tree of T_0. By Corollary <ref>, there exists a component V of T and a conformal map ψ: → V which can be extended continuously to the boundary such that ψ(∂)=∂ V=T. For each point x∈ T, denote μ_ T(x)=#ψ^-1(x). A point x∈ T is called a cut point of T if μ_ T(x)≥ 2, or a branched point of T if μ_ T(x)≥ 3, or a locally branched point of T if for any sufficiently small neighborhood W of x, (T∩ W){x} has at least three components. For any circle C⊂ T, denote μ_ T(C)=#{y∈ C: μ_ T(y)≥ 2}. A circle C⊂ T is called a cut circle of T if μ_ T(C)≥ 2, or a branched circle of T if μ_ T(C)≥ 3. When x∈ T is not contained in any circle in T, then x is a branched point if and only if it is a locally branched point. When x∈ T is contained in a circle in T, then x is a locally branched point if and only if x is a cut point of T. If a circle C⊂ T contains no branched points of T, then μ_ T(C) is the number of components of T C. In general, μ_ T(C) is the number of components of T C. Refer to Figure <ref> for an example of finite circle-trees, where p_1 is an endpoint, p_2 is a cut point and p_3 is a branched point; C_1 and C_2 are end circles, C_3 and C_4 are cut circles, and C_5 is a branched circle. Note that any circle-tree T⊂ T_0 has at least one end by Corollary <ref>. If T has only one end, then it is a circle. Let T be a finite circle-tree of T_0 with n≥ 2 ends. Then T has only k branched points {x_i} and l branched circles {C_j} such that ∑_i=1^k(μ_ T(x_i)-2)+∑_j=1^l(μ_ T(C_j)-2)=n-2. If n=2, the circle-tree T has neither branched points nor branched circles. In fact, if z∈ T is a branched point, then T has at least three ends by Corollary <ref>, a contradiction. Similarly, we also obtain that T has no branched circles. 
Assume by induction that the lemma holds for an integer n ≥ 2. Let T be a circle-tree of T_0 with n +1 ends X_0,⋯, X_n. Denote T'=T[X_1,⋯,X_n ]. If X_0∩ T'≠∅, then X_0 is an end circle and T' intersects X_0 at a single point y. If X_0∩ T'=∅, then there is an arc : [0,1]→ T such that (0)∈ X_0, y=(1)∈ T' and (t)∉T' for t∈ [0,1). We claim that T[X_0,y]∩ T'={y}. By the definition of T[X_0,y] in the proof of Lemma <ref>, we only need to check that for any circle C⊂ T_0 with #(C∩)≥ 2, either C∩ T'=∅ or C∩ T'={y}. Since [0,1) lies in a component of T∖{y} disjoint from T', there exists an open arc β⊂ U such that lim_t→0β(t)=lim_t→ 1β(t)=y and that β separates [0,1) from T'∖{y}. Note that C⊂ T and C∩[0,1)≠∅. Then C∖{y} and T'∖{y} are contained in distinct components of ∖β. So the claim is proved. In both cases, y is not an endpoint of T'. If y is a cut point of T', then μ_ T(y)=μ_ T'(y)+1. Otherwise y is contained in a circle C⊂ T' which is not an end circle of T'. Thus μ_ T(C)=μ_ T'(C)+1. For any branched point x of T' with x≠ y, x is also a branched point of T with μ_ T(x)=μ_ T'(x). If C_1≠C is a branched circle of T', then it is also a branched circle of T with μ_ T(C_1)=μ_ T'(C_1). Finally, by the claim above, T∖ T'=T[X_0,y]∖{y}, which contains neither branched points nor branched circles of T. Thus the lemma is proved. §.§ Images of circle-trees Let f:→ be a branched covering. Let U,V⊂ be simply connected domains such that U is a component of f^-1(V) and ∂ V is locally connected. In particular, the conditions hold if f is a rational map with connected and locally connected Julia set and U is a Fatou domain of f. A continuum E⊂ is full if E is connected. Let C⊂∂ U be a circle. Then f(C) is a finite circle-tree of ∂ V. Moreover, each endpoint of f(C) is a critical value of f, and if f:C→ f(C) is not a homeomorphism, then each end circle of f(C) either contains a critical value or separates a critical value from V Let C'⊂∂ V be a circle such that #(f(C)∩ C')≥ 2. Denote I_1={x∈ C: f(x)∈ C'} and I_0=C I_1. Denote by {α_i} the components of I_0. Then each α_i is an open arc and f(α_i) is contained in a component B_i of ∂ V C'. By Corollary <ref>, B_i∩ C' consists of a single point and hence f(x_i)=f(x'_i), where x_i and x'_i are the endpoints of α_i. Let E_i be the component of ( V) C' containing B_i. Then E_i is a full continuum and E_i∩ C'={f(x_i)}. Moreover, E_i∩E_j=∅ if f(x_i)≠ f(x_j). We claim that E_i contains critical values of f. For otherwise, there is a disk W⊂ disjoint from the critical values of f such that E_i⊂ W. Thus f is a homeomorphism on each component of f^-1(W). This is a contradiction since f(x_i)=f(x'_i). Denote by Z the set of the points f(x_i) for all the components α_i. Since E_i∩E_j=∅ if f(x_i)≠ f(x_j), we obtain # Z≤ 2d-2 by the above claim, where d= f. For each point z∈ Z, there are at most d components α_i such that f(x_i)=z. Therefore I_0 has at most d(2d-2) components. Consequently, I_1 has at most d(2d-2) components. By Lemma <ref>, f(C)∩ C' is a continuum since #(f(C)∩ C')≥ 2. Then at least one component β_j of I_1 is an arc. Since f: β_j→ C' preserves the orientation induced by U and V, respectively, we obtain f(I_1)=C'. Thus C'⊂ f(C) and hence f(C) is a circle-tree of ∂ V. Assume that f:C→ f(C) is not a homeomorphism. Then each endpoint of f(C) is a critical value of f. Let C' be an end circle of f(C). We claim that C' either contains a critical value or separates critical values from V. 
If the claim is not true, each component of f^-1(C') is a Jordan curve on which the restriction of f is injective. As above, denote I_1={x∈ C: f(x)∈ C'}. Since C' is an end circle of f(C), I_1 has exactly one component β which is not a single point. Thus f(β)=C'. As f is injective on each component of f^-1(C'), it follows that β=C and f:C→ C' is a homeomorphism, a contradiction. The claim is proved. There might be infinitely many circles in ∂ V containing critical values of f. However, for each critical value v of f, there are at most f circles of ∂ V containing v such that they are contained in f(C). Therefore f(C) is a finite circle-tree. Let T be a finite circle-tree of ∂ U. Then f(T) is a finite circle-tree of ∂ V. Each endpoint of f(T) is either the image of an endpoint of T or a critical value of f. Each end circle of f(T) either is the image of an end circle of T, or contains a critical value of f or separates a critical value of f from V. Let C'⊂∂ V be a circle such that #(C'∩ f(T))≥ 2. We claim that there is a circle C⊂ T such that C'⊂ f(C). By the claim, C'⊂ f(T), and then f(T) is a circle-tree in ∂ V. To prove the claim, denote by I_0⊂ T the set of points that are not contained in any circle in ∂ U. Then f(I_0)∩ C'=∅. For otherwise, there is an open arc β⊂V which joins a point in f(I_0) to a point in V. Thus f^-1(β) has a component in U which joins a point in I_0 to a point in U. This is impossible. Denote I_1=T I_0. Then each point of I_1 is contained in a circle of ∂ U. Assume by contradiction that C'⊄f(C) for any circle C⊂ T. It follows that #(C'∩ f(C))≤ 1 since f(C) is a circle-tree. Thus C'∩ f(I_1) is a countable set because ∂ U has only countably many circles. Since C'∩ f(I_0)=∅, we know that C'∩ f(T)=C'∩ f(I_1) is a countable set. On the other hand, by Lemma <ref>, C'∩ f(T) is a continuum since #(C'∩ f(T))≥ 2. This yields a contradiction. Then the claim is proved. Obviously, each endpoint of f(T) is either a critical value of f or the image of an endpoint of T. Let C' be an end circle of f(T). By the claim above, there is a circle C⊂ T such that C'⊂ f(C). Then C' is also an end circle of f(C). Due to Lemma <ref>, either f:C→ C' is a homeomorphism, or C' contains a critical value, or C' separates a critical value from V. The number of C' in the last case is clearly finite as f has finite critical values. The circles C' in the first case must be the images of end circles of T, and hence have finite number. Note that there are finitely many circles in T containing a preimage of the critical values of f. Then the number of C' in the second case is also finite. Therefore f(T) is a finite circle-tree in ∂ V. §.§ Invariant circle-trees Let (f,P) be a marked rational map and let U be a fixed Fatou domain of f. We will construct an f-invariant and finite circle-tree of ∂ U. The process is similar to the construction of Hubbard tree for PCF polynomials <cit.>. We say a continuum E separates P if there are two points of P in distinct components of ∖ E. A circle C⊂∂ U is called a marked circle (rel P) if C either intersects or separates P. Any eventually periodic point in ∂ U receives finitely many internal rays in U. As a consequence, there are finitely many marked circles in ∂ U. It is enough to prove the lemma for a fixed point z∈∂ U. Let Θ⊂∂ be the set of angles corresponding to the internal rays in U landing at z. Then Θ is compact and p_d: Θ→Θ is injective, where p_d(z)=z^d and d= f|_ U. By <cit.>, Θ is a finite set. 
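The end count for finite circle-trees obtained in the previous subsection, ∑_i(μ_ T(x_i)-2)+∑_j(μ_ T(C_j)-2)=n-2, mirrors a classical identity for finite trees: in a tree with at least two vertices, the total excess degree ∑_deg(v)≥ 3(deg(v)-2) equals the number of leaves minus 2, with leaves playing the role of the ends and high-degree vertices the role of branched points and branched circles. The following minimal sketch (an illustration added here, not part of the original argument) checks that underlying tree identity numerically; it assumes only a standard Python interpreter.

import random

def random_tree_degrees(n):
    # Grow a random tree on n >= 2 vertices by attaching each new vertex
    # to a uniformly chosen earlier vertex, and return the degree sequence.
    degrees = [0] * n
    for v in range(1, n):
        u = random.randrange(v)
        degrees[u] += 1
        degrees[v] += 1
    return degrees

random.seed(0)
for _ in range(1000):
    degrees = random_tree_degrees(random.randint(2, 50))
    leaves = sum(1 for d in degrees if d == 1)      # analogue of the ends of T
    excess = sum(d - 2 for d in degrees if d >= 3)  # analogue of the sum of (mu_T - 2)
    assert excess == leaves - 2
print("excess-degree identity holds on 1000 random trees")

In the circle-tree setting, the same bookkeeping is carried out by the induction in the proof above, with μ_ T playing the role of the vertex degree.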
To show the finiteness of marked circles in ∂ U, we only need to prove that at most finitely many circles in ∂ U pass through an eventually periodic point z∈∂ U. According to the discussion above, ∂ U{z} has finitely many components, each of which together with the point z contains at most one circle in ∂ U passing through the point z. Then the lemma is proved. For two continua E_0⊂ E, we call E_0 a skeleton of E (rel P) if E_0∩ P=E∩ P and any two points of P in distinct components of ∖ E also lie in distinct components of ∖ E_0. Let T be the finite circle-tree of ∂ U spanned by P∩∂ U together with all the marked circles in ∂ U. Then * each end of T is a marked point or a marked circle, * f(T)⊂ T and T is a skeleton of ∂ U rel P. By Lemmas <ref> and <ref>, each endpoint of T is contained in P∩∂ U and each end circle of T is a marked circle. By Lemma <ref>, for each endpoint y of f(T), either y is a critical value or there is an endpoint x of T such that f(x)=y. In both cases, we have y∈ P∩∂ U. For each end circle C of f(T), either C is a marked circle, or C is the image of an end circle of T. In the latter case C is also a marked circle. Therefore each end of f(T) is contained in T. Thus f(T)⊂ T by Lemma <ref>. Obviously, T∩ P=∂ U∩ P. If two points a,b∈ P are contained in distinct components of ∂ U, then there is a unique circle C⊂∂ U separating a and b. Thus C⊂ T since C is a marked circle. It follows that T is a skeleton of ∂ U. The invariant circle-tree T obtained in Theorem <ref> attracts every circle in ∂ U. For any circle C⊂∂ U, there exists an integer n≥ 0 such that f^n(C)⊂ T. By Lemma <ref> and Theorem <ref>, either f(C) is still a circle in ∂ U or f(C)⊂ T. So it is enough to show that f^N(C) is a marked circle for some integer N≥ 0 under the condition that f^n(C) is always a circle for every n≥ 0. For otherwise, let D_n be the disk bounded by f^n(C) and disjoint from U for n≥ 0, then D_n∩ P=∅. Thus f^n(D)=D_n, which implies D is a Fatou domain of f. Consequently, there exists an integer N≥ 0 such that f^ N(D) is a periodic Fatou domain. Then f^ N(C) is a marked circle, a contradiction. As a by-product, we obtain the following result regarding the locally branched points on the boundaries of Fatou domains. This generalizes a well-known fact for polynomials. A circle C⊂ T is called regular if it is neither a marked circle nor a branched circle of T. Note that T has only finitely many irregular circles. Every locally branched point of ∂ U is eventually periodic. Let x be any locally branched point of ∂ U. We first claim that there is an integer N>1 such that f^ N(x) is either a locally branched point of T or a point in P_f∩ T. If x is contained in a circle C of ∂ U, then there is a component E of ∂ U C such that E∩ C={x}. Since ⋃_n>0(f^-n(T)∩∂ U) is dense in ∂ U, there exists a point y∈ E such that f^n_0(y)∈ T for some integer n_0>0. Then x is a locally branched point of T_1=T[y,C]. By Lemma <ref>, there exists an integer N≥ n_0 such that f^ N(x)∈ f^ N(C)⊂ T. It follows from Lemma <ref> that f^ N(T_1) is a circle-tree whose ends are contained in T, and thus f^ N(T_1)⊂ T by Lemma <ref>. Therefore the claim holds. If x avoids any circle in ∂ U, then x is a branched point of ∂ U. Thus ∂ U{x} has at least three components E_1,E_2 and E_3. By a similar argument as above, there exists a point y_i∈ E_i and an integer n_i>0 for each i=1,2,3 such that f^n_i(y_i)∈ T, and the circle-tree f^ N(T_1) is contained T with T_1:=T[y_1,y_2,y_3]∋ x and N:=max{n_1,n_2,n_3}. So the claim still holds. 
Since T has only finitely many branched points due to Lemma <ref>, it follows from the above claim that either x is eventually periodic, or f^n(x) is a locally branched point but not a branched point of T for every n≥ N. We only need to consider the latter case. In this situation, each f^n(x) is a cut point of T and contained in a circle C_n of T for n≥ N. If C_n_i=C for an infinite sequence {n_i}, then x is eventually periodic. This is because each circle of T contains finitely many cut points of T by Lemma <ref>. Thus, we may further assume that C_n,n≥ N are pairwise different circles of T. Since T has finitely many irregular circles, the circle C_n is regular for every large integer n. For a regular circle C, there is a dichotomy result: either D_ C contains a component of f^-1(U), or f:D_ C→ f(D_ C) is a homeomorphism, where D_ C denotes the component of ∖ C disjoint from U. Clearly, the first kind of regular circles in T are finitely many. It follows that C_n+1=f(C_n) and D_ C_n+1=f(D_ C_n) for every large integer n. This implies the existence of wandering Fatou domains, a contradiction. §.§ A Fatou domain without invariant graphs on the boundary In this subsection, we give an example of a PCF rational map with a fixed Fatou domain U, such that ∂ U admits no invariant graphs. Let X⊂ be a compact set. A continuous map ϕ:× [0,1]→ is an isotopy rel X if each map ϕ_s=ϕ(·,s) is a homeomorphism of and ϕ_s(z)=z for every z∈ X and s∈[0,1]. In this case, we say the homeomorphisms ϕ_0 and ϕ_1 are isotopic rel X. Sometimes, we write the isotopy ϕ as {ϕ_s}_s∈ [0,1]. Moreover, we say that two subsets E_1 and E_2 of are isotopy rel X if there is a homeomorphism h:→ that is isotopic to the identity map rel X such that h(E_1)=E_2. In this paper, E_1 and E_2 are typically considered as Jordan curves, (open) arcs or graphs. There exists a cubic PCF rational map f and a fixed Fatou domain U of f such that ∂ U contains infinitely many circles and that for any arc γ⊂∂ U, f^n(γ)=∂ U for some integer n≥ 1. Consequently, there are no invariant graphs on ∂ U. Let g(z)=z^2-2. Its Julia set is [-2,2]. Let D be the disk with diameter [-2,0]. Let B be the domain bounded by the three external rays landing at the points 0 and -2. Then there is a homeomorphism φ from BD to B [-2,0] and φ can be continuously extended to the boundary such that φ=id on the three external rays and φ(x+iy)=x on ∂ D. Let h: D→ [-2,2] be a homeomorphism such that h=g∘φ on ∂ D. Define f̃= g, on B, g∘φ, on BD, h, on D. Then f̃ is a branched covering of with f̃=3. It has three critical points -2, 0 and ∞ with (f̃|_z̃=0)=3 and (f̃|_z̃=-2)=(f̃|_z̃=∞)=2. Its post-critical set is P_f̃={-2,2,∞}. Thus f̃ is combinatorially equivalent to a rational map f by Thurston Theorem (refer to <cit.> or <cit.>), i.e., there exists a pair of orientation-preserving homeomorphisms (ϕ_0,ϕ_1) of such that ϕ_1 is isotopic to ϕ_0 rel P_f̃ and f:=ϕ_0∘f̃∘ϕ_1^-1 is a rational map. Denote the ϕ_0-image of -2, 0, 2 and ∞ by a, b, a_1 and c, respectively. Then critical points of f are a,b,c with ( f|_z=b)=3 and (f|_z=a)=(f|_z=c)=2. Moreover f(b)=a, f(a)=a_1=f(a_1) and f(c)=c. So P_f={a, a_1, c}. The map f has exactly one periodic Fatou domain U, which contains c. Then f(U)=U and (f|_U)=2. Thus f^-1(U) has another component U' except U. The lamination _ U of U consists of leaves L_n,n≥1 such that the endpoints of L_n are e^π/2^n and e^-π/2^n. Let W_0 be a round disk under the Böttcher coordinate of U which is compactly contained in U. 
Let W_n be the component of f^-n(W_0) containing the fixed point c for n≥ 1. Then W_n⊂ W_n+1 and ⋃_n≥ 0W_n=U. Denote by R_f(θ) the internal ray of f in U with angle θ∈(-π,π]. Denote by R_g(θ) the external ray of g with angle θ∈(-π,π]. Then f̃(R_g(0))=R_g(0). We may require that ϕ_0(R_g(0)) coincides with R_f(0) in W_0. Then ϕ_1(R_g(0)) coincides with R_f(0) in W_1 since f(ϕ_1(R_g(0))=ϕ_0(R_g(0)). Thus there exists an isotopy {ϕ_s}_s∈[0,1] rel P_f̃ such that ϕ_1=ϕ_0 on R_g(0)∩ W_0. Lifting the isotopy {ϕ_s}_s∈[0,1] inductively by Lemma <ref>, we get a sequence of homeomorphisms {ϕ_n} of such that ϕ_n+1 is isotopic to ϕ_n rel P_f̃ and f∘ϕ_n+1=ϕ_n∘f̃. Thus ϕ_n+1(R_g(0)) coincides with R_f(0) in W_n+1 and f(ϕ_n+1(R_g(0))=ϕ_n(R_g(0)). By Lemma <ref>, ϕ_n(R_g(0)) converges to R_f(0). Thus R_f(0) lands at the point a_1. Since f^-1(a_1)={a,a_1}, the ray R_f(π) lands at the point a and f^-1(R_f(0)) has a component in U' which joins the point a and the unique point c' of f^-1(c) in U'. Since f^-1(a)=b, both R_f(±π/2) land at the point b, and a component of f^-1(R_f(π)) in U' connects the critical point b and c'. Consequently, a,b∈∂ U∩∂ U'. It follows that R_f(θ_1) and R_f(θ_2) land at distinct points if θ_1∈(π/2,π) and θ_2∈(-π, -π/2). Consider the simply connected domain bounded by R_f(π) and R_f(±π/2). It contains no critical values of f. Thus its pre-image has three components, and one of them is bounded by R_f(±π/2) and R_f(±π/4). Thus R_f(±π/4) land at the same point. Moreover, R_f(θ_1) and R_f(θ_2) land at distinct points if θ_1∈(π/4,π/2) and θ_2∈(-π/2, -π/4). Inductively taking pre-images as above, the rays R_f(±π/2^n) land at the same point, but R_f(θ_1) and R_f(θ_2) land at distinct points if θ_1∈(π/2^n,π/2^n-1) and θ_2∈(-π/2^n-1, -π/2^n) for n≥ 2. Now we have proved that L_n is a leaf of _ U, and there is no leaf which joins e^θ_1 to e^θ_2 if θ_1∈(π/2^n,π/2^n-1) and θ_2∈(-π/2^n-1, -π/2^n) for n≥ 1. It follows that if L is a leaf of _ U which joins e^θ_1 to e^θ_2, then |θ_1-θ_2|<π/2. Assume that L is a leaf of _ U which joins e^θ_1 to e^θ_2, then there is a leaf of _ U which joins e^2^n θ_1 to e^2^n θ_2 for n≥ 1 except 2^n(θ_1-θ_2)≡ 0  2π. In particular, there is an integer n≥ 1 such that π/2<2^n|θ_1-θ_2|≤π. This is a contradiction. Denote by ϕ: → U the inverse of the Böttcher coordinate for U. It can be extended continuously to the boundary. For any arc γ⊂∂ U, Proposition <ref> implies that ϕ^-1(γ) must contain a non-trivial interval. Thus f^n(γ)=∂ U for some integer n≥ 1. Up to conformal conjugacy, the rational map f constructed above has the form f(z)=(z^2-6z+9-8/z)/3 with the critical points -1,2, and ∞; See Figure <ref> for its Julia set. §.§ Proof of Theorem <ref> Let (f,P) be a marked rational map and let U be a fixed Fatou domain of f. Let T⊂∂ U be the f-invariant circle-tree obtained in Theorem <ref>. Our proof strategy is as follows. First, we will find a graph G_1 as a skeleton of T rel P such that f^-1(G_1) contains a graph G_2 which is isotopic to G_1 rel P. Then, by lifting, we obtain a sequence of graphs {G_n}, and finally we will prove that {G_n} converges to an invariant graph G. Let X_0⊂ T be the union of P together with the set of cut points of T. Then X_0 is a compact set containing all endpoints of T and f(X_0)⊂ X_0. Each component of T X_0 is an open arc in a circle of T. Denote X_n:=f^-n(X_0) for n≥ 0. Then X_n⊂ X_n+1. Recall that a circle C⊂ T is regular if it is neither a marked circle nor a branched circle of T. 
So each regular circle C contains exactly two points of X_0, which cut C into two open arcs C^+ and C^-. Set G_1:=T⋃_CC^-, where C ranges over all regular circles in T. Then G_1 is a graph since there are finitely many irregular circles in T, and G_1 is a skeleton of ∂ U by Theorem <ref>. To construct G_2⊂ f^-1(G_1), we need to go beyond ∂ U. Let α_1 be a component of G_1 X_1. Its image f(α_1) is a component of T X_0. Thus there is a circle C⊂ T such that f(α_1) is a component of C X_0. If C is irregular, then f(α_1)⊂ C⊂ G_1. If C is regular, then f(α_1) equals either C^+ or C^-. * If f(α_1)=C^+, we still have f(α_1)⊂ G_1. * If f(α_1)=C^-, since C^+ and C^- are isotopic rel X_0, there is a unique component α_1^+ of f^-1(C^+) isotopic to α_1 rel X_1. Let B(α_1) denote the closed disk bounded by α_1 and α_1^+ disjoint from U. Then B(α_1)∩ G_1=α_1 and B(α_1)∩ X_1={α_1(0),α_1(1)}. Such a component α_1 of G_1∖ X_1 is called a deformation arc of G_1. We define the graph G_2 by G_2:=(G_1⋃α_1)∪⋃α^+_1, where the union is taken over all deformation arcs of G_1. By the discussion above, we have f(G_2)⊂ G_1 and there is an isotopy Θ^1: × [0,1]→ rel P such that Θ^1_t:=Θ^1(·,t) satisfies * Θ^1_0=id on , * Θ^1_t(z)=z on a neighborhood of attracting cycles of f for t∈ [0,1], * if z∈ G_1 is not in any deformation arc, then Θ^1_t(z)=z for t∈ [0,1], and * if α_1 is an deformation arc of G_1, then Θ^1_1(α_1)=α_1^+ and Θ^1(α_1×[0,1])=B(α_1). As a consequence, we have θ_1(G_1)=G_2 with θ_1:=Θ^1_1. By inductively applying Lemma <ref>, we obtain an isotopy Θ^n: × [0,1]→ rel P and a graph G_n+1 for each n≥1, such that Θ^n_0=id and Θ^n_t∘ f(z) =f∘Θ^n+1_t(z) for all z∈,t∈ [0,1], and that G_n+1=θ_n(G_n) with θ_n:=Θ^n_1. Thus f(G_n+1)⊂ G_n. In addition, there are some components of G_n∖ X_n, called the deformation arcs of G_n (under Θ^n), such that (a) if z∈ G_n is not in any deformation arc of G_n, then Θ^n_t(z)=z for t∈ [0,1], (b) if α_n is a deformation arc of G_n, then the deformation of α_n under Θ^n, denoted by B(α_n), is a closed disk with the property that B(α_n)∩ G_n=α_n and B(α_n)∩ X_n={α_n(0),α_n(1)}. Denote ϕ_n=θ_n-1∘⋯∘θ_0 for n≥ 1 with θ_0:=id. Then G_n=ϕ_n(G_1). By Lemma <ref>, {ϕ_n} uniformly converges to a quotient map φ of . Consequently, f(G)⊂ G, where G is defined as G:=φ(G_1). In order to show that G is a graph, we need to clarify the relation between the deformation arcs of G_m and G_n for m>n≥ 1. Fix a deformation arc α_n of G_n with n≥1. Set α_n-k:=f^k(α_n) for 0≤ k≤ n. From the lifting construction of Θ^n, it follows that, when 0≤ k≤ n-1, α_n-k is a deformation arc of G_n-k and f^k(B(α_n))=B(α_n-k), and that α_0=C^- for a regular circle C of T and f^n: B(α_n)→ B(α_0) is a homeomorphism. Here B(α_0)=B(C^-) refers to the closure of the component of ℂ C disjoint from U. Let α_m and β_n be distinct deformation arcs of G_m and G_n respectively, with m≥ n≥1. Then either B(α_m)⊂ B(β_n) or B(α_m)∩ B(β_n)=∅, or B(α_m) intersects B(β_n) at a single point of X_n. Set β_0:=f^n(β_n) and α_m-n:=f^n(α_m). By definition, B(β_0) is the closure of a component of ∖U, and the interior of B(α_m-n) is contained in a component D of ∖U. Then by Lemma <ref>, either D=B(β_0), or D∩ B(β_0)=∅, or D∩ B(β_0) is a singleton in X_0. It follows that either B(α_m-n)⊂ B(β_0), or B(α_m-n)∩ B(β_0)=∅, or B(α_n-m) intersects B(β_0) at a single point of X_0. Thus, this proposition can be proved by a pullback argument. Let m>n≥1 be integers, and let α_n be any deformation arc of G_n. * Let x∈ G_1 be a point such that ϕ_n(x)∈α_n. 
Then ϕ_m(x)∈ B(α_n). Consequently, if ϕ_m(x) is contained in a deformation arc α_m of G_m, then B(α_m)⊂ B(α_n). * Let α⊂ G_1 be an open arc such that ϕ_n(α)=α_n. Then G_m∩ B(α_n)=ϕ_m(α). (1) Let n=n_1<⋯<n_s< n_s+1:=m be all integers such that ϕ_n_i(x) belongs to a deformation arc α_n_i of G_n_i for i=1,… s. For each i∈{1,…,s} and any n_i< k≤ n_i+1, it follows from the definition of ϕ_n and items (a),(b) about Θ^n that ϕ_k(x)=θ_k-1∘⋯∘θ_n_i∘ϕ_n_i(x)=θ_n_i∘ϕ_n_i(x)∈θ_n_i(α_n_i)⊂ B(α_n_i). Thus ϕ_n_i+1(x)∈ B(α_n_i), and furthermore ϕ_n_i+1(x)∈ B(α_n_i)∩ B(α_n_i+1) if i∈{1,…,s-1}. This implies B(α_n_i+1)⊂ B(α_n_i) for i∈{1,…,s-1} by Proposition <ref>, since ϕ_n_i+1(x)∈α_n_i+1 which is disjoint from X_n_i+1. Therefore ϕ_m(x)=ϕ_n_s+1(x)∈ B(α_n_s)⊂⋯⊂ B(α_n). (2) By statement (1), we get immediately that ϕ_m(α)⊂ B(α_n). Therefore, to prove ϕ_m(α)=G_m∩ B(α_n), it suffices to show that ϕ_m(z)∉B(α_n) for any z∈ G_1∖α. First note that ϕ_n(z)∉ B(α_n) since G_n∩ B(α_n)=α_n. If ϕ_k(z) does not belong to any deformation arc of G_k for every n≤ k<m, then ϕ_m(z)=ϕ_m-1(z)=⋯=ϕ_n(z)∉ B(α_n). Otherwise, let n_1∈[n,m) be the smallest integer such that ϕ_n_1(z) belongs to a deformation arc α_n_1 of G_n_1. Then ϕ_n(z),ϕ_m(z)∈ B(α_n_1) by statement (1). Since ϕ_n(z)∉B(α_n), it follows from Proposition <ref> that B(α_n)∩ B(α_n_1) is either empty or a singleton in X_n. Note also that B(α_n)∩ X_n={α_n(0),α_1(1)} by item (b) above. Thus ϕ_m(z)∉B(α_n). For each point z∈ G∂ U, there exists an integer n≥ 1 and a component D of U, such that f^n(z)∈D and ∂ D is a regular circle of ∂ U. Let x∈ G_1 be a point such that φ(x)=z. Since z∉∂ U, there is a smallest integer n_0≥ 1 such that ϕ_n_0(x) belongs to a deformation arc α_n_0 of G_n_0. It then follows from Proposition <ref> (1) that z=φ(x)∈ B(α_n_0). By the discussion before Proposition <ref>, f^n_0(B(α_n_0)) is the closure of a component of ∖U bounded by a regular circle of ∂ U. The following result is a key part in the proof of Theorem <ref>. For any two distinct points x,y∈ G_1 with φ(x)=φ(y), there is an arc β⊂ G_1 connecting x and y such that φ(β)=φ(x). A point z∈ G_1 is called finitely deforming (under {ϕ_n}) if there exists an integer n(z)≥1 such that ϕ_n(z) does not belong to any deformation arc of G_n for every n≥ n(z). Thus, if z∈ G_1 is infinitely deforming, we can find an increasing sequence {n_i}_i≥ 1 such that ϕ_n_i(z) belongs to a deformation arc α_n_i of G_n_i for all i≥ 1. In this case, it holds that B(α_n_i+1)⊂ B(α_n_i) by Proposition <ref> (1). According to Lemma <ref>, the homotopic diameters of B(C) for all regular circles C of T are bounded above. So Lemma <ref> implies ⋂_i≥ 1B(α_n_i)={φ(z)}. Since φ(x)=φ(y), at least one of {x, y}, say x, is infinitely deforming. As above, there is an increasing sequence {n_i}_i≥ 1 and a deformation arc α_n_i of G_n_i for each i≥1, such that ϕ_n_i(x)∈α_n_i, B(α_n_i+1)⊂ B(α_n_i) and ⋂_i≥ 1B(α_n_i)={φ(x)}. 0.15cm Case 1. The point y is finitely deforming. 0.15cm In this case, we have φ(y)=ϕ_n(y) for every n≥ n(y). Since φ(x)=φ(y), it follows that ϕ_n_i(y)=φ(y)∈ B(α_n_i) for n_i>n(y). Then ϕ_n_i(y)∈ G_n_i∩ B(α_n_i)=α_n_i. Hence ϕ_n_i(y) is an endpoint of α_n_i. Let _i be the sub-arc of α_n_i connecting ϕ_n_i(y) and ϕ_n_i(x). Then β_i:=ϕ_n_i^-1(_i) is an arc in G_1 connecting x and y. Since there are only finitely many distinct arcs in G_1 connecting x and y, by passing to a subsequence of {i}, we have β=β_i and ϕ_n_i(β)=γ_i⊂α_n_i for every i≥1. This implies φ(β)=φ(x). 0.15cm Case 2. The point y is infinitely deforming. 
0.15cm In this case, we obtain another increasing sequence {m_j}_j≥ 1 and a deformation arc δ_m_j of G_m_j for each j, such that ϕ_m_j(y)∈δ_m_j, B(δ_m_j+1)⊂ B(δ_m_j) and {φ(y)}=⋂_j≥ 1 B(δ_m_j). Since φ(x)=φ(y), it follows from Proposition <ref> and Proposition <ref> (1) that, if m_j≥ n_i, either B(δ_m_j)⊂ B(α_n_i), or B(δ_m_j) intersects B(α_n_i) at a single point in X_n_i. 0.15cm Case 2.1. There exist m_j≥ n_i such that B(δ_m_j)∩ B(α_n_i) is a singleton w∈ X_n_i. 0.15cm Since ϕ_n(x)∈ B(α_n_i) and ϕ_n(y)∈ B(_m_j) for every large integer n by Proposition <ref> (i), it follows that φ(x)∈ B(α_n_i) and φ(y)∈ B(δ_m_j). Thus φ(x)=φ(y)=w. Assume ϕ_n_i(z)=w. Then φ(z)=ϕ_n(z)=w for every n≥ n_i. By applying Case 1 to {x,z} and {z,y} respectively, we obtain the required arc β. 0.15cm Case 2.2. For each pair m_j≥ n_i, it holds that B(δ_m_j)⊂ B(α_n_i).0.15cm Let _i:=ϕ_n_i^-1(α_i)⊂ G_1 be the arc containing x. For any pair m_j≥ n_i, due to Proposition <ref> (2), we have ϕ_m_j(x)∈ϕ_m_j(_i)= G_m_j∩ B(α_n_i). Note also that ϕ_m_j(y)∈ G_m_j∩ B(_m_j)⊂ G_m_j∩ B(α_n_i). Thus ϕ_m_j(x), ϕ_m_j(y)∈ϕ_m_j(_i). It implies that there exists a sub-arc β_i⊂_i joining x to y such that ϕ_m_j(β_i)⊂ B(α_n_i). Since there are finitely many arcs in G_1 joining x to y, by passing to a subsequence, we may assume that β_i=β for all i≥1. Then φ(β)=lim_j→∞ϕ_m_j(β) coincides with ⋂_i≥ 1 B(α_n_i)={φ(x)}. Clearly G=lim_n→∞ G_n=φ(G_1) is an f-invariant continuum. Note that G_n+1 lies in the component E_n of f^-n(U) which contains U. Then G⊂ K_ U=⋃_n≥1 E_n. We claim that φ(α) is not a singleton for any component α of G_1∖ X_1. If α has two distinct endpoints, then the claim is obvious because φ=id on X_1∩ G_1. In the remaining case α is a circle in G_1. If ϕ_n(x) does not belong to the deforming arcs of G_n for any x∈α and every n≥ 1, we have φ(α)=α, and the claim holds. Otherwise, there exists a point x∈α and a smallest integer n_0≥ 1 such that ϕ_n_0(x) belongs to a deformation arc α_n_0 of G_n_0. Then α_n_0⊂α and ϕ_n_0=id on α_n_0. This implies that φ=id on the two endpoints of α_n_0. So the claim is proved. Since φ is the identity on X_1∩ G_1 which divides G_1 into open arcs, following Proposition <ref> and the claim above, the preimage of each point of G under φ|_G_1 is either a singleton or an arc in G_1. This implies that G is a graph homeomorphic to G_1. Finally, to prove that G is isotopic to G_1 rel P, we only need to show G_1∩ P=G∩ P, because G_n is isotopic to G_1 rel P for every n≥1. Since φ is the identity on P and G_1∩ P=∂ U∩ P, it follows that G_1∩ P=∂ U∩ P⊂ G∩∂ U∩ P⊂∂ U∩ P=G_1∩ P. On the other hand, we have (G∖∂ U)∩ P=∅ by Corollary <ref>. Thus G_1∩ P=G∩ P. § FATOU CHAINS In this section, we establish some basic properties of Fatou chains and prove Theorem <ref>. Throughout this section, let f be a rational map with J_f≠. Recall that a level-0 Fatou chain of f is the closure of a Fatou domain of f. By induction, we define a continuum K⊂ as a level-(n+1) Fatou chain of f if there is a sequence {E_k}_k≥ 0 of continua, each of which is composed of finitely many level-n Fatou chains, such that E_k⊂ E_k+1, and K=⋃_k≥ 0E_k. A level-n (n≥ 0) Fatou chain K is called a level-n extremal (Fatou) chain if any level-n Fatou chain that intersects K at a point in F_f is contained in K. By definition, each level-0 extremal chain is the closure of a Fatou domain. For every n>0 and any Fatou domain U of f, there is a unique level-n extremal chain K containing U. 
Moreover, there is a sequence {E_k} of continua, each of which is the union of finitely many level-(n-1) extremal chains, such that E_k⊂ E_k+1 and K=⋃_k≥0 E_k. We first prove the lemma in the case of n=1. Let Σ(U) denote the collection of Fatou domains U' for which both U and U' are contained in a continuum E(U, U') consisting of finitely many level-0 chains. Enumerate the elements of Σ(U) by U_i,i≥0, and fix E(U, U_i) for each i. For every k≥0, we define E_k=⋃_0≤ i≤ k E(U,U_i) and K=⋃_k≥0 E_k. Then K is a level-1 Fatou chain by definition. It remains to verify that K is extremal. Now, consider any other level-1 Fatou chain K' such that (K'∩ K)∩ F_f≠∅. Then K'∩ K contains a Fatou domain V. By definition, assume K'=⋃_k≥ 0E'_m, where E_m' is the union of a finite number of level-0 Fatou chains and E'_m⊂ E'_m+1 for every m≥0. Since V⊂ K', it follows V⊂ E'_m for any large m. Similarly, we have V∈Σ(U). Hence, each level-0 Fatou chain in E_m' is contained in Σ(U). By the construction of E_k, we obtain that E_m'⊂ E_k for a large integer k. This implies K'⊂ K. So K is a level-1 extremal chain. Assume that the lemma holds for some n≥ 1. Then there exists a unique level-n extremal chain σ containing U. Similarly as the case of n=1, let Σ(σ) be the collection of all level-n extremal chains σ' for which both σ and σ' are contained in a continuum E(σ, σ') consisting of finitely many level-n extremal chains. Note that Σ(σ) is a finite or countable collection. So Σ(σ)={σ_i}_i≥0. Fix E(σ, σ_i) for each σ_i. For every k≥0, we define E_k=⋃_0≤ i≤ k E(σ,σ_i) and K=⋃_k≥0 E_k. By definition, K is a level-(n+1) Fatou chain. Finally, with a similar argument as that in the case of n=1, we can show that K is an extremal chain of level-(n+1). Here are some examples of extremal chains. For a polynomial, the entire Riemann sphere is its level-1 extremal chain. On the other hand, any level-n extremal chain (n≥0) of a rational map is the closure of a Fatou domain. If f is a Newton map, the union of the attracting basins for all attracting fixed points is contained in a level-1 extremal chain of f. This chain contains J_f. Thus is a level-2 extremal chain of f. Let K⊂ be a level-n extremal chain (n≥0) of f. Then * f(K) is also a level-n extremal chain, and * f^-1(K) has a unique decomposition f^-1(K)=⋃_i=1^m K_i such that each K_i is a level-n extremal chain with f(K_i)=K. Moreover, (f|_K_i):=#(f^-1(w)∩ K_i) is a constant if w∈ K∩ F_f is not a critical value. If n=0, the lemma is true as any level-0 extremal chain is the closure of a Fatou domain. Suppose that the lemma holds for level-n extremal chains with n≥0. Let K be a level-(n+1)-extremal chain. By Lemma <ref>, there is a sequence of continua {E_k} such that each E_k consists of finitely many level-n extremal chains, E_k⊂ E_k+1 and K=⋃_k≥0 E_k. (1) By induction, each f(E_k) consists of finitely many level-n extremal chains. Then f(K)=⋃_k≥0f(E_k) is a level-(n+1) Fatou chain, and it is contained in a level-(n+1) extremal chain, denoted by K'. Lemma <ref> implies K'=⋃_j≥ 0 E'_j, where E'_j consists of finitely many level-n extremal chains and E'_j⊂ E'_j+1. Thus, there is an integer j_0≥ 0 such that f(E_0)⊂ E'_j for j≥ j_0. Let E_j” be the component of f^-1(E'_j) that contains E_0. By induction, the continuum E_j” consists of finitely many level-n extremal chains, and thus a level-(n+1) Fatou chain. Since K is extremal, we have E_j”⊂ K. As a consequence, E'_j=f(E_j”)⊂ f(K) for all j≥ j_0. It follows that f(K)=K' is a level-(n+1) extremal chain. 
(2) Let m(k) denote the number of components of f^-1(E_k). Then m(k) is decreasing. Thus there is an integer k_0≥ 0 such that m(k)=m is a constant for k≥ k_0. Let E_i,k, 1≤ i≤ m, be the components of f^-1(E_k) such that E_i,k⊂ E_i,k+1. It follows that d_i:=(f|_ E_i,k) is a constant for k≥ k_0. Set K_i:=⋃_k≥ k_0E_i,k. Then f^-1(K)=⋃_i=1^m K_i and f(K_i)=K. By induction, each E_i,k is the union of finitely many level-n extremal chains, so K_i is a level-(n+1) Fatou chain. Let K'_i denote the level-(n+1) extremal chain containing K_i. Then f(K'_i)⊃ f(K_i)=K. By statement (a), the continuum f(K_i') is a level-(n+1) extremal chain. Thus f(K'_i)=f(K_i)=K, which implies ⋃_i=1^m K_i'=⋃_i=1^m K_i. Since E_i,k is disjoint from E_j,k if i≠ j, any level-n extremal chain in K_i is disjoint from that in K_j if i≠ j. So we obtain K_i'=K_i for 1≤ i≤ m. Finally, let w be a point in K∩ F_f. Then w∈ E_k for every large integer k. Furthermore, if w is not a critical value, we have #(f^-1(w)∩ K_i)=#(f^-1(w)∩ E_i,k)=(f|_ E_i,k)=d_i. Then the lemma is proved. According to Lemma <ref>, every level-n extremal chain is eventually periodic. Moreover, for any level-n extremal chain K≠, its boundary and interior are contained in the Julia set and Fatou set of f, respectively. To see this, note first that ∂ K⊂ J_f. If the interior of K contains a point in the Julia set, then f^m(K)= for a large integer m. Since f^m(K) is a level-n extremal chain, we obtain K=f^m(K)= by Definition <ref>. The following result provides a dynamical construction of periodic extremal chains. Let K be a periodic level-(n+1) extremal chain of f with period p≥ 1. Let E_0 be the union of all periodic level-n extremal chains in K. Then E_0 is connected, f^p(E_0)=E_0 and K=⋃_k≥ 0E_k, where E_k is the component of f^-kp(E_0) that contains E_0. First note that f^p(E_0)=E_0 since the image of a periodic level-n extremal chain is also a periodic level-n extremal chain. By Lemma <ref>, E_0 is contained in a continuum E⊂ K which is the union of finitely many level-n extremal chains. Since f^p(E_0)=E_0, it follows that E_0⊂ f^kp(E) for every k>0. On the other hand, since each level-n extremal chain is eventually periodic, we obtain f^k_0p(E)⊂ E_0 for some integer k_0≥ 0. Therefore E_0=f^k_0p(E) is connected. By Lemma <ref> (2), each E_k is a level-(n+1) Fatou chain, and E_0⊂ E_k contains Fatou domains. Thus ⋃_k≥0E_k⊂ K by the definition of extremal chains. Conversely, for any level-n extremal chain σ⊂ K, there is a continuum E' such that E_0∪σ⊂ E' and E' is the union of finitely many level-n extremal chains. As above, we have f^k_1p(E')⊂ E_0 for an integer k_1>0. Then σ⊂ E'⊂ E_k_1, and therefore K⊂⋃_k≥0E_k. By definition, every level-n extremal chain is contained in a level-(n+1) extremal chain. The next result shows that the growth of extremal chains will stop at a certain level. There is an integer N≥ 0 such that any level-n extremal chain of f is a level-N extremal chain for n≥ N. Let k(n) denote the number of periodic level-n extremal chains of f. Then k(n) is decreasing. Thus there is an integer n_0 such that k(n) is a constant for n≥ n_0. It implies that two distinct periodic level-n extremal chains are disjoint for n≥ n_0. For each periodic Fatou domain U of f with period p≥ 1, denote by K_n(U) the level-n extremal chain containing U. Then f^p(K_n(U))=K_n(U) and K_n(U) is the unique periodic level-n extremal chain contained in K_n+1(U) for n≥ n_0. 
If K_n(U) is not a component of f^-p(K_n(U)), we have (f^p|_K_n+1(U))>(f^p|_K_n(U)) by Lemmas <ref> and <ref>. On the other hand, since (f|_K_n+1(U))≤ f, there is an integer n(U)≥ n_0 such that (f^p|_K_n(U)) is a constant for n≥ n(U). Thus K_n(U) must be a component of f^-p(K_n(U)) for n≥ n(U). It then follows from Lemma <ref> that K_n+1(U)=K_n(U) for n≥ n(U). Let N_1 be the maximum of {n(U)} for all periodic Fatou domains U of f. Then every periodic level-n extremal chain is a level-N_1 extremal chain for n≥ N_1. For any level-N_1 extremal chain K, there is an integer q≥ 0 such that f^q(K) is a periodic level-N_1 extremal chain. Let K_i denote the level-(N_1+i) extremal chain containing K for i>0. Then f^q(K_i) is a periodic level-(N_1+i) extremal chain containing f^q(K), and hence f^q(K_i)=f^q(K). Applying Lemma <ref> (2) to f^q, we obtain that K_i=K_1 for i≥ 1. Therefore, the lemma holds if we define N:=N_1+1. By Lemma <ref>, there is an integer N≥ 0 such that any level-n extremal chain is a level-N extremal chain for every n≥ N. For any Fatou domain U of f, let K(U) be the level-N extremal chain containing U. If a Fatou chain K intersects K(U), then K∪ K(U) is contained in an extremal chain of level N+1. This implies K⊂ K(U). So K(U) is a maximal Fatou chain. By Lemma <ref>, the image and components of the pre-image of a maximal Fatou chain are still maximal Fatou chains. § CLUSTER-EXACT DECOMPOSITION In this section, we will establish the cluster-exact decomposition (Theorem <ref>) for marked rational maps. This decomposition theorem corresponds to Theorem <ref> (i),(ii), and the remaining part of Theorem <ref> follows from Theorem <ref>, which will be proved in the next section. In Section 4.1, we study the combinatorics of planar continua and domains by their branched numbers. In Section 4.2, we characterize the dynamics of stable sets by proving Theorem <ref>. In Section 4.3, we obtain an important result, called exact decomposition, that serves as a key step towards the cluster-exact decomposition. Finally, we complete the proof of the cluster-exact decomposition in Section 4.4. §.§ Branched numbers Let P⊂ be a finite marked set and let E⊂ be a connected open or closed set. Recall that E is simple-type (rel P) if there is a simply connected domain D⊂ such that E⊂ D and #(D∩ P)≤ 1; or annular-type if E is not simple-type and there is an annulus A⊂∖ P such that E⊂ A; or complex-type otherwise. The branched number of E (rel P) is defined by b(E):=#(E∩ P)+κ(E), where κ(E) is the number of the components of E that intersect P. By definition, E is complex-type if and only if b(E)≥ 3, and b(E)=2 if E is annular-type. Let K_0⊂ K be continua in . Recall that K_0 is a skeleton of K (rel P) if K_0∩ P=K∩ P and any two points of P in distinct components of K are contained in distinct components of K_0. It is easy to check that K_0 is a skeleton of K⟺b(K_0)=b(K) and #(K_0∩ P)=#(K∩ P) The following statements hold. (i) For any continuum E⊂, there is a domain U⊃ E such that b(U)=b(E). (ii) For any domain U⊂, there is a continuum E⊂ U such that b(U)=b(E). (i) Let V_i, 1≤ i≤ n, be the components of E containing points of P. Then there is a full continuum K_i⊂ V_i such that P∩ K_i=P∩ V_i. Set U=⋃_i=1^n K_i. Then U⊃ E is a domain and b(U)=b(E). (ii) Let E_j, 1≤ j≤ m, be the components of U that intersect P. Then there are disks V_j⊃ E_j with pairwise disjoint closures such that ∂ V_j⊂ U and P∩ E_j=P∩ V_j. Since U is a domain, there is a graph E⊂ U which contains P∩ U and all ∂ V_j,j=1,…,m. 
It follows that b(U)=b(E). Suppose that V⊂ is a complex-type domain and ⊂ V is a compact set. Let be the collection of all complex-type components of either V or . Then ∑_ E∈ (b(E)-2)=b(V)-2. There are at most #P elements of intersecting P, and at most #P-2 elements of disjoint from P since each of them divides P into at least three parts. So is a finite collection. In order to prove the equality, we define a graph T as follows. Let _1 be the collection of all components of V intersecting P. There is a bijection v from _1∪ onto the set of vertices of T. Two vertices v(E_1) and v(E_2) of T are connected by an edge if and only if E_1 and E_2 are adjacent, i.e., no elements of separate E_1 from E_2. Then T is a tree. Note that for any element E∈_1∪, the number of edges of T linking to the vertex v(E) is exactly κ(E), i.e., the number of the components of ∖ E intersecting P. Thus v(E) is an endpoint of T precisely if κ(E)=1. In particular, v(E) is an endpoint if E∈_1. Let k_0≥ 0 denote the number of elements of with κ(E)=1. Then T has exactly κ(V)+k_0 endpoints. Since T is a tree, we have κ(V)+k_0-2=∑(κ(E)-2), where the summation is taken over all elements of with κ(E)≥ 2. It follows immediately that κ(V)-2=∑(κ(E)-2), where the summation is taken over all elements of . Thus, the lemma is true if V∩ P=∅. In the general case, without loss of generality, we can assume that all marked points in are interior points of . Then there exists a small number r>0 such that (z,3r)⊂ V for each point z∈ P∩ V, and (z,3r)⊂ if z∈ P∩. Set V':=V∖⋃_z∈ P∩ V(z,r) and ':=∖⋃_z∈ P∩(z,2r). Let ' be the collection of all complex-type components of either V'' or '. It follows that * ∑_E'∈'(b(E')-2)=b(V')-2, because V'∩ P=∅; and * b(V)=b(V') and each E'∈' is contained in a unique element E of with b(E')=b(E). Therefore we have ∑_E∈(b(E)-2)=b(V)-2. The lemma is proved. The following statements hold. (1) Let K_0⊂ K be continua in . Then b(K_0)≤ b(K). (2) Let {K_n} be a sequence of continua in such that K_n⊂ K_n+1 for all n≥ 0. Then there exists N≥0 such that b(K_n)=b(K_ N) and K_ N is a skeleton of K_n for every n≥ N. (3) Let {K_n} be a sequence of continua in such that K_n+1⊂ K_n for all n≥ 0, and set K:=⋂_n≥ 1K_n. Then b(K)=b(K_n) as n is sufficiently large. (a) By Lemma <ref>, there is a domain U⊂ such that b(U)=b(K). It then follows from Lemma <ref> that b(K_0)≤ b(U)=b(K). (b) Note that the numbers b(K_n) and #(K_n∩ P) are increasing and bounded above by # P. Thus there exists an integer N≥0 such that both b(K_n) and #(K_n∩ P) are constants for every n≥ N. Due to relation (<ref>), K_ N is a skeleton of K_n for every n≥ N. (c) By statement (a), the number b(K_n) is decreasing. Thus b(K_n) is a constant b≥ 1 as n is sufficiently large. Since K is a connected closed set, we have b(K)≤ b. On the other hand, by Lemma <ref>, there is a domain U⊃ K such that b(U)=b(K). Since K_n⊂ U for every large integer n, it follows from Lemma <ref> that b(K)=b(U)≥ b(K_n)=b. Now let (f,P) be a marked rational map. Since f(P)⊂ P, we immediately obtain the following pullback principle. Let (f, P) be a marked rational map. Suppose that E⊂ is a connected open or closed set. If E is simple-type, then each component of f^-1(E) is simple-type; and if E is annular-type, each component of f^-1(E) is either annular-type or simple-type. Let (f, P) be a marked rational map. Let E⊂ E' be connected open or closed sets in with b(E)=b(E'). Let E_1' be a component of f^-1(E'). Then E_1:=E_1'∩ f^-1(E) is connected. 
Moreover, if E is a skeleton of E', then E_1 is a skeleton of E'_1. By Lemma <ref>, there is a domain V⊃ E' and a compact connected set K⊂ E such that b(V)=b(K). Let V_1 be the component of f^-1(V) that contains E_1'. According to Lemma <ref>, each component U of V K is either simple-type or annular-type, and ∂ U has exactly one component contained in K. As a consequence, any component of f^-1(U) is either simple-type or annular-type due to Lemma <ref>, and its boundary has exactly one component contained in f^-1(K). This implies that V_1 contains exactly one component K_1 of f^-1(K) and b(V_1)=b(K_1). So the former part of the lemma holds. Furthermore, if E is a skeleton of E', then E∩ P=E'∩ P, which implies E_1∩ P=E_1'∩ P. Note also that b(K_1)≤ b(E_1)≤ b(E_1')≤ b(V_1)=b(K_1). Thus E_1 is a skeleton of E_1' by (<ref>). §.§ Stable sets Recall that a stable set of a rational map f is a non-empty and finite disjoint union of continua such that f()⊂ and each component of f^-1() is either a component of or disjoint from . By definition, each component of is eventually periodic and ∂ is still a stable set of f provided that ≠. Throughout this subsection, let f be a given PCF rational map. Let K⊊ be a connected stable set of f. Then ∂ K⊂ J_f. Choose a domain W⊃ K such that b(K)=b(W). Then each component of f^-1(W) contains exactly one component of f^-1(K) by Lemma <ref>. In particular, the component W_1 of f^-1(W) which contains K is disjoint from f^-1(K) K. Suppose to the contrary that ∂ K∩ F_f≠∅. Since K is a component of f^-1(K), we have f(∂ K)=∂ K. Thus there exists a super-attracting periodic point a∈∂ K. Without loss of generality, we may assume f(a)=a. Let U be the Fatou domain containing a. Then there is a disk Δ⊂ U such that it is a round disk under the Böttcher coordinate and Δ⊂ W. It implies that if z∈ K∩Δ, then f^-1(z)∩ U⊂ K. Let _t⊂Δ be the Jordan curve such that _t is the round circle with radius t∈ (0,1) under the Böttcher coordinate. Since K is connected and a∈ K, there is a point t_0∈ (0,1) such that _t_0∩ K≠∅ and _t_0⊂Δ. It follows that _t∩ K≠∅ for all t∈ (0,t_0) since _t separates _t_0 and a. In particular, given any t∈ (0,t_0), f^k(_t)∩ K≠∅ for all k≥ 1. Pick a point z_k∈ f^k(_t)∩ K. Then (f^-k(z_k)∩ U)⊂_t∩ K. Since _t∩ K is compact and ⋃_k≥ 1(f^-k(z_k)∩ U) is dense in _t, we obtain _t⊂ K for all t∈ (0,t_0), a contradiction. The following lemma offers a way to obtain stable sets. Let {V_n}_n≥ 0 be a sequence of domains in ℂ such that V_n+1⊂ V_n and f:V_n+1→ V_n is proper. If for any n≥ 0, there is an integer m>n such that V_m⊂ V_n, then K=⋂_n>0 V_n is a stable set of f when K is not a singleton. It follows from the konwn condition that K is a component of f^-1(K). Hence K is a stable set unless it is a singleton. Let K be the union of K and all components of K disjoint from P_f. If K=, then f^-1(K)=K, and thus K=K=, which contradicts the condition that K≠. Now assume K≠. Let denote the collection of components of K. We define a self-map f_* on as follows. If D∈ is disjoint from f^-1(K), then f(D)∈ and we set f_*(D):=f(D). Otherwise, let D' be the component of D f^-1(K) with ∂ D'⊃∂ D. In this case f(D') is an element of , and we define f_*(D):=f(D'). Since is a finite collection, each of its element is eventually periodic under f_*. Assume that D_i,0≤ i<p, is a cycle in with D_i=f_*^i(D_0) and D_0=f_*^p(D_0). 
Since f is expanding in a neighborhood of J_f under the orbifold metric and ∂ K⊂ J_f by Lemma <ref>, for each 0≤ i< p, there is an annulus A_ D_i=A_i⊂ D_i P_f with ∂ D_i⊂∂ A_i, such that A_i^1⊂ A_i∪∂ D_i, where A_i^1 is the component of f^-1(A_i+1) (set A_p=A_0) with ∂ A_i^1⊃∂ D_i. With a similar argument, we can assign an annulus A_ D for every periodic element D of . If D'∈ is not f_*-periodic but f_*(D')=D is periodic, we assign an annulus A_ D'⊂ D' P_f with ∂ D'⊂∂ A_ D', such that A_ D^1⊂ A_ D'∪∂ D', where A_ D^1 is the component of f^-1(A_ D) with ∂ D'⊂ A_ D^1. Repeat this process, we assign an annulus A_ D for each element D∈. Let V be the union of K and A_ D for all D∈. Then V is a finitely connected domain with V∩ P_f=K∩ P_f. Moreover, the component U of f^-1(V) which contains K is compactly contained in V by the construction of A_ D. Since K is not a singleton, it follows from <cit.> that f|_K≥ 2. Thus f: U→ V is a rational-like map (refer to <cit.>). Then the theorem follows directly from <cit.>. Let {_n}_n≥ 0 be a sequence of stable sets of f such that _n+1⊂_n. Then there is an integer N≥ 0 such that _n=_ N for every n≥ N. By the pullback principle (Lemma <ref>), we can split each stable set _n into two stable sets ^0_n and '_n, such that each periodic component of ^0_n is simple-type or annular-type, and each periodic component of '_n is complex-type. Then '_n+1⊂'_n by Corollary <ref> (1). We first assume that the components of '_n are all complex-type for every n≥ 0. The branched number of _n' is defined by b('_n)=∑ (b(K)-2)+2, where the summation is taken over all components of _n'. Then b('_n+1)≤ b('_n) by Lemma <ref>. Thus there is an integer n_1≥ 0 such that b('_n) is a constant for n≥ n_1. This implies that for n≥ n_1, each component of '_n contains at least one component of '_n+1. Let k(n) be the number of components of '_n for n≥ n_1. As argued above, k(n) is increasing. However, Lemma <ref> implies k(n)≤#P_f-2. Thus there is an integer n_2≥ n_1 such that k(n) is a constant for n≥ n_2. As a consequence, each component K_n of '_n contains exactly one component K_n+1 of '_n+1 for n≥ n_2. Since b('_n) is a constant for n≥ n_2, it follows that b(K_n)=b(K_n+1). To complete the proof, we need to show that for each periodic component K_n of '_n, it holds that K_n+1=K_n as n>n_2 is large enough. Without loss of generality, we may assume f(K_n)=K_n. Then f(K_n+1)=K_n+1. By Theorem <ref> and Lemma <ref>, we know that ⋃_k≥ 0 (f|_K_n)^-k(∂ K_n+1)=∂ K_n+1 is dense in ∂ K_n. Hence ∂ K_n+1=∂ K_n. If K_n+1≠ K_n, it implies that K_n K_n+1⊂ F_f. Since f has at most 2 f-2 cycles of Fatou domains, the inequality K_n+1≠ K_n can only occur finitely many times. Hence, there is an integer n_3≥ n_2 such that '_n='_ n_3 for n≥ n_3. In general, let ”_n be the union of all complex-type components of '_n. Then ”_n is also a stable set of f and ”_n+1⊂”_n for all n≥ 0. Based on the previous discussion, we can find an integer N_0≥ 0 such that ”_n=”_ N_0 for every n≥ N_0. Note that _n” contains all periodic components in _n', which means that any component of _n' is eventually iterated to _n”. Thus, for any m≥ N_0 and any component K of _m', either K is a component of _n' for every n≥ m, or K∩_n'=∅ when n is large enough. As a consequence, the number l(n) of the components of '_n (n≥ N_0) is decreasing. Therefore, there exists an integer N≥ N_0 such that l(n)=l(N) for every n≥ N. This implies '_n='_ N for n≥ N. Since '_n='_ N as n≥ N, it follows that ^0_n+1⊂^0_n for n≥ N. 
For any periodic component K of ^0_n, the renormalization of f^p on K is conformal conjugate to z↦ z^d or z↦ 1/z^d with d≥ 2. Thus K is either a Jordan curve or the closure of a periodic Fatou domain of f. In the former case, the cycle of K contains no other stable set of f except itself. In the latter case, the cycle of ∂ K is the unique stable set of f properly contained in the cycle of K. So we have ^0_n+1=^0_n as n≥ N is large enough. §.§ Exact decomposition Let (f,P) be a marked rational map. Suppose that is a stable set of f. Let and _1 be the union of all complex-type components of and f^-1(), respectively. By the pullback principle (Lemma <ref>), it holds that f(_1)⊂. We say that induces an exact decomposition of (f,P) if either =∅, or f:_1→ is an exact sub-system of (f,P), i.e., each component of ∖_1 is a full continuum disjoint from P; See Definition <ref>. The next result serves as the key step towards the cluster-exact decomposition. By an exceptional stable set, we mean a stable set containing the Julia set. Let (f,P) be a marked rational map, and let _0 be a non-exceptional stable set of f. Then there is a non-exceptional stable set ⊃_0 that induces an exact decomposition of (f,P). Moreover, if each component of _0 intersects or separates P (defined before Lemma <ref>), so does that of . The condition that each component of 𝒦_0 intersects or separates P is equivalent to κ(U)=#(∂ U) for any component U of ∖𝒦_0. In particular, annular-type components of ∖𝒦_0 are annuli. Here recall that κ(U) is the number of components of ∖ U intersecting P, and Comp(·) denotes the collection of all components of the corresponding set. We can always choose an f-invariant and finite set P_1⊃ P such that P_1∖ P⊂_0 and each component of _0 intersects or separates points of P_1. Obviously, any complex-type domain rel P is still complex-type rel P_1. Then by definition, if induces an exact decomposition of (f,P_1), it also induces an exact decomposition of (f,P). So it is enough to prove the theorem for (f,P_1). Therefore, we can assume that each component of _0 intersects or separates P. For any stable set of f, we denote by ^n the union of all components of f^-n() that intersect or separate P. By Lemma <ref>, each ^n is a stable set of f and ^n⊂^n+1. For each n≥0, let _n be the union of all complex-type components of ∖^n_0. It follows immediately that _n+1⊂_n. Assume that _n≠∅ for all n≥ 0. Then there exists a positive integer N_0 such that any component U_ N_0 of _ N_0 contains a unique component U_n of _n for every n≥ N_0, and it holds that #(U_n∩ P)=#(U_ N_0∩ P) and # Comp(∂ U_n)=# Comp(∂ U_ N_0). Let k(n) denote the number of complex-type components of _0^n. Then k(n) is increasing, and k(n)≤#P-2 by Lemma <ref>. Thus there is an integer n_0 such that k(n)=k(n_0) for all n≥ n_0. Therefore, _n_0 contains no complex-type components of _0^n for all n>n_0. Fix a component U_n of _n with n≥ n_0. Since U_n contains no complex-type components of _0^m for m>n, it follows from Lemma <ref> that ∑(b(U)-2)=b(U_n)-2>1, where the summation is taken over all components of _m contained in U_n. Thus U_n contains at least one component of _m. As a consequence, the number v(n) of the components of _n is increasing as n≥ n_0. Note that #(_n∩ P) is decreasing. Then there is an integer n_1≥ n_0 such that both v(n) and #(_n∩ P) are constants for n≥ n_1. Thus, each component U_n_1 of _n_1 contains a unique component U_n of _n for every n>n_1 such that #(U_n∩ P)=#(U_n_1∩ P). 
As b(U_n) is decreasing, there is an integer N_0>n_1 such that b(U_n)=b(U_ N_0) for all n≥ N_0. Finally, since each component of _0^n intersects or separates P, all complementary components of U_n intersect P, i.e., # Comp(∂ U_n)=κ(U_n). It follows that # Comp(∂ U_n)=b(U_n)-#(U_n∩ P) is a constant for n≥ N_0 by the choice of N_0.

According to Lemma <ref>, any component U_ N_0 of _ N_0 and any component λ_ N_0 of ∂ U_ N_0 determine a sequence of pairs (U_n, λ_n) for n≥ N_0, where U_n is the component of _n contained in U_ N_0, and λ_n is the component of ∂ U_n such that either λ_n+1=λ_n, or λ_n+1 is disjoint from λ_n but separates λ_n from U_n+1. Since _ N_0 has finitely many components, all of which are finitely connected, there exists an integer N ≥ N_0 such that, for any determined sequence {(U_n,λ_n), n≥ N}, exactly one of the following two cases occurs:
* λ_n=λ_ N for all n≥ N;
* for any n≥ N, there is an integer m>n such that λ_m is disjoint from λ_n and separates λ_n from U_m.
We call λ_ N an exact boundary component of U_ N in the first case.

From now on, write =_ N, and denote by _n the union of all complex-type components of f^-n(). Then _n coincides with the union of all complex-type components of ∖ f^-n(^ N_0). This implies _n⊂_ N+n. Note that any component of f^-n(^ N_0)∖^ N+n_0 neither intersects nor separates P, while each component of ∂_ N+n intersects or separates P. It follows that _ N+n∖_n consists of pairwise disjoint full continua disjoint from P. Therefore,
(a) each component V=U_ N of contains a unique component V_n of _n such that U_ N+n∖ V_n consists of pairwise disjoint full continua that avoid P;
(b) for any boundary component λ of V, there is a unique boundary component λ_n of V_n parallel to λ in the sense that either λ_n=λ or λ_n separates λ from V_n.
We say that V is an exact (resp. renormalizable) component of if all components of ∂ V are exact (resp. non-exact) boundary components of V; See Figure <ref> (the pants represent V and the domains colored yellow are V_1). If V=U_ N is exact, then V=U_ N+1. By this point and statement (a) above, it follows that V∖ V_1 consists of full continua disjoint from P. This implies immediately the following:

The stable set ^ N_0 induces an exact decomposition of (f,P) if every component of is exact.

Let be the collection of all components of . Then f:_1→ induces a self-map f_# on defined by f_#(V):=f(V_1), where V_1 is the unique component of _1 contained in V. Since is a finite collection, each component of is eventually f_#-periodic. The map f:_1→ also induces a self-map f_* on the collection ∂ of the boundary components of V for all V∈. This self-map is defined by f_*(λ):=f(λ_1), where λ_1 is the unique boundary component of V_1 parallel to λ. Since ∂ is a finite collection, its elements are eventually f_*-periodic.

Let V be a component of and let λ be a component of ∂ V. Then λ is an exact boundary component of V if and only if f_*(λ) is an exact boundary component of f_#(V). As a result, if V is non-exact, then f_#(V) is still non-exact.

For each n≥0, we denote by V_n the unique component of _n contained in V, and by λ_n the unique component of ∂ V_n parallel to λ. Set W=f_#(V) and η=f_*(λ). Similarly, we can define W_n and η_n for n≥0. By definition, it holds that f(V_1)=W and f(λ_1)=η. If λ is exact, then λ_n+1=λ and η=f(λ_n+1)=η_n for all n≥ 0. So η is exact. If λ is non-exact, there is an n≥0 such that λ_n+1∩λ_1=∅. Choose an annulus A⊂ W∖ P that is bounded by η and a Jordan curve in W_n.
Since b(W_n)=b(W), it follows from Lemma <ref> that f^-1(W_n)∩ V_1=V_n+1. Let A_1⊂ V_1 be the component of f^-1(A) with λ_1⊂∂ A_1. Then A_1 is an annulus disjoint from P and the boundary component of A_1 other than λ_1 is contained in V_n+1. Since λ_n+1∩λ_1=∅, we have λ_n+1⊂ A_1. It follows that A contains a boundary component of W_n parallel to η, which can only be η_n. Thus η is non-exact by the choice of N. According to Proposition <ref>, if all components of are exact, then Theorem <ref> holds by defining =^ N_0. If the components of are either exact or renormalizable, we denote by ' the union of all renormalizable components of , and by _n' the union of all components of _n within '. By Proposition <ref>, the map f_# is invariant on both the collection of all renormalizable components and the collection of all exact components of . Thus f:_1∖_1'→∖' is an exact sub-system and ':=⋂_n≥ 1_n' is a stable set of f disjoint from _0 by Lemma <ref>. Therefore, Theorem <ref> holds if we set :=^ N_0∪'. However, might contain components that are neither exact nor renormalizable; See figure <ref>. In this case we need to combine these components to obtain a renormalization domain. Suppose that V is an f_#-periodic and non-exact component of . Then there is a non-exceptional stable set ' of f whose components are all complex-type, such that ⋂_n≥0V_n⊂', where V_n denotes the component of _n contained in V. Moreover, each component of _0 is either contained in ' or disjoint from '. We can quickly deduce Theorem <ref> from Lemma <ref>. We adhere to the notations mentioned above. If =∅ or contains only exact components, the theorem holds by taking =^ N_0 due to Proposition <ref>. Otherwise, has an f_#-periodic and non-exact component V by Proposition <ref>. Let ' be the non-exceptional stable set obtained in Lemma <ref>. Then there is a large integer N' such that (')^ N'+1 (')^ N' is disjoint from _0. Set _1=_0∪ (')^ N'. It is a non-exceptional stable set of f and its components all intersect or separate P. Since ⋂_n≥ 0V_n is a complex-type continuum (by Corollary <ref> (3)) not contained in _0, it follows from Lemma <ref> that b(_0):=∑ (b(K)-2)+2<b(_1):=∑ (b(K_1)-2)+2, where the first and second summations are taken over all complex-type components of _0 and _1, respectively. If _1^ N_1 induces an exact decomposition of (f,P) for an integer N_1, the theorem holds by taking =_1^ N_1. Otherwise, we can repeat the argument above using _1 in place of _0, and obtain a non-exceptional stable set _2⊃_1 such that b(_2)>b(_1) and each component of _2 intersects or separates P. Continuing this process successively, we obtain an increasing sequence of non-exceptional stable sets {_n} such that b(_n+1)>b(_n). Since b(_n)≤#P by Lemma <ref>, this process must stop after finite steps. Then the proof is complete. According to Proposition <ref>, there exists an f_*-periodic and non-exact boundary component λ of V. Its period is denoted by p. For each 0≤ i<p, set V_i,0:=f_#^i(V) and λ_i:=f_*^i(λ). Then f^p_#(V_i,0)=V_i,0 and each λ_i is a non-exact boundary component of V_i,0 by Proposition <ref>. For every n≥0, we denote by V_i,n the unique complex-type component of f^-np(V_i,0) contained in V_i,0. Equivalently, V_i,n is the component of _np contained in V_i,0. Let D_i,0 be the component of λ_i containing V_i,0. Then f^-p(D_i,0) has a unique component D_i,1 containing V_i,1, and D_i,1⊂ D_i,0 as λ_i is non-exact. Inductively, for each n≥ 1, f^-p(D_i,n) has a component D_i,n+1 containing V_i,n+1, and D_i,n+1⊂ D_i,n. 
By Corollary <ref>, K_i:=⋂_n≥ 1D_i,n is a complex-type continuum. Moreover, it is a stable set of f^p by Lemma <ref>, and K_i⊅J_f since λ_i is disjoint from of D_i,k for a large integer k. Then ∂ K_i⊂ J_f due to Lemma <ref>. Let r∈[1,p] be the smallest integer such that K_0=K_r. From the above construction, we obtain that K_i+1=f(K_i) and K_i+r=K_i for every i∈{0,…,p-1}. Then each of K_0,…,K_r-1 is a stable set of f^r and r is a factor of p. Moreover K_0,…,K_r-1 are pairwise distinct. In order to obtain a stable set of f, we need to consider the intersection of K_i with K_j. Suppose K_i∩ K_j≠∅ for distinct i,j∈{0,…,r-1}. Then * λ_j⊂ D_i,0 and λ_i⊂ D_j,0; * V_i,n∪ V_j,n⊂ D_i,n∩ D_j,n for all n≥0; and * if K_ℓ intersects K_i with ℓ∈{0,…,r-1}, then K_ℓ intersects K_j. We first claim that D_i,n⊈ D_j,0 for any n≥ 0. Assume by contradiction that D_i,m⊆ D_j,0 for some m≥ 0. Then, for all n≥ 1, D_i,m+n lies in a component of f^-np(D_j,0). This component must be D_j,n. For otherwise, it would contradict the condition that K_i∩ K_j≠∅. Therefore, we have D_i,m+n⊂ D_j,n for all n. This implies K_i⊂ K_j. Since (f^p|_K_i)=(f^p|_K_j) and both K_i and K_j are stable sets of f^p, we have ⋃_n>0(f^p|_K_j)^-n(K_i)=K_i. Furthermore, since f^p:∂ K_j→∂ K_j is quasi-conformal conjugate to the restriction of a rational map on its Julia set (Theorem <ref>), the set ⋃_k>0(f^p|_K_j)^-k(∂ K_i) is dense in ∂ K_j. This implies ∂ K_i=∂ K_j. Then each component of K_j∖ K_i, if existing, is a Fatou domain. However, since D_i,n+1⊂ D_i,n, any component of ∂ D_i,n for each n≥0 is not the boundary of a Fatou domain in K_j∖ K_i. Thus K_i=K_j. The claim is proved. (1) Since K_i∩ K_j≠∅, we have either D_i,0⊂ D_j,0, or D_j,0⊂ D_i,0, or λ_j⊂ D_i,0 and λ_i⊂ D_j,0. Then statement (1) follows directly from the above claim by setting n=0. (2) It is enough to show that V_i,n⊂ D_j,n for all n≥0. By statement (1), we have V_i,0⊂ D_j,0. As a consequence, for each n>0, either V_i,n⊂ D_j,n or V_i,n∩ D_j,n=∅. If V_i,n∩ D_j,n=∅ for some n>0, according to the construction of V_i,n and D_j,n, there is a component η of ∂ D_j,n that separates D_j,n from V_i,n. In particular η separates D_j,n from λ_i. By statement (1), it follows that D_j,n⊂ D_i,0, a contradiction to the claim above. (3) Without loss of generality, we assume that K_ℓ is distinct from both K_i and K_j. Then by applying statement (2) to {K_i,K_j} and {K_i,K_ℓ} respectively, we obtain that V_i,n⊂ D_j,n∩ D_ℓ,n for all n>0. This implies K_j∩ K_ℓ≠∅. Let s∈[1,r] be the smallest integer such that K_0∩ K_s≠∅. Then s is a factor of r. Set Z:={ks:0≤ k <r/s}. By Proposition <ref> (3), we have (a) K_i∩ K_j≠∅ for any pair i,j∈ Z, and (b) K_i∩ K_ℓ =∅ if i∈ Z and ℓ∈{0,…,r-1}∖ Z. Let D_0 be the intersection of all D_i,0 with i∈ Z. Applying Proposition <ref> (1) to each pair {K_i,K_j} with distinct i,j∈ Z, we conclude that D_0 is the domain with boundary components {λ_i:i∈ Z} and V_i,0⊂ D_0 for every i∈ Z. For every n≥1, denote D_n the component of f^-pn(D_0) containing V_0,n. By point (a) above and Proposition <ref> (2), it holds that ⋃_i∈ ZV_i,n⊂⋂_i∈ Z D_i,n for every n≥0. Moreover, since f^np(⋂_i∈ ZD_i,n)⊂⋂_i∈ ZD_i,0=D_0 and f^np(D_n)=D_0, it follows that ⋃_i∈ ZV_i,n⊂⋂_i∈ ZD_i,n⊂ D_n for all n≥0. This inclusion relation also implies D_n⊂ D_i,n for any i∈ Z and n≥0. Thus (c) for every n≥ 0, the equality ⋂_i∈ ZD_i,n= D_n holds. This equality implies D_n_1⊂ D_n_2 as n_2-n_1 is large enough. Then E:=⋂_n≥0D_n=⋂_n≥0D_n is a stable set of f^p by Lemma <ref>. 
Moreover, ∂ D_n is disjoint from _0 for every large integer n. So each component of _0 is either contained in E or disjoint from E. Since E contains ⋂_n≥0V_0,n, it follows from Corollary <ref> that E is complex-type. Also, as λ_0=λ⊂ J_f is disjoint from E, we have J_f⊄E. Finally, point (c) implies E=⋂_i∈ Z K_i. Therefore f^s(E)⊂ E, and hence E is also a stable set of f^s. Combining this with point (b) above, we can deduce that E,f(E),…, f^s-1(E) are pairwise disjoint. Thus ':=⋃_i=0^s-1 f^i(E) is a stable set of f, and it satisfies all conditions of Lemma <ref> according to the previous discussion.

§.§ Cluster-exact decomposition

Let (f,P) be a marked rational map. A continuum K⊂ J_f is called a cluster if it is a stable set of f^p for some p≥ 1 and the renormalization of f^p on K is a cluster rational map, i.e., the sphere is a Fatou chain of this rational map.

Let (f,P) be a marked rational map whose Julia set J_f is not the whole sphere. Let _f be the intersection of J_f with the union of all maximal Fatou chains of f intersecting P. Then there is a stable set of f with _f⊂⊂ J_f such that
* every periodic component of is a cluster, and
* induces an exact decomposition of (f,P).
Moreover, each component of intersects or separates P.

If J_f= _f, the theorem is true by taking =J_f. So we assume _f⊊ J_f. Note that _f is a stable set of f. Then by applying Theorem <ref> to _0=_f, we obtain a stable set _1 with _f⊂_1⊊ J_f such that _1 induces an exact decomposition of (f,P) and each component of _1 intersects or separates P. If every periodic component of _1 is a cluster, the theorem holds by taking =_1.

Now suppose that K_* is a periodic component of _1 with period p≥ 1 such that K_* is not a cluster. By Theorem <ref>, there is a marked rational map (g,Q) and a quasiconformal map ϕ of the sphere, such that J_g=ϕ(K_*) and ϕ∘ f^p=g∘ϕ on K_*, where Q is the union of ϕ(P∩ K_*) together with all centers of Fatou domains U of g such that ϕ^-1(U) contains a point of P. In particular, g is not a cluster rational map. As before, we can define _g for (g,Q). Then _g⊊ J_g. By applying Theorem <ref> to (g,Q) and _g, we obtain a stable set _g of g with _g⊂_g⊊ J_g, such that _g induces an exact decomposition of (g,Q) and each component of _g intersects or separates Q. Set =ϕ^-1(_g). Then ⊊ K_* is a stable set of f^p, and we have the commutative diagram
(K_*,)  --f^p-->  (K_*,)
  ϕ↓                 ↓ϕ
(J_g,_g)  --g-->  (J_g,_g).
From the choice of Q, it follows that each component of intersects or separates P. It is worth noting that _f∩ K_* is also a stable set of f^p. For any continuum E⊂, we always denote by E the union of E and all components of ∖ E disjoint from P.

Both _f∩ K_* and ∂K_* are contained in .

It is enough to prove that ϕ(_f∩ K_*) and ϕ(∂ K_*) are contained in _g(⊂_g). Remember always that ϕ sends a component of ∖ K_* onto a Fatou domain of g. Let B be a marked maximal Fatou chain of (f,P) such that ∂ B is a component of _f contained in K_*. Notice that each component of B∖∂ B is a Fatou domain of f, and hence a component of ∖ K_*. This implies that ϕ(B) lies in a marked maximal Fatou chain of (g,Q). Hence ϕ(∂ B)=∂ϕ(B)⊂_g. For any point z∈∂ K_*, there exists a component D of ∖ K_* with z∈∂ D, and such D must intersect P. Then ϕ(∂ D) is the boundary of a marked Fatou domain of (g,Q). It follows immediately that ϕ(z)∈_g.

Let K_1,…,K_m be all components of _1 whose orbits pass through K_*. For each K_i, there is a smallest integer k_i≥ 0 such that f^k_i(K_i)=K_*. Thus K_i is a component of f^-k_i(K_*).
Let _i denote the union of all components of f^-k_i()∩ K_i that either intersect or separate P. Then both _f∩ K_i and ∂K_i are contained in _i for each i∈{1,…,m} by Proposition <ref>. Set _2=(_1⋃_i=1^m K_i)∪⋃_i=1^m_i. The previous discussion shows that _2 is a stable set of f with _f⊂_2⊊ J_f, and each component of _2 intersects or separates P. Moreover, it holds that ⋃_K∈ Comp(_1)∂ K ⊂ _2⊊_1. The stable set _2 induces an exact decomposition of (f,P). Suppose that is a stable set of f. We deduce from definitions that * The stable set induces an exact decomposition of (f, P) if and only if, for any complex-type component V of ℬ, if a component B_1 of f^-1(ℬ) lies in V, then B_1 neither intersects nor separates P. * For any component B of ℬ, a component B_1 of f^-1(ℬ) that intersects B is either equal to B, or contained in a component of B B, which is simply connected and avoids P. We shall use statement (1) to prove this proposition. Let V be any complex-type component of ∖_2. By the construction of _2 and the inclusion relation (<ref>), the domain V is either a complex-type component of _1, or a complex-type component of K_i∖_i for some i∈{1,…,m}. Let E be a component of f^-1(_2) that lies in V. Since _2⊂_1, the continuum E is contained in a component of f^-1(_1), which is denoted by K(E). The purpose is to check that E neither intersects nor separates P. Case 1. The domain V is also a component of _1. Since _1 induces an exact decomposition of (f,P), by statement (1) above, K(E) neither intersects nor separates P. So does E. Case 2. The domain V is a complex-type component of K_i∖_i for some 1≤ i≤ m. In this case K(E) intersects K_i. Then by statement (2), either K(E)=K_i or K(E) is contained in a component D of K_i∖ K_i. The domain D is simply connected and disjoint from P. Moreover, we have D⊂ V since E⊂ V. So it is enough to consider the former case. The equality K(E)=K_i implies that E⊂ K_i and f(E)⊂ f(K_i)=K_j for some j. Thus f(E) is a component of _j⊂ K_j. Since E⊂ V is disjoint from _i, by the definition of _i, exactly one of the following two situations occurs: * K_i≠ K_* and E neither intersects nor separates P; * K_i=K_* and E is a component of (f^p|_K_*)^-1() that lies in V. Thus, we only need to deal with the second situation. By commutative graph (<ref>), ϕ(E) is a component of g^-1(_g). Note also that ϕ(V) is a complex-type component of _g. Since _g induces an exact decomposition of (g, Q), it follows from statement (1) that ϕ(E) neither intersects nor separates Q. Thus, E neither intersects nor separates P. By Proposition <ref>, if every periodic component of _2 is a cluster, then Theorem <ref> is true by choosing =_2. Otherwise, we can repeat the above argument using _2 in place of _1, and obtain a stable set _3 with _f⊂_3⊊_2 such that _3 induces an exact decomposition of (f,P) and each component of _3 intersects or separates P. Continuing this process successively, we obtain a sequence of stable sets {_n} with _f⊂_n⊊_n-1. This process must stop after finite steps due to Lemma <ref>. Then the proof of Theorem <ref> is complete. The subsequent corollary of Theorem <ref> will be used in Section 8. Let (f, P) be a marked rational map with J_f≠. 
Then there exists an f-invariant and finite set P'⊃ P and a stable set '⊂ J_f such that * the stable set ' induces a cluster-exact decomposition of (f,P'), and each of its components intersects P'; * every complex-type component of ∖' rel P' is disjoint from attracting cycles of f; * every simple-type component of ∖' rel P' is a simply connected domain; and * every annular-type component A of ∖' rel P' is an annulus, and moreover, if A∩ f^-1(')≠∅, then A contains an annular-type component of f^-1('). Let be the stable set obtained in Theorem <ref>. Consider a finite and f-invariant set Q_0⊂ such that each component of contains at least two points of Q_0. It is important to note that the complex-type components of ∖ rel P coincide with those rel P∪ Q_0. Hence items (1)–(3) and the former part of (4) hold for the stable set relative to P∪ Q_0. If the latter part of item (4) is false for an annular-type component A of ∖ rel P∪ Q_0, let K_ A be a component of f^-1()∩ A. We can select two points from f^-1(Q_0) within K_ A and denote by Q_1 the union of these two points with Q_0. Then the stable set _1:=∪ K_ A satisfies items (1)–(3) and the former part of (4) relative to P∪ Q_1. Moreover, the number of annular-type components of ∖_1 rel P∪ Q_1 is less than that of ∖ rel P∪ Q_0. If the latter part of item (4) is still false for _1 rel P∪ Q_1, we can repeat the argument above in place of and Q_0 with _1 and Q_1, respectively. Thus we obtain a sequence of stable sets {_n} and a sequence of f-invariant finite sets {Q_n} such that _n satisfies items (1)–(3) and the former part of (4) relative to P∪ Q_n, and the number of annular-type components of ∖_n rel P∪ Q_n is strictly decreasing as n increases. As a result, this process must stop after N steps for an integer N≥0. Then '=_ N and P'=P∪ Q_ N satisfy items (1)–(4). § BLOW-UP OF AN EXACT SUB-SYSTEM In this section, we will prove Theorem <ref> and complete the proof of Theorem <ref>. Throughout this section, let (f,P) be a marked rational map and let V⊂ be a domain such that ∂ V⊂ J_f consists of finitely many pairwise disjoint continua. We also assume that f:V_1→ V is an exact sub-system of (f,P), i.e., V_1 is a component of f^-1(V) contained in V and each component of V∖ V_1 is a full continuum disjoint from P. For two topological space X and Y, a homotopy from X to Y is a continuous map ξ:X× [0,1]→ Y. We usually write the homotopy as {ξ_t}_t∈[0,1]. §.§ Construction of the blow-up map Let λ be a component of ∂ V. Since V∖ V_1 is compact, we have λ⊂∂ V⊂∂ V_1. Thus f(λ) is also a component of ∂ V. Let E_λ be the component of ∖ V which contains λ. If E_f(λ) is disjoint from P, then f(E_λ)=E_f(λ) and E_λ is also disjoint from P. Let λ be a periodic component of ∂ V with period p≥ 1. Since f is expanding in a neighborhood of J_f under the orbifold metric, there is an annulus A⊂ V∖ P such that λ is a component of ∂ A, and A_1⊂ A∪λ, where A_1 is the component of f^-p(A) with λ⊂∂ A_1. A folklore argument implies that E_λ is locally connected and E_λ∩ P ≠∅. Since each component λ of ∂ V is eventually periodic, it follows that each component of V is locally connected. 0.1cm Now we begin to construct the blow-up map. Let χ be a conformal map from V onto a circular domain Ω̂⊂, i.e., each component of Ω̂ is a closed round disk in . Let Ω̂_1:=χ(V_1). Then ĝ:=χ∘ f∘χ^-1: Ω̂_1→Ω̂ is a holomorphic and proper map, which can be continuously extended to ∂Ω̂ such that ĝ(∂Ω̂)⊂∂Ω̂. 
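For the reader's convenience, we recall the classical fact behind the existence of the conformal map χ used above; this remark is not needed elsewhere. By Koebe's uniformization theorem for circular domains, every finitely connected domain is conformally equivalent to a circular domain, that is, a domain each of whose complementary components is a closed round disk or a single point. In the construction above the complementary components of Ω̂ are taken to be genuine closed round disks, which corresponds to the boundary components of V being non-degenerate continua.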
By the Schwarz reflection principle and the expanding property of f, the map ĝ is holomorphic and expanding in a neighborhood of ∂Ω̂. Denote =∖Ω̂. Define a map ℘: → by ℘(D̂_i)=D̂_j if ĝ(∂D̂_i)=∂D̂_j, where D̂_i and D̂_j are components of , and
℘(z)=r_j((z-a_i)/r_i)^d_i+a_j for z∈D̂_i,
where a_i and r_i are the center and the radius of the closed round disk D̂_i, respectively, and d_i=deg(ĝ|_∂D̂_i). Since ĝ is expanding on ∂Ω̂=∂, if ∂D̂_i is periodic with period p_i≥ 1, then there exists a quasi-symmetric map w_i: ∂D̂_i→∂D̂_i such that ℘^p_i∘ w_i=w_i∘ĝ^p_i on ∂D̂_i. By pullback, we obtain a quasi-symmetric map w:∂Ω̂→∂Ω̂ such that ℘∘ w=w∘ĝ on ∂Ω̂. Consider the conformal welding induced by w. There exist two conformal maps ζ: Ω̂→Ω̃ and η: int()→ int() such that ζ=η∘ w on ∂Ω̂, where :=∖Ω̃, and the notation int(·) represents the interior of the corresponding set. Define
g̃_0:=ζ∘ĝ∘ζ^-1 on ζ(Ω̂_1)⊂Ω̃, and g̃_0:=η∘℘∘η^-1 on η()=.
Then g̃_0 is a holomorphic map on ζ(Ω̂_1)∪η(). Set ξ_0:=χ^-1∘ζ^-1: Ω̃→ V and continuously extend it to a quotient map (defined in <ref>) of , due to the local connectivity of ∂ V. Then ξ_0∘g̃_0=f∘ξ_0 on Ω̃_1^*:=ζ(Ω̂_1).

For each n≥1, set V_n:=(f|_V_1)^-n(V). Then f:V_n+1→ V_n is an exact sub-system for each n≥1. By replacing V with some V_n, we may assume that V∖ V_1 is disjoint from f^-1( P ). This means that f sends a neighborhood of each component of V∖ V_1 homeomorphically onto a neighborhood of a complementary component of V. For each component of Ω̃∖Ω̃_1^*, we pick a small disk in Ω̃∖ξ_0^-1( P ∩ V) as a neighborhood of this component, such that these disks have pairwise disjoint closures. Let denote their union. Then g̃_0 is injective on ∂. We define a new map g̃:→ such that g̃ is continuous and injective on , and that g̃(z)=g̃_0(z) for all z∈. It is easy to check that g̃ is a PCF branched covering with deg(g̃)=deg(f|_ V_1) and it is holomorphic on . Note that the interior of each component D̃ of ∖Ω̃ contains a unique eventually periodic point z(D̃) of g̃. Set Z̃={z(D̃): ξ_0(D̃)∩ P ≠∅} and Q̃=ξ_0^-1( P ∩ V)∪Z̃. It follows that g̃(Z̃)⊂Z̃, g̃(Q̃)⊂Q̃ and P_g̃⊂Q̃.

Denote Ω̃_1=g̃^-1(Ω̃). Then Ω̃∖Ω̃_1 consists of pairwise disjoint closed disks in . Moreover, by lifting there is a homeomorphism θ:Ω̃_1→Ω̃_1^* such that θ=id on Ω̃ and g̃=g̃_0∘θ on Ω̃_1; See Figure <ref>. Since each component of ∂Ω̃_1 is a Jordan curve and g̃ is injective on ∂Ω̃_1∖∂Ω̃, we can continuously extend θ to a quotient map of . This extended map, still denoted by θ, sends Ω̃∖Ω̃_1 onto Ω̃∖Ω̃_1^*. We define ξ_1:=ξ_0∘θ. Then ξ_1 is a quotient map of such that ξ_1(Ω̃_1)=V_1, ξ_1=ξ_0 on and ξ_0∘g̃=f∘ξ_1 on Ω̃_1. Moreover, there is a homotopy ξ_t: →, t∈[0,1], such that ξ_t is a quotient map of and ξ_t(z)=ξ_0(z) for all z∈ and t∈ [0,1]. In particular, ξ_t(Q̃∩Ω̃)= P ∩ V.

Since g̃: Ω̃_1∖g̃^-1(Q̃)→Ω̃∖Q̃ and f: V_1∖ f^-1( P )→ V∖ P are both coverings, and { ξ_t^-1(z):t∈[0,1] } is a singleton in Q̃∩Ω̃ for every z∈ P ∩ V, the homotopy ξ_t:Ω̃∖Q̃→ V∖ P can be lifted by f and g̃ to a homotopy ξ_t:Ω̃_1∖g̃^-1(Q̃)→ V_1∖ f^-1( P ), t∈ [1,2], due to the general homotopy lifting theorem; See <cit.>. Furthermore, this homotopy can be extended to a homotopy ξ_t: →, t∈[1,2], such that each ξ_t is a quotient map and ξ_t(z)=ξ_1(z) on ∖g̃^-1() for every t∈[1,2]. Inductively using the above argument, we obtain a sequence of quotient maps {ξ_n} of such that ξ_n(Ω̃_n)=V_n, ξ_n+1=ξ_n on g̃^-n() and ξ_n∘g̃=f∘ξ_n+1 on Ω̃_n+1, where Ω̃_n= g̃^-n(Ω̃) and V_n=(f|_V_1)^-n(V).

The marked branched covering ( g̃, Q̃) is combinatorially equivalent to a marked rational map (g, Q).
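Before turning to the proof of this proposition, we record a small illustration of the model map ℘ constructed above; it is only meant to make the defining formula concrete and is not used in the arguments below. Writing T_i(z)=(z-a_i)/r_i and S_j(w)=r_jw+a_j for the affine maps identifying D̂_i and D̂_j with the closed unit disk, the formula for ℘ reads
℘|_D̂_i = S_j∘(w↦ w^d_i)∘ T_i,
so ℘ maps D̂_i onto D̂_j as a branched covering of degree d_i, sending the center a_i to the center a_j. In the simplest case where D̂_i=D̂_j is the closed unit disk, this is just ℘(z)=z^d_i.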
Let Γ={_k} be a multicurve of ( g̃,Q̃). Its transition matrix (a_kl) is defined by a_kl=∑ 1/deg(g̃:δ→_l), where the summation is taken over all the components δ of g̃^-1(_l) isotopic to _k rel Q̃. Since each component of ∖Ω̃ contains at most one point of Q̃, one may require that each curve in Γ is contained in Ω̃. Thus ξ_0∘g̃=f∘ξ_0 on g̃^-1(_k) for each _k∈Γ. By the choice of Q̃, the collection of curves ξ_0(Γ)={ξ_0(_k)} is a multicurve of the rational map f. Moreover, each entry of the transition matrix of ξ_0(Γ) under f is not less than the corresponding entry of the transition matrix of Γ under ( g̃, Q̃). Then ( g̃, Q̃) has no Thurston obstruction since f has no Thurston obstruction by <cit.>. Therefore ( g̃, Q̃) is combinatorially equivalent to a marked rational map (g,Q) by <cit.>.

§.§ Dynamics of the blow-up map

According to Proposition <ref>, there is an isotopy ϕ_t:→ rel Q̃ (t∈[0,1]) such that ϕ_0( Q̃)=Q and g∘ϕ_1=ϕ_0∘g̃ on . Recall that Z̃= Q̃∖Ω̃ and set Z=ϕ_0(Z̃).

Each Fatou domain of g with the center in Z is a disk whose boundary is disjoint from Q, and any two such Fatou domains have disjoint closures. In particular, g is a rational map if its attracting periodic points are all contained in Z.

To prove this proposition, we need a combinatorial criterion for determining whether the boundary of a Fatou domain contains marked points, whether it is a Jordan curve, and whether two Fatou domains have disjoint closures.

Let R be a PCF rational map, and let U be a periodic Fatou domain of R with the center a.
(i) A repelling periodic point b lies in ∂ U if and only if there exists an open arc β⊂∖ P_ R joining a and b, such that R^-p(β) has a component isotopic to β rel P_ R for some p≥ 1.
(ii) Let U'⊂ be another periodic Fatou domain of R with the center a'. Then ∂ U∩∂ U'≠∅ if and only if there exists an open arc β⊂∖ P_ R joining a and a', such that R^-p(β) has a component isotopic to β rel P_ R for some integer p≥ 1.
(iii) Assume that ∂ U∩ P_ R=∅. Then U is not a disk if and only if there exists an open arc β⊂∖ P_ R which joins a to itself, such that β separates P_ R, and R^-p(β) has a component isotopic to β rel P_ R for some integer p≥ 1.

(i) If b∈∂ U, the internal ray in U which lands at b satisfies the condition. Conversely, the arc β can be decomposed into two subarcs β=α∪δ, such that α⊂ U and δ is disjoint from super-attracting cycles of R. By successive lifting, R^-kp(β) has a component β_k isotopic to β rel P_ R, and β_k has a decomposition β_k=α_k∪δ_k such that R^kp(α_k)=α and R^kp(δ_k)=δ. Observe that α_k⊂ U, and diam(δ_k)→ 0 as k→∞ by Lemma <ref>. Hence b∈∂ U.

(ii) First assume that ∂ U∩∂ U'≠∅. We choose an open arc β' joining a and a' passing through a point z∈∂ U∩∂ U' such that β'∖{z} consists of two internal rays in U and U' respectively. If R^k(z)∉ P_ R for all k≥ 1, since # P_ R<∞, there are q,p≥ 1 such that R^q+p(β') is isotopic to R^p(β') rel P_ R. Let β=R^q+p(β'). Then R^-p(β) has a component isotopic to β rel P_ R. If R^k(z)∈ P_ R for some integer k≥ 1, then by Lemma <ref>, there are integers q,p≥ 1 such that R^q+p(β')=R^q(β'). Note that R^q(z) is a repelling periodic point in P_ R. Let β be an open arc obtained by modifying R^q(β') in a small neighborhood of the point R^q(z) such that R^q(z)∉β; then R^-2p(β) has a component isotopic to β rel P_ R. Conversely, we decompose β into three subarcs β=α∪δ∪α', such that α⊂ U, α'⊂ U' and δ is disjoint from super-attracting cycles of R.
By successive lifting, R^-kp(β) has a component β_k isotopic to β rel P_ R, and β_k can be decomposed as β_k=α_k∪δ_k∪α_k' such that R^kp(α_k)=α, R^kp(δ_k)=δ and R^kp(α_k')=α'. Observe that α_k⊂ U, α_k'⊂ U', and diam(δ_k)→ 0 as k→∞ by Lemma <ref>. Thus ∂ U∩∂ U'≠∅.

(iii) First assume that U is not a disk. Then there are two internal rays in U landing at a common point z∈∂ U. Let β' be the union of these two internal rays together with the point z. For simplicity, we assume R(U)=U. Since ∂ U∩ P_ R=∅, it follows that all R^k(β') are open arcs in ∖ P_ R with both endpoints at a. If ∖ R^k+1(β') has a component D_k+1 disjoint from P_ R, then ∖ R^k(β') also has a component D_k disjoint from P_ R, and R(D_k)=D_k+1. It follows that R^k(β') separates P_ R for each sufficiently large integer k. For otherwise, there would be a sequence {k_n} of integers tending to ∞ such that R^k_n(D_1)∩ P_ R=∅ for all n≥ 1. This is impossible as D_1∩ J_ R≠∅. Since # P_ R<∞, there are integers q,p≥ 1 such that R^q+p(β') is isotopic to R^p(β') rel P_ R. Let β=R^q+p(β'). Then R^-p(β) has a component isotopic to β rel P_ R. Conversely, by a similar argument as that in the proof of statement (ii), we can obtain two distinct internal rays in U with the same landing point. Hence U is not a disk.

To prove the proposition, it suffices to verify the combinatorial conditions in Lemma <ref> for the branched covering g̃. Let a∈Z̃ be a periodic point of g̃. Let β⊂∖Q̃ be an open arc joining the point a to a repelling periodic point b∈Q̃ which belongs to Ω̃. Assume by contradiction that g̃^-p(β) has a component β_1 isotopic to β rel Q̃ for some integer p≥ 1. By isotopy lifting, g̃^-kp(β) has a component β_k isotopic to β rel Q̃. We adjust the arc β within its isotopic class so that β=α∪δ with α⊂∖Ω̃ and δ⊂Ω̃. This allows us to write β_k=α_k∪δ_k with α_k⊂∖g̃^-kp(Ω̃) and δ_k⊂g̃^-kp(Ω̃), where g̃^kp(α_k)=α and g̃^kp(δ_k)=δ. In particular, one endpoint of δ_k lies in ∂Ω̃ and the other one is b. Recall that {ξ_n} is a sequence of quotient maps of such that ξ_0(Q̃∩Ω̃)= P ∩ V, ξ_n(Ω̃_n)=V_n, ξ_n+1=ξ_n on Ω̃_n and ξ_n∘g̃=f∘ξ_n+1 on Ω̃_n+1, where Ω̃_n= g̃^-n(Ω̃) and V_n=(f|_V_1)^-n(V). Thus ξ_kp(δ_k) is a component of f^-kp(ξ_0(δ)), such that one endpoint of ξ_kp(δ_k) lies in ∂ V and the other is ξ_0(b). By Lemma <ref>, the diameter of ξ_kp(δ_k) tends to zero as k→∞. It follows that ξ_0(b)∈∂ V, contradicting the assumption that b∈Ω̃. Hence condition (i) holds. The check of conditions (ii) and (iii) is similar. So we omit the details.

§.§ Fibers of the semi-conjugacy

Recall that =∖Ω̃ consists of pairwise disjoint closed disks, and g̃ is holomorphic in a neighborhood of with g̃()⊂. Each component of int() contains a unique preperiodic point of g̃. Moreover, there exists a small neighborhood _a of the attracting cycles of g̃ that are contained in Ω̃ such that g̃:_a→_a is holomorphic. Recall also that the marked branched covering (g̃,Q̃) is combinatorially equivalent to a marked rational map (g,Q) by a pair of homeomorphisms ϕ_0,ϕ_1 of , which are connected by an isotopy {ϕ_t}_t∈ [0, 1] rel Q̃. Due to Proposition <ref>, the homeomorphism ϕ_0 sends the preperiodic points of g̃ in int() to the centers of some Fatou domains of g, which are disks with pairwise disjoint closures. Note that the closure of the union of these Fatou domains is invariant under g. We may specify the isotopy ϕ_t such that ϕ_0 is holomorphic in _a∪ int() with ϕ_0()=, and ϕ_t=ϕ_0 on _a∪ for t∈[0, 1].
By successively using Lemma <ref>, for every n≥0, we have an isotopy {ϕ_t}_t∈[n, n+1] rel g̃^-n(∪_a∪Q̃), such that ϕ_n∘g̃=g∘ϕ_n+1 on . Set Ω_n:=ϕ_n(Ω̃_n). Recall that in Section <ref>, we obtained a homotopy {ξ_t}_t∈[n,n+1] on for every n≥0, such that ξ_n( Ω̃_n)=V_n, ξ_n=ξ_n+1 on ∖Ω̃_n and ξ_n∘g̃=f∘ξ_n+1 on Ω̃_n+1, where Ω̃_n= g̃^-n( Ω̃) and V_n=(f|_V_1)^-n(V). Then we have the following commutative diagram. Ω_n+1[d]_g [l]_ϕ_n+1Ω̃_n+1[d]_g̃[r]^ξ_n+1 V_n+1[d]^f Ω_n [l]_ϕ_nΩ̃_n[r]^ξ_n V_n Set _n:= V_n, _n:=∖Ω_n and _a:=ϕ_0( _a). Then for every n≥0, the family of maps {h_t:=ξ_t∘ϕ_t^-1}_t∈[n, n+1] is a homotopy on such that the following conditions hold: (a) h_t(z):→ is a quotient map; (b) h_t(z)=h_n(z) for z∈_n∪ g^-n(_a)∪ g^-n(Q); (c) h_t^-1(_n)=_n; (d) h_n∘ g=f∘ h_n+1 on Ω_n+1. The sequence of maps {h_n} uniformly converges to a quotient map of . The argument is similar as that in <cit.>. By <cit.>, the limit of a sequence of quotient maps is still a quotient map. Then it suffices to show that there exist constants M>0 and ρ>1 such that dist(h_n+1(z),h_n(z))≤ Mρ^-n for every n≥1. Recall that the homotopic length of a curve is the infimum among the lengths of smooth curves homotopic to rel P with endpoints fixed under the orbifold metric; See Appendix <ref>. For any point z∈Ω(_a∪ Q), we define a curve _z:[0,1]→ V∖ P as _z(t):=h_t(z) for t∈[0,1]. Since the homotopic length of _z is continuous with respect to z and converges to zero as z→∂Ω∪∂_a∪ Q, it is bounded above by a constant M_1 for all points z∈Ω(_a∪ Q) Fix an integer n≥1 and a point z∈. If z∈_n∪ g^-n(_a)∪ g^-n(Q), then dist(h_n(z),h_n+1(z))=0 by point (b) above. If z∈Ω_n∖(g^-n(_a)∪ g^-n(Q)), then w=f^n(z)∈Ω∖(_a∪ Q). In this case, the curve β={h_t(z):t∈[n,n+1]} is a lift of _w by f^n based at h_n(z). As a result, dist(h_n(z),h_n+1(z))≤ C· L_ω[β]≤ CM_1ρ^-n by (<ref>) and Lemma <ref>. Then the proof of Proposition <ref> is complete. Let π be the limit quotient map of the sequence {h_n}, and set K_g=⋂_n>0Ω_n. By Proposition <ref>, we have π(Ω_n)⊂V_n, π(_n)=_n and π(∂_n)=∂_n for all n>0. It follows that π(K_g)⊂ E:=⋂_n≥0V_n. Since π is surjective, we obtain π(K_g)= E. Moreover, the properties of h_n also imply that π∘ g=f∘π on K_g and that π:K_g∩ F_g→ E∩ F_f is a conformal homeomorphism. Suppose that B is a component of such that f^p(∂ B)=∂ B. Due to the properties of π mentioned above, there is a unique component D of such that ∂ D⊂π^-1(∂ B)∩ K_g, and π^-1(∂ B)∩ K_g⊂ J_g is a stable set of g^p of simple type. Then by Theorem <ref>, π^-1(∂ B)∩ K_g is the boundary of a Fatou domain of g, which implies π^-1(∂ B)∩ K_g=∂ D. Since π(D)=B, it follows that π^-1(B)=D. By pullback, we obtain π^-1(_n)=_n for every n>0. Now consider an arbitrary point z∈⋂_n>0V_n. Then π^-1(z)⊂⋂_n>0Ω_n is a full and connected compact set of simple type. If z∈ F_f, then π^-1(z) is a singleton. If z∈ J_f is eventually periodic, then π^-1(z)⊂ J_f is eventually periodic under g, and thus a singleton by Lemma <ref>. Assume that z∈ J_f is wandering, i.e., f^i(z)≠f^j(z) for any i≠ j≥ 0. Then the ω-limit set ω(z) contains infinitely many points. For otherwise, since f(ω(z))⊂ω(z), the orbit of z would converge to repelling cycles, a contradiction. Thus, we may choose a point z_∞∈ω(z)∖ P and a subsequence {f^n_k(z)} such that f^n_k(z)→ z_∞ as k→∞. Let U be a disk such that z_∞∈ U and U∩ P =∅. Then f^n_k(z)∈ U for every large integer k. It follows that g^n_k(π^-1(z))⊂π^-1(U) for every large integer k. 
Since π^-1(U) is a full continuum disjoint from P_g, by Lemma <ref>, the diameters of components of g^-n(π^-1(U)) tend to zero as n→∞. So π^-1(z) is a singleton. Finally, the uniqueness of the rational map g is deduced directly from <cit.>. Then we complete the proof of Theorem <ref>. By Theorem <ref>, there is a stable set of f which induces a cluster-exact decomposition of (f,P). Moreover, the union of all complex-type components of ∖ avoids the attracting cycles of f. It then follows from Theorem <ref> that each blow-up of the induced exact sub-system f:_1→ has the carpet Julia set. § TOPOLOGY OF GROWING CONTINUA To construct invariant graphs in extremal chains, we first study their topology. Let f be a rational map with J_f≠. Suppose that K is a periodic level-(n+1) (n≥ 0) extremal chain of f with period p≥ 1, and E is the union of all periodic level-n extremal chains contained in K. By Lemma <ref>, E is an f^p-invariant continuum, and K is generated by E in the sense that K=⋃_k≥ 0E_k, where E_k is the component of f^-kp(E) containing E. Due to the inductive construction mentioned above, all results about extremal chains can be proved by using induction on levels. To improve the clarity of the proofs and ensure wider accessibility, we will adopt a more general framework for our discussions in this section. By a growing continuum of f, we mean a continuum K⊂ together with a continuum E⊂ such that ∂ E⊂ J_f, f(E)⊂ E and K=⋃_k≥ 0E_k, where E_k is the component of f^-k(E) containing E. We call E the generator of K. Let P be a finite marked set. Since E_k⊂ E_k+1, according to Corollary <ref> (2), there is an integer k_0≥ 0 such that E_k_0 is a skeleton of E_k rel P for all k>k_0. Note that f(E_k_0)⊂ E_k_0. Then K is also a growing continuum generated by E_k_0. Therefore, we may always assume that E is a skeleton of E_k for all k>0. §.§ Local connectivity of extremal chains Let f be a PCF rational map. By Theorem <ref>, the maximal Fatou chains of f are locally connected since they are stable sets. In this part, we aim to prove the local connectivity for extremal chains, or more generally, for growing continua. Let K⊂ be a growing continuum generated by E. Suppose that E is locally connected. Then K is locally connected. According to Lemma <ref>, we need to consider the components of K. It is worth noting that any component of K is contained in a unique component of E_k for every k≥ 0. A nested sequence {Ω_k} is called an end of K if Ω_k is a component of E_k and Ω_k+1⊂Ω_k for every k≥0. An end {Ω_k} is called marked if Ω_k∩ P_f≠∅ for all k≥ 0. Marked ends are finitely many. Since E_k+1 is a component of f^-1(E_k), for each component Ω_k+1 of E_k+1, there is a unique component Ω_k' of E_k such that f(∂Ω_k+1)=∂Ω_k'. Moreover, f:Ω_k+1→Ω_k' is a homeomorphism if Ω_k'∩ P_f=∅. Let {Ω_k} be an end of K. For each k≥0, let Ω_k' be the component of E_k such that f(∂Ω_k+1)=∂Ω_k'. Then Ω'_k+1⊂Ω'_k for every large integer k. There is an integer k_0≥ 0 such that, either Ω_k+1 avoids f^-1(E_k) for each k≥ k_0 and hence f(Ω_k+1)=Ω'_k, or Ω_k+1 contains a component of f^-1(E_k) for each k≥ k_0. In the former case, it is clear that Ω'_k+1⊂Ω'_k for all k≥ k_0. In the latter case, let W_k be the component of Ω_k+1 f^-1(E_k) whose boundary contains ∂Ω_k+1. Then f:W_k→Ω'_k is proper and W_k contains critical points of f. Note that there is an integer k_1≥ k_0 such that each W_k contains the same critical points of f for all k≥ k_1. Thus all Ω'_k share common critical values of f. 
This implies that Ω'_k+1⊂Ω'_k as k≥ k_1. By Proposition <ref>, we obtain a self-map f_⋆ on the collection of ends of K. This map is defined by f_⋆{Ω_k}={Ω_k'} if f(∂Ω_k+1)=∂Ω_k' for each sufficiently large integer k. The proof of Proposition <ref> shows that the image of a marked end remains marked. Hence, marked ends are eventually f_⋆-periodic. Moreover, if {Ω_k'}=f_⋆^ N{Ω_k} is not marked, then for each sufficiently large integer k, the map f^ N:Ω_k+ N→Ω'_k is conformal. There are constants M>0 and ρ>1 with the following properties. Let {Ω_k} be an end of K such that f_⋆^ N{Ω_k} is not marked for an integer N≥ 1. Then (⋂Ω_k)≤ Mρ^-N. Consequently, ⋂_k≥ 0Ω_k is a singleton if {Ω_k} is f_⋆-wandering. Recall that E is a skeleton of each E_k rel P_f. By Lemma <ref> and the fact that E_1 is locally connected, the homotopic diameters of the components of E_1 that avoid P_f are bounded above by a constant M_1. Since f_⋆^ N{Ω_k} is not marked, there exists an integer k_0≥1 such that f^ N(Ω_k)∩ P_f=∅ for every k≥ k_0. Fix any integer k>k_0. For each 0≤ i≤ k, we denote W_i as the component of E_i such that ∂ W_i=f^k-i(∂Ω_k). Let n_k≥ 1 be the minimal integer with W_n_k∩ P_f=∅, and let D_1 be the component of E_1 that contains W_n_k. We claim that D_1∩ P_f=∅. If n_k=1, then D_1=W_n_k and the claim is true. If n_k>1, we have W_n_k-1∩ P_f≠∅ by the choice of n_k. Let D denote the component of ∖ E containing W_n_k-1. Since E is a skeleton of E_n_k-1, it follows that D∩ P_f=W_n_k-1∩ P_f. Thus, there exists an annulus A⊂ D∖ P_f bounded by ∂ D and a Jordan curve in W_n_k-1. Let A_1 be the component of f^-1(A) that contains ∂ W_n_k. Then A_1∩ P_f=∅ and A_1∪ W_n_k=D_1. The claim is proved. By this claim, the homotopic diameter of D_1 is less than M_1. Due to the choices of k and n_k, the map f^k-n_k:Ω_k→ W_n_k is conformal and k-n_k≥ N. Thus, this lemma follows directly from Lemma <ref>. Given any component D of K, let {Ω_k(D)} be the end of K such that D⊂Ω_k(D) for all k≥ 0. By Lemma <ref>, the end {Ω_k(D)} is eventually f_⋆-periodic and marked. First assume that {Ω_k}={Ω_k(D)} is periodic under f_⋆. Without loss of generality, we may assume that the period is one, and that f(∂Ω_k)=∂Ω_k-1 for every k≥1. Let _0⊂Ω_0 be a Jordan curve separating ∂Ω_0 from P_f∩Ω_0. Then there is a unique component γ_1 of f^-1(γ_0) contained in Ω_1 which separates ∂Ω_1 from P_f∩Ω_1=P_f∩Ω_0. Thus there is a homeomorphism θ_0: → isotopic to id rel P_f, such that θ_0(_0)=_1. By lifting (Lemma <ref>), we obtain a sequence of homeomorphisms {θ_k} of isotopic to id rel P_f, such that f∘θ_k+1=θ_k∘ f on . Set ϕ_k=θ_k∘⋯∘θ_0. Then _k+1=ϕ_k(_0). By Lemma <ref>, {ϕ_k} uniformly converges to a quotient map φ of . Denote =φ(_0). Then f()= and is locally connected. According to Lemma <ref>, the Hausdorff distance between ∂Ω_k and _k converges to zero. Consequently, ∂Ω_k→ as k→∞. Thus ⊂ K. Then D lies in a component of ∖. We claim that D is just a component of ∖γ. If it is not true, there exists a point z∈∂ D not in γ and a neighborhood W of z disjoint from ∂Ω_k for every large integer k. Since W∩ D≠∅ , it follows that W⊂Ω_k for every k≥ 0. In particular, W is disjoint from every E_k, and hence avoids K=⋃_k≥0 E_k. Thus W⊂ D, which is a contradiction. This claim implies that ∂ D is locally connected since is locally connected. Now suppose that {Ω̃_k}={Ω_k(D)} is strictly eventually periodic under f_⋆. Let q>0 be the smallest integer such that {Ω_k}=f_⋆^q({Ω̃_k}) is periodic. 
Let γ̃_q be the component of f^-q(_0) contained in Ω̃_q which separates ∂Ω̃_q from Ω̃_q∩ f^-q(P_f). For all k≥ 0, we define a homeomorphism ϕ̃_k:=θ_q+k∘⋯∘θ_q. Then
* f^q∘ϕ̃_k(z)=ϕ_k∘ f^q(z) for every z∈;
* _q+k+1:=ϕ̃_k(_q) is contained in Ω̃_q+k+1 and isotopic to _q rel f^-q(P_f).
By a similar argument as in the periodic case, we can prove that the map ϕ̃_k uniformly converges to a quotient map φ̃, and D is a component of ∖φ̃(_q). Thus ∂ D is locally connected.

It remains to show that the diameters of the components of ∖ K tend to zero. Given any ϵ>0, there are only finitely many ends {Ω_k} with diam(⋂_k≥0Ω_k)≥ϵ by Lemma <ref>. Therefore, we just need to consider the components D of ∖ K for which {Ω_k(D)} are such ends. As shown above, D is a complementary component of a curve _ D=lim_k→∞∂Ω_k(D). Since there are finitely many curves _ D, and only finitely many components of ∖_ D have diameters larger than ϵ, we complete the proof of the lemma.

Every extremal chain of a PCF rational map is locally connected.

Every level-0 extremal chain of a PCF rational map f is clearly locally connected. Inductively, for n≥0, assume that level-n extremal chains are locally connected. If K is a periodic level-(n+1) extremal chain, then it is locally connected by Lemma <ref> and the induction. Now suppose that K' is a strictly preperiodic level-(n+1) extremal chain such that f^q(K')=K is periodic of period p. Let E be the union of all periodic level-n extremal chains contained in K, and let E_k denote the component of f^-pk(E) that contains E for every k≥ 0. We may assume that E is a skeleton of every E_k rel P_f. Then for each k≥ 0, there is a unique component E_k' of f^-q(E_k) contained in K' such that E_k'⊂ E_k+1' and K'=⋃_k≥ 0 E_k'. The ends for K' can be defined similarly to the periodic case. If {Ω_k'} is an end of K', then there exists a unique end {Ω_k} of K such that f^q(∂Ω_k')=∂Ω_k for every large integer k. Therefore, using the same argument as that in the proof of Lemma <ref>, we can establish the local connectivity of K'. The details are omitted.

§.§ Growing curves

Let f be a PCF rational map. Let K be a growing continuum generated by an f-invariant continuum E. As before, E_k denotes the component of f^-k(E) containing E and E is assumed to be a skeleton of E_k (rel P_f) for every k≥0. A curve : [0,1]→ K is called a growing curve if for any small number ϵ>0, there is an integer k≥0 such that [0, 1-ϵ]⊂ E_k. The point (1) is called the terminal of . By definition, any curve in E_k is growing, including the trivial ones. Here a curve is trivial if its image is a singleton. Moreover, the image or a lift in K of a growing curve under f is also a growing curve. Growing curves will be crucial in constructing invariant graphs on a maximal Fatou chain in the next section. To this end, we aim to establish their existence through the following lemma.

Suppose that E is locally connected. Then the following statements hold.
* Any point of K is the terminal of a growing curve in K;
* For any two points a and b in distinct components of ∖ K, there exist two growing curves _±⊂ K with the same terminal, such that E∪_+∪_- separates a from b.

Let _1,_2:[0,1]→ be two curves with _1(1)=_2(0). The concatenation _1·_2 is a curve parameterized by
_1·_2(t)=_1(2t) if t∈[0,1/2], and _1·_2(t)=_2(2t-1) if t∈[1/2,1].
If _1,…,_n can be successively concatenated, their concatenation is parameterized by _1·_2⋯_n(t):=_1·(_2·(⋯(_n-1·_n)))(t), t∈[0,1].

Suppose that E is locally connected.
Then there exists a family of growing curves in K such that any point of K is the terminal of an element in , and that is sequentially compact under the uniform convergence, i.e., any infinite sequence in has a convergence subsequence whose limit is still in . Since E_1 is locally connected, each point w∈ E_1 can be joint to E by a curve β_w⊂ E_1 with the following conditions: if w∈ E, then β_w≡ w; otherwise, it holds that β_w(0)∈ E and β_w(0,1]∩ E=∅. By Lemma <ref>, we can require that _0={β_w:w∈ E_1} is equicontinuous. Thus, the homotopic diameters of curves in _0 are bounded above by a constant. For any integer k≥1 and any point z∈ E_k+1, set w:=f^k(z)∈ E_1. If w∈ E_0, we define β_z:≡ z. Otherwise, since E is a skeleton of E_k rel P_f, we have β_w(0,1]∩ P_f=∅. This implies that β_w has a unique lift by f^k based at z, which is defined as β_z. Since _0 is equicontinuous, the collection _k:={β_z,z∈ E_k+1} is also equicontinuous. According to Lemma <ref>, each curve in _k has diameter less than M/ρ^k for some constants M>0 and ρ>1. Now, for every k≥1 and any point z∈ E_k+1, we obtain a growing curve _z:=β_0·β_1··β_k which joins E to z such that β_i∈_i for every i=0,…,k. By its parameterization given in (<ref>), it follows that _z[0,1-1/2^k]⊂ E_k, for every k≥1. We claim that the family of curves _∞:={_z: z∈⋃_k≥1 E_k} is equicontinuous. Given any ϵ>0, there exists an integer N>0 such that M/(ρ^ N-1(ρ-1))<ϵ. Moreover, for every k≥0, there exists _k>0 such that |β(t_1)-β(t_2)|<ϵ if |t_1-t_2|<_k for any curve β∈_k. Set :=min{_0,…,_ N}. Let =β_0·β_1⋯·β_k be any element in _∞. If k≤ N, according to the parameterization of , we have |(t_1)-(t_2)|<2ϵ as |t_1-t_2|</2^ N+1. In the case of k> N, the diameter of [1-1/2^ N,1]=β_ N⋯β_k is less than M/ρ^ N+⋯+M/ρ^k<M/(ρ^ N-1(ρ-1))<ϵ. So |(t_1)-(t_2)|<ϵ when t_1,t_2≥ 1-1/2^ N. If t_1,t_2∈ [0,1-1/2^ N+1], then formula (<ref>) holds. So the claim is proved. Let be the union of _∞ and the limit of every uniformly convergent sequence in _∞. Then is also equicontinuous. By Ascoli-Arzela Theorem, is a normal family. If is the limit of a uniformly convergent sequence in , then there is a sequence of curves in _∞ which also uniformly converges to . Thus is sequentially compact. Due to (<ref>), for any γ∈Γ, we have [0, 1-1/2^k]⊂ E_k for every k≥0. Hence consists of growing curves in K. Fix a point z∈ K. If z∈ E_k for some k≥0, a curve in _∞ joins E to z. Otherwise, there exists a point z_k∈ E_k for every k such that z_k→ z as k→∞. For each k, let _k be a curve in _∞ joining E to z_k. By taking a subsequence if necessary, the curve _k uniformly converges to a curve ∈, which joins E to z. Statement (1) follows directly from Proposition <ref>. (2) If a and b belong to distinct components of E_m for some m≥ 0, we can choose the required curves _± in E_m since E_m is locally connected. So we assume that there exists an end {Ω_k} of K such that a,b∈Ω_k for every k≥0. Let U_a be the components of ∖ K containing a. Then U_a is contained in each Ω_k. Since K is locally connected by Lemma <ref>, it follows that ∂ U_a is locally connected. Let η:/→∂ U_a be a parameterization of ∂ U_a. A curve with endpoints in E is said to split {a,b} (rel E) if E contains a curve α with the same endpoints as those of such that ·α^-1 is not contractible in ∖{a,b}. Note that if splits {a,b}, then ·α^-1 is not contractible in ∖{a,b} for any curve α⊂ E with the same endpoints as those of . 
According to Proposition <ref>, for any t∈/, there exists a growing curve _t∈ with _t(0)∈ E and _t(1)=η(t)∈∂ U_a. Then for every t∈/, we have two curves (see Figure <ref>) ℓ^-_t:=_0·η[0,t]·_t^-1 and ℓ^+_t:=_t·η[t,1]·_0^-1; Since ℓ_t^-·ℓ_t^+=_0·η·_0^-1 which splits {a,b}, at least one of ℓ_t^+ and ℓ_t^- splits {a,b}. Note that ℓ_1^-=_0·η·_0^-1, which splits {a,b}. Let t_* denote the infimum of t∈[0,1] such that ℓ_t^- splits {a,b}. Then there exists a sequence of decreasing numbers {t_n}⊂ [t_*,1] such that t_n→ t_* and ℓ_t_n^- splits {a,b}. Let {s_n}⊂ [0,t_*] be a sequence of increasing numbers converging to t_*. It follows that each ℓ_s_n^+ splits {a,b}. Here t_n or s_n are possibly constants for large n. We claim that the curve _s_n·η[s_n,t_n]·_t_n^-1 splits {a,b} for each n≥1; See Figure <ref>. For otherwise, since ℓ_t_n^-=_0·η[0,t_n]·_t_n^-1=(_0·η[0,s_n]·_s_n^-1)·(_s_n·η[s_n,t_n]·_t_n^-1)=ℓ_s_n^-·(_s_n·η[s_n,t_n]·_t_n^-1) splits {a,b}, it follows that ℓ_s_n^- splits {a,b}, a contradiction to the choice of t_*. Since {_s_n} and {_t_n} are selected from a sequentially compact family of growing curves by Proposition <ref>, we may assume that {_s_n} and {_t_n} uniformly converge to growing curves _- and _+ respectively. As a consequence, both _± join E to η(t_*), and the curves _s_n·η[s_n,t_*]·_-^-1 and _+·η[t_*,t_n]·_t_n^-1 do not split {a,b} for each large integer n. Moreover, since _s_n·η[s_n,t_n]·_t_n^-1=(_s_n·η[s_n,t_*]·_-^-1)·(_-·_+^-1)·(_+·η[t_*,t_n]·_t_n^-1) splits {a,b} by the claim above, it follows that _-·_+^-1 splits {a,b}, and the lemma is proved. §.§ Accesses within a growing continuum In order to construct invariant graphs within extremal chains, we need a sufficient number of preperiodic growing arcs. These arcs will be constructed in this and the next subsections. Let (f,P) be a marked rational map. Suppose that K is a growing continuum generated by an f-invariant and locally connected continuum E. We still assume that E is a skeleton (rel P) of all E_k, where E_k denotes the component of f^-k(E) containing E. Let P_0=P∖ E. Then P_0∩ E_k=∅ for every k≥0 since E is a skeleton of E_k. Two growing curves α_1 and α_2 in K with a common terminal z are called equivalent if there is an integer k≥0 and a curve ⊂ E_k that joins α_1(0) to α_2(0), such that the closed curve :=α_1^-1··α_2 is contractible in ∖ P_0, i.e., there is a continuous map H: ℝ/ℤ× [0,1]→ such that the family of curves {H_s=H(·, s), s∈[0, 1]} satisfies H_0=γ, H_1≡{z}, H_s(0)=zH_s(0, 1)∩ P_0=∅, ∀ s∈(0, 1). This is clearly an equivalence relation. Note that possibly passes through some points in P∩ E. For each k≥0, any two growing curves in E_k with a common terminal are equivalent. A growing curve α is called infinitely growing if it is not equivalent to any curve (including trivial ones) in E_k for every k≥0. By definition, infinitely growing curves can not be trivial. In Figure <ref>, the curve α_3 is infinitely growing to z_1, while α_0 is not. An access to z is an equivalence class of all infinitely growing curves to z. By the interior of a curve :[0, 1]→, we mean the set (0, 1). The sub-curve |_[t_1,t_2] of means a curve whose image equal to [t_1,t_2]. An (open) arc is called a crosscut of a domain U⊂ if ⊂U with only the two endpoints in ∂ U. Recall that two curves _0,_1:[0,1]→ are homotopic rel P with endpoints fixed if there is a continuous map H:[0,1]× [0,1]→ such that H_0=_0,H_1=_1 and each curve H_s,s∈[0,1], has the same endpoints as _0 with its interior disjoint from P. 
Let α,α'⊂ K be two growing curves with a common terminal z. * The curves α and α|_[t,1] are equivalent for any t∈(0,1). * If α(t, 1)∩α'(t, 1)≠∅ for any t∈(0, 1), then α and α' are equivalent. * If α is infinitely growing, then for every large integer k there exists a number t_k∈ (0,1) such that α(t_k)∈ E_k and α(t_k,1)∩ E_k=∅. Moreover, the curve α|_[t_k, 1] contains an arc β_k that is homotopic to α|_[t_k, 1] rel P with endpoints fixed. In particular, β_k lies in the same access to z as α. * Suppose that α and α' belong to the same access to z, with their interiors disjoint from P. Then there is an integer m≥0 and a continuous family of curves {α_s}_s∈ [0, 1] such that α_0=α,α_1=α' and each α_s joins E_m to z with its interior disjoint from P. We fix a disk W such that z∈ W and (W∖{z})∩ P=∅. (1) The curve α|_[0,t]⊂ E_k for some k and α^-1·α|_[0,t]·α|_[t,1] is contractible. (2) There are some t, t'∈(0, 1) such that α(t)=α'(t') and α|_[t, 1],α'|_[t', 1] lie in W. It follows that α|_[t, 1] is equivalent to α|_[t', 1], and thus α and α' are equivalent by statement (1). (3) To prove the existence of such t_k's, suppose to the contrary that α(s_n)∈ E_k for a sequence {s_n}⊂(0,1) that converges to 1 and a certain k≥0. Then z∈ E_k. Since E_k is locally arcwise connected by Lemma <ref>, there is a curve γ⊂ E_k∩ W (possibly trivial) joining a certain α(s_n) to z. Thus γ^-1·α|_[s_n, 1] is contractible, which contradicts that α is infinitely growing. By this statement, we can find k_0>0 such that α|_[t_k,1]⊂ W and z∉α[t_k,1) for each k>k_0. It follows that α|_[t_k,1] contains an arc β_k with endpoints α(t_k) and z. Then β_k⊂ W and its interior avoids P. Hence β_k is homotopic to α|_[t_k,1] rel P with endpoints fixed. (4) If α' is a sub-curve of α, the conclusion is obviously true. So we only need to prove the statement for a pair of sub-curves α|_[t,1] and α'|_[t',1] of α and α' respectively. If α(t,1)∩α'(t,1)≠∅ for any t∈(0,1), then there are t,t'∈(0,1) such that α(t)=α'(t') and α|_[t,1],α'|_[t',1]⊂ W. Since the interiors of α and α' avoid P, it follows that α|_[t,1] and α'|_[t',1] are homotopic rel P with endpoints fixed. Hence statement (4) holds in this case. Otherwise, by statement (3) and in place of α,α' with their sub-curves, we can assume that α and α' are arcs with disjoint interiors such that α(0),α'(0)∈ E and α(0,1),α'(0,1)⊂∖ E. Let D and D' be the components of ∖ E which contain α(0,1) and α'(0,1), respectively. We claim that D=D'. If z∉E , the claim is obvious. Assume z∈ E. Since α and α' are infinitely growing, each component of D∖α and D'∖α' contains marked points. This implies D=D' because α and α' belong to the same access. The claim is proved. Since α and α' are arcs with disjoint interiors and belong to the same access, there is a simply connected domain D_* of D∖ (α∪α') such that D_*∩ P=∅ and α,α'⊂∂ D_*. Then the desired family of curves {α_s} can be easily chosen within D_*. Suppose that G is a locally connected skeleton of E. Let α_0,α_1⊂ K be two infinitely growing curves in the same access to z, with their initial points on G and their interiors disjoint from P. Then there exists a continuous family of curves {α_s}_s∈[0,1] joining G to z such that the interior of each α_s is disjoint from P. Let {β_s}_s∈[0, 1] be the family of curves derived from Proposition <ref> (4) such that α_0=β_0 and α_1=β_1. Then the curve defined by δ(s):=β_s(0) lies in a certain E_m. 
We will construct a continuous family of curves {η_s}_s∈[0,1] such that η_s(0)=(s), η_s(1)∈ G, and η_s≡η_s(0) if η_s(0)∈ G; η_s[0,1) avoids P otherwise. Then Proposition <ref> holds by taking α_s:=η_s^-1·β_s,s∈[0,1]. Set X={s∈[0,1]:(s)∈ G}. Since (0),(1)∈ G, each component of [0,1]∖ X is an open interval. If s∈ X, we define η_s≡(s). Let (s_1,s_2) be a component of [0,1]∖ X. Then there exists a component D of ∖ G such that (s_1),(s_2)∈∂ D and (s_1,s_2)⊂ D. Since ⊂ E_m and G is a skeleton of E_m, it follows that (s_1,s_2) avoids P and does not separate P. As a consequence, there is a disk D' compactly contained in D such that P∩ D⊂ D' and (s_1,s_2) is contained in the annulus D∖D'. Thus, we can choose a continuous family of curves {η_s}_s∈[s_1,s_2] such that η_s(0)=(s), η_s(1)⊂∂ D⊂ G and η_s(0,1)⊂ D∖D' for any s∈(s_1,s_2), and that η_s_i≡(s_i) for i=1,2. Then the construction of {η_s}_s∈[0,1] is complete. One main result of this subsection is the finiteness of accesses. For any z∈ K, there are finitely many accesses to z. Let be a finite collection of infinitely growing curves in K, which lie in pairwise distinct accesses to z. It is enough to show that #≤ (# P)^2. By Proposition <ref> (1)–(3), we may assume that all elements in are arcs with pairwise disjoint interiors, such that α(0, 1)⊂ D_α and α(0)∈∂ D_α for every α∈, where D_α is a component of ∖ E_m and m is a large integer independent on α. Note that every component D_α must intersect P. So there are at most # P such components. Suppose that a certain D_α contains the interiors of k arcs in . Then these arcs divide D_α into k or k+1 simply connected domains, each of which intersects P. It follows that k≤# P. Therefore, we have #Δ≤ (# P)^2. In the following content, we will construct numerous preperiodic growing arcs in K based on the above lemma. We first prove a lifting property for accesses. Let α⊂ K be an infinitely growing curve with terminal z. Then * the curve f∘α is also infinitely growing with terminal f(z); * if β and f∘α lie in the same access to f(z), then there is a curve β̃ in the same access as α such that f∘β̃=β. (1) To the contrary, suppose that f∘α is not infinitely growing. Then z must be contained in some E_k_0. By Proposition <ref> (3), for each large integer k, there is a number t_k∈ (0,1) such that α(t_k)∈ E_k and α(t_k,1)∩ E_k=∅. It follows that f∘α(t_k)∈ E_k-1 and f∘α(t_k,1)⊂ D_k-1 for a component D_k-1 of ∖ E_k-1. Note that the diameter of f∘α(t_k,1) tends to 0 as k→∞. Then there exists an arc ⊂ f∘α([t_m,1]) that is homotopic to f∘α|_[t_m,1] rel P with endpoints fixed for a large integer m. In particular, γ is a crosscut of D_m-1. By homotopy lift, we obtain a lift of by f that is homotopic to α|_[t_m,1] rel P with endpoints fixed. Thus γ̃ is infinitely growing. On the other hand, since f∘α is assumed to be not infinitely growing, one of the two components of D_m-1∖, denoted by D_*, avoids P. Thus, there is a component D̃_* of f^-1(D_*) with γ̃⊂∂D̃_*. As D̃_*∩ P=∅ and ∂D̃_*∖γ̃⊂ E_m, γ̃ is not infinitely growing, a contradiction. (2) By statement (1), both β and f∘α are infinitely growing. Then using Proposition <ref> (3), we can find numbers t_0, t_1∈(0, 1) such that f∘α(t_0, 1) and β(t_1, 1) are disjoint from P. Due to Proposition <ref> (4), there is a continuous family of curves {_s}_s∈ [0, 1] joining some E_m to z such that _0=f∘α|_[t_0, 1], _1=β|_[t_1, 1], and the interior of each _s is disjoint from P. For any t∈(0, 1), the curve {_s(t):s∈[0,1]} has a unique lift based at the point α|_[t_0,1](t). 
Thus, by the continuity of f, we obtain a continuous family of lifts {_s} of {_s} such that each _s joins E_m+1 to z with its interior avoiding P. This implies that _0=α|_[t_0,1] and _1 belong to the same access to z. Since f∘_1=β|_[t_1,1], there exists a growing curve β̃ such that f(β̃)=β and β̃|_[t_1,1]=_1. Then α and β̃ lie in the same access by Proposition <ref> (1). Suppose that E⊂ J_f and that G is a locally connected and f-invariant continuum as a skeleton of E rel P. Let α⊂ K be an infinitely growing curve joining G to a preperiodic point z. Then there is a growing arc β in K such that * the arc β joins G to z and lies in the same access as α; * for any t∈(0,1), there exists an integer n_t>0 such that f^n_t(β[0, t])⊂ G; * there exist two integers q≥0 and p≥1 such that f^q+p(β)⊂ f^q(β)∪ G, and that the growing curves f^i(β),i=0,…, q+p-1, lie in pairwise distinct accesses. By Lemma <ref> (1), the curves f^i(α),i≥0, are all infinitely growing, with initial points in G. According to Lemma <ref>, there exist minimal integers q≥ 0 and p≥ 1 such that f^q+p(α) and f^q(α) lie in the same access to w=f^q(z). Set α_0:=f^p+q(α) and α_1:=f^q(α). Then f^p(α_1)=α_0. By Lemma <ref> (2) we may assume the interior of α_0 is disjoint from P. Then α_1 joins G_p to z and its interior is also disjoint from P. For simplicity, set G=G_p and E=E_p. Due to Proposition <ref>, we have a continuous family of curves {α_s}_s∈[0,1] joining G to w such that α_s(0,1)∩ P=∅ for all s∈[0,1]. Define a curve _0:[0,1]→ G by _0(s):=α_s(0). As shown in the proof of Lemma <ref>, there is a continuous family of curves {α_s+1}_s∈[0,1] joining G_p to w such that f^p∘α_s+1=α_s. Thus α_1 and α_2 lie in the same access to w, and we obtain a curve _1:[0,1]→ G_p defined by _1(s):=α_s+1(0) such that f^p∘_1=_0. Inductively, for every k≥1, there is a curve _k⊂ G_pk and a growing curve α_k such that * f^p∘_k+1=_k and _k(1)=_k+1(0); * α_k(0)=_k(0),α_k(1)=w and f^p∘α_k+1=α_k; * α_k lies in the same access as α_0. For every m≥1, define a growing curve ℓ_m:=_0⋯_m-1·α_m. By Proposition <ref> (1) and point (3) above, the curves ℓ_m and α_0 lie in the same access to w for every m≥1. Due to Lemma <ref>, the diameters of _k and α_k exponentially decrease to 0. Then α_k→ w as k→∞, and ℓ_m uniformly converge to a growing curve β_q+p⊂ K with terminal w as m→∞. Clearly f^p(β_q+p)⊂β_q+p∪ G, and the curves β_q+p and α_0 lie in the same access. Successively using Lemma <ref>, for each i=1,…,q+p, there is a curve β_q+p-i joining G_i to f^q+p-i(z) such that f^i(β_q+p-i)=β_q+p and that β_q+p-i and α_q+p-i lie in the same access to f^(q+p-i)(z). By replacing G with G_q+p, the curve β_0 satisfies all requirements of the proposition, except for the possibility of not being an arc. To complete the proof, it is enough to find an arc β⊂β_0 joining G to z such that f^q+p(β)⊂ f^q(β)∪ G. Without loss of generality, we can assume that q=0. Take two small disks D_1 and D_2 containing z such that D_1⊂ D_2 and g=f^p: D_1→ D_2 is a homeomorphism. Let Y_i be the closure of component of D_i∩β_0 containing z for i=1,2. Clearly Y_1⊂ Y_2. Let ℓ⊂ D_2 Y_2 be an open arc joining z to a point in ∂ D_2β_0. Then for each i, the curves ∂ D_i, Y_i and ℓ bound a simply connected domain Ω_i with locally connected boundary such that Ω_1⊂Ω_2; See Figure <ref>. Let η_i=Y_i∩∂Ω_i be the curve joining z to some point z_i∈∂ D_i. Then η_1 is the closure of a component of η_2{z_1}. 
Since β_0 is locally g-invariant near z, the map g sends Y_1, η_1 and z_1 homeomorphically onto Y_2, η_2 and z_2, respectively. We claim that there is a unique arc λ_i⊂η_i joining z and z_i for i=1, 2. The existence of such an arc is due to the local connectivity of η_i. The curve (∂Ω_iη_i)∪λ_i bounds a disk W_i containing Ω_i. Clearly η_i⊂W_i. Suppose λ_i' is another such arc. Then ∂ W'_i⊂W_i and ∂ W_i⊂W'_i. Thus W_i=W_i', and this implies λ_i=λ_i'. Note that g(λ_1)⊂η_2 is an arc joining z and z_2. By the uniqueness of λ_1 and λ_2, we have that g(λ_1)=λ_2 and λ_1 is the subarc of λ_2 from z to z_1. Choose a large integer N such that G_ N contains λ_2∖λ_1, and define β:=λ_1. Then β⊂β_0 is an arc satisfying f^p(β)⊂β∪ G^ N. The proof is completed by replacing G with G_ N. §.§ Links between growing continua In the last subsection, we prove that if z∈ K is a preperiodic point, then there is a preperiodic growing arc within any access to z. In this final part of Section <ref>, we aim to find abundant preperiodic points as terminals of growing curves. Let K_± be growing continua generated by f-invariant and locally connected continua E_± respectively, such that E_-∩ E_+=∅. This implies that E_-,k∩ E_+,k'=∅ for any k, k'≥ 0, where E_±, k are the components of f^-k(E_±) which contain E_±, respectively. We still assume that E_± are skeletons of E_±,k (rel P) for every k≥0. A link between K_- and K_+ is a curve with (0)∈ E_-,k and (1)∈ E_+,k for some k≥0, such that one of the following two cases occurs: * is a growing curve in K_- or K_+ (one-side link), or * =α_-·α_+^-1, where α_± are growing curves in K_± respectively, such that their common terminal is disjoint from both P and any E_±,m for m≥0 (two-side link). The unique terminal z of the growing curves in is called the infinity-point of the link . By definition, #^-1(z)=1 and it holds for a two-side link that α_+∩α_-={z}. Moreover, a link is one-side if and only if the infinity-point is contained in a certain E_±, k, and if and only if the infinity-point is an endpoint of . The left picture in Figure <ref> illustrates two kinds of links: the curve ' is a one-side link; whereas is a two-side link. Set P_0=P∖ (E_+∪ E_-). Then P_0 is disjoint from E_±,m for every m≥0 since E_± are skeletons of E_±,m, respectively. Two links _1 and _2 between K_± are said to be equivalent if there are two curves _±⊂ E_±,k for some k, such that _- joins _1(0),_2(0)∈ E_-,k, _+ joins _1(1),_2(1)∈ E_+,k, and the closed curve _-·_2·_+^-1·_1^-1 is contractible in ∖ P_0. This is also an equivalence relation. Moreover, the link-equivalence is closely related to the access defined in the last subsection as follows. (i) If ⊂ K_- is a one-side link between K_±, then it must be an infinitely growing curve in K_-, since any E_-,k does not contain the infinity-point of . Moreover, any growing curve in the same access as is a link, and equivalent to as links. But the opposite conclusion is false, because two equivalent one-side links may have distinct terminals. (ii) If =α_-·α_+^-1 is a two-side link between K_±, then both α_± are infinitely growing. Moreover, if β_±⊂ K_± are growing curves in the same accesses as α_± respectively, then ':=β_-·β_+^-1 is a two-side link equivalent to . Corresponding to Proposition <ref>, we have the following result for links. Let be a link between K_±. Then the following statements hold. * Any sub-curve of joining E_±,k for an integer k is a link equivalent to . 
* For every large integer k, there are two numbers t_±,k∈[0,1] such that γ(t_±, k)∈ E_±, k respectively and γ(t_-,k,t_+,k) is disjoint from E_-,k∪ E_+,k. Moreover γ[t_-,k, t_+,k] contains an arc β_k homotopic to γ|_[t_-,k, t_+,k] rel P with endpoints fixed. In particular, β_k is a link between K_± that is equivalent to γ and has the same infinity-point as γ. * Suppose that γ and γ' are equivalent links between K_±, with their interiors disjoint from P. Then there is an integer m≥0 and a continuous family of curves {γ_s}_s∈ [0, 1] such that γ_0=γ, γ_1=γ', and each γ_s joins E_-,m to E_+,m with its interior disjoint from P. According to the relationship between link-equivalence and access stated before this proposition, statements (1)–(2) follow directly from Proposition <ref> (1)–(3). To prove statement (3), suppose first that the infinity-points of γ and γ' coincide. Then γ and γ' are either both one-side links in one of K_±, or both two-side links. In this case, statement (3) is an immediate consequence of Proposition <ref> (4). If the infinity-points of γ and γ' are distinct, by statements (1),(2), we may assume that γ and γ' are disjoint arcs serving as crosscuts of the unique annular component A of ℂ̂∖(E_-∪ E_+). Since γ and γ' are equivalent, there is a simply connected component D_* of A∖ (γ∪γ') such that γ,γ'⊂∂ D_* and D_*∩ P=∅. The required curves {γ_s} can be chosen within D_*. Based on this proposition, we can prove our desired result. Suppose that K_±⊂ J_f and γ is a link between K_±. If the infinity-point of γ is wandering, then there exists a curve ℓ=β_-·β_+^-1 such that * β_± are growing curves in K_± respectively, and their common terminal is preperiodic; * there exists a sequence of curves {ℓ_k} such that each ℓ_k is homotopic to γ rel P_0 with endpoints fixed and ℓ_k→ℓ as k→∞. Note that the curve ℓ is not necessarily a link between K_± because the common terminal of β_± may be a marked point. We first claim that the links between K_± belong to finitely many equivalence classes. Let Σ be a finite collection of links between K_± in pairwise distinct equivalence classes. To prove the claim, it suffices to show that #Σ≤ (#P)^6. By Proposition <ref>, we may assume ∙ each curve in Σ is an arc that serves as a crosscut of some component of ℂ̂∖ (E_-,m_0∪ E_+, m_0); ∙ if two arcs in Σ have distinct infinity-points, then they are disjoint. Let Z denote the set of infinity-points of links in Σ. Decompose Σ by Σ=⋃_z∈ ZΣ_z, where Σ_z is the collection of links in Σ with the infinity-point z. Pick a representative element in each Σ_z and denote their collection by Σ_1. Then #Σ_1=# Z and the links in Σ_1 are disjoint. By the same argument as that in the proof of Lemma <ref>, we have #Σ_1≤ (#P)^2. Fix z∈ Z. By the relationships (i),(ii) between link-equivalence and access as stated before Proposition <ref>, it follows from Lemma <ref> that #Σ_z≤ (#P)^4. Therefore, #Σ=∑_z∈ Z#Σ_z≤ # Z·(#P)^4≤ (#P)^2·(#P)^4=(# P)^6. The claim is proved. Since the infinity-point z of γ is wandering, it cannot be iterated into P. Thus for each i≥0, the curve f^i(γ) is a link between K_±. By the claim above, there are integers q≥ 0 and p≥ 1 such that f^q(γ) and f^q+p(γ) are equivalent. Set γ_0:=f^q+p(γ) and γ_1:=f^q(γ). Due to Proposition <ref> (1), by taking sub-curves if necessary, we may assume that the interiors of γ_0 and γ_1 are disjoint from P. Then by Proposition <ref> (3), there is a continuous family {γ_s}_s∈[0, 1] of curves joining E_±, k_0, with their interiors disjoint from P. Define two curves δ_±,0 by δ_-,0(s):=γ_s(0) and δ_+,0(s):=γ_s(1), s∈[0,1]. Then δ_±,0⊂ E_±, k_0, respectively.
Since f^p(γ_1)=γ_0, for any t∈ (0, 1), the curve {γ_s(t):s∈[0,1]} has a unique lift by f^p based at γ_1(t), which is denoted by {γ_s+1(t):s∈[0,1]}. Therefore, we obtain a continuous family of curves {γ_s+1}_s∈[0, 1] such that f^p∘γ_s+1=γ_s. As a consequence, γ_2 is a link between K_± and equivalent to γ_1. Define two curves δ_±,1 by δ_-,1(s):=γ_s+1(0) and δ_+,1(s):=γ_s+1(1), s∈[0,1]. Then δ_±,1⊂ E_±, k_0+p and f^p(δ_±, 1)=δ_±, 0, respectively. Inductively using the argument above, for each k≥1, we obtain * two equivalent links γ_k and γ_k+1 between K_± such that f^p(γ_k+1)=γ_k; * a curve δ_-, k⊂ E_-, k_0+kp joining γ_k(0) to γ_k+1(0) such that f^p(δ_-,k)=δ_-,k-1; and * a curve δ_+, k⊂ E_+, k_0+kp joining γ_k(1) to γ_k+1(1) such that f^p(δ_+,k)=δ_+,k-1. Without loss of generality, we may assume that q=0. For each m≥1, let β_-,m and β_+,m denote the concatenations of {δ_-,k}_k=1^m and {δ_+,k}_k=1^m, respectively. By Lemma <ref>, the diameters of γ_k and δ_±,k exponentially decrease to 0. It follows that γ_k converges to a point x with f^p(x)=x, and that β_±,m uniformly converge to growing curves β_± in K_±, respectively, such that β_± have the common terminal x. For each m≥1, define ℓ_m:=β_-,m·γ_m+1·β_+,m^-1. Then ℓ_m is homotopic to γ_1 rel P_0 with endpoints fixed. Obviously ℓ_m converges to ℓ:=β_-·β_+^-1 as m→∞. Finally, let K be a growing continuum generated by an f-invariant and locally connected continuum E. Similar to the notion of links between K_±, we can define self-links of K. A self-link of K is a curve γ⊂ K with γ(0),γ(1)∈ E_k for some k≥0 such that one of the following two cases occurs: * γ is an infinitely growing curve in K (one-side self-link), or * γ=α_-·α_+^-1, where α_± are infinitely growing curves in distinct accesses such that their common terminal avoids both P and every E_k for k≥ 0 (two-side self-link). The unique terminal of the growing curves in γ is called the infinity-point of the self-link γ; See the right picture of Figure <ref>. Let P_0=P∖ E. Two self-links γ_1 and γ_2 are called equivalent if there are two curves δ_±⊂ E_k for some k, such that δ_- joins γ_1(0) to γ_2(0), δ_+ joins γ_1(1) to γ_2(1), and the closed curve δ_-·γ_2·δ_+^-1·γ_1^-1 is contractible in ℂ̂∖ P_0. Let γ be a self-link of K and let z be the infinity-point of γ. It is worth noting that f∘γ is still a self-link provided that f(z)∉ P_0. Indeed, if γ is a one-side self-link, this result holds by Lemma <ref> (1). In the case that γ=α_-·α_+^-1 is a two-side self-link, if the conclusion is false, then f∘α_± lie in the same access. Since f is injective near z, it follows from Lemma <ref> (2) that α_± are in the same access to z, a contradiction. With these definitions and a parallel argument, we can apply nearly the same proof as that of Proposition <ref> to derive the following result. Details are omitted. Suppose that K⊂ J_f and γ is a self-link of K. If the infinity-point of γ is wandering, then there exists a curve ℓ=β_-·β_+^-1 such that * β_± are growing curves in K, and their common terminal is preperiodic; * there exists a sequence of curves ℓ_k such that each ℓ_k is homotopic to γ rel P_0 with endpoints fixed and ℓ_k→ℓ as k→∞. § INVARIANT GRAPHS IN MAXIMAL FATOU CHAINS In this section, we prove that every periodic level-n extremal chain admits an invariant graph on the Julia set if n≥ 1. Our proof relies on the inductive construction and the topology of extremal chains established in Section <ref> and Section <ref>, respectively. §.§ Invariant graphs associated with level-0 Fatou chains Let (f,P) be a marked rational map.
We will analyze the dynamics of f on the union of periodic level-0 Fatou chains. Suppose that E is a component of the union of all periodic level-0 Fatou chains, with period p. Let K be the level-1 extremal chain containing E. The main result of this subsection is as follows, which generalizes Theorem <ref>. There exists a graph G⊂ K∩ J_f such that f^p(G)⊂ G and G is isotopic to a skeleton of ∂ E rel P. Moreover, for each point z∈ G E, there is an integer n_0≥ 1 and a component D of E with D∩ P=∅ such that f^n_0p(z)∈D. If E contains exactly one Fatou domain, this proposition is a combination of Theorem <ref> and Corollary <ref>. So we assume that E contains m≥ 2 Fatou domains. The proof of Proposition <ref> follows a similar approach as that of Theorem <ref>, with the distinction being the presence of intersection points between boundaries of different Fatou domains. A point x∈∂ E is called an intersection point if x belongs to the boundaries of at least two distinct Fatou domains in E. A circle C⊂∂ E is called an intersection circle if C lies on the boundary of a Fatou domain U⊂ E and C separates U from another Fatou domain in E. Recall that a circle C⊂∂ U is marked if C either intersects or separates P. Thus, every intersection circle is marked; See Figure <ref>. By definition, each intersection point of E is contained in an intersection circle, and conversely, each intersection circle of E contains intersection points. Note that there are at most 2(m-1) distinct intersection circles in E. Moreover, a component of E is not a disk if and only if its boundary contains an intersection circle. On the other hand, for each intersection circle C, there is at most one component D of E such that C⊂∂ D. Therefore, there are at most 2(m-1) components of E which are not disks. For each Fatou domain U⊂ E, we denote T_ U⊂∂ U the finite circle-tree spanned by ∂ U∩ P and all marked circles in ∂ U; See Lemma <ref> for background. Set T:=⋃_U⊂ ET_ U. Since the intersection points of E are contained in the intersection circles, which are all marked, it follows that T is connected. By Lemmas <ref> and <ref>, we also have f^p(T)⊂ T. Moreover, T is a skeleton of ∂ E (rel P) as each T_ U is a skeleton of ∂ U. Let X_0 be the union of P together with all intersection points of E and all cut points of T_ U for all Fatou domains U⊂ E. Then X_0 is compact and f^p(X_0)⊂ X_0. Moreover, each component of T X_0 is an open arc contained in a circle on the boundary of a Fatou domain in E. There are m components of ∖ T, each of which contains a Fatou domain in E. Let T_* denote the union of T and these m components. Since T_* contains all intersection circles of E, by the same reason as previous, there are at most 2(m-1) components of ∖ T_* which are not disks. Therefore, T has at most 2(m-1)+m complementary components which are not disks. By a boundary circle of T, we mean the boundary of a component of ∖ T_* which is a disk. A boundary circle C of T is called regular if # (C∩ X_0)=2 and D∩ P=∅, where D is the component of ∖ T with ∂ D=C, and called irregular, otherwise. There are finitely many irregular boundary circles of T. Let D be a component of T_* which is a disk. Then either D is a component of ∖U for a Fatou domain U⊂ E, or the boundary ∂ D is composed of at least two arcs, which are subarcs of distinct intersection circles. In the former case, if ∂ D is a regular circle of T_ U, then it is a regular boundary circle of T. 
Since T_ U contains finitely many irregular circles, there are finitely many irregular boundary circles of T of this kind. In the latter case, the circle ∂ D of T contains at least two intersection points, say z_1 and z_2. If ∂ D is irregular, then either D∩ P≠∅; or ∂ D{z_1, z_2} consists of two open arcs α_i⊂ C_i,i=1,2, where C_i is a circle of T_ U_i for a Fatou domain U_i⊂ E, such that α_1 or α_2 contains cut points of T_ U_1 or T_ U_2, respectively; or ∂ D∩ X_0 contains at least three intersection points. The first kind of components is clearly finite in number. Note that each C_i is an intersection circle and it contains finitely many cut points of T_ U_i. Then the second kind of components is also finite in number. To complete the proof of the lemma, it is enough to check the following claim. 0.2cm Claim. Let Ω_1, …, Ω_n, n≥ 2, be pairwise disjoint disks such that B:=⋃_i=1^nΩ_i is connected. Considering the components of B that are disks, the boundaries of all but finitely many of them contain exactly two intersection points of B, i.e., points belonging to at least two of ∂Ω_1,…,∂Ω_n. 0.2cm First suppose that n=2. If #(∂Ω_1∩∂Ω_2)=1, then B is connected and ∂ B contains only one intersection point. If #(∂Ω_1∩∂Ω_2)>1, then the boundary of any component of B contains exactly two intersection points. So the claim holds when n=2. By induction, we assume that the claim holds for n≥ 2. Let Ω_0 be a disk disjoint from Ω_1,…,Ω_n such that both ⋃_i=0^nΩ_i and B=⋃_i=1^nΩ_i are connected. Then Ω_0 is contained in a component D of B. The intersection points of B∪Ω_0 are the union of the intersection points of B together with ∂Ω_0∩∂ D. For any component D' of B other than D, the points in ∂Ω_0∩∂ D' are the intersection points of B in ∂ D'. So it is enough to check that the boundaries of all but finitely many components of D∖Ω_0 contain two intersection points of Ω_0∪ B. If ∂ D∩∂Ω_0 is a singleton, then D∖Ω_0 is connected. If #(∂ D∩∂Ω_0)≥2, except for finitely many ones, every component of D∖Ω_0 is a disk, whose boundary contains exactly two points of ∂ D∩∂Ω_0 and consists of one open arc in ∂ D and the other one in ∂Ω_0. Thus there are finitely many components of D∖Ω_0 whose boundaries contain more than two intersection points of Ω_0∪ B, since ∂ D has finitely many intersection points of B. The claim is proved. We use a similar argument as that in the proof of Theorem <ref>. For a regular boundary circle C of T, let C^± denote the two components of C X_0, and let B(C^-)=B(C^+) denote the closure of the component of ∖ T whose boundary is C. Set G_1=T⋃ C^-, where the union is taken over all regular boundary circles of T. By Lemma <ref>, G_1 is a graph as a skeleton of ∂ E rel X_0. Now we construct G_2⊂ f^-p(G_1). For each n≥1, set X_n:=f^-np(X_0). Then X_n⊂ X_n+1. Note that if z∈ X_1∩ G_1, then f^p(z)∈ X_0∩ T⊂ G_1. Thus, for a component α_1 of G_1 X_1, its image f^p(α_1) is a component of T X_0. * If f(α_1)=C^- for a regular boundary circle C of T, since C^+ and C^- are isotopic rel X_0, there is a unique component α_1^+ of f^-p(C^+) isotopic to α_1 rel X_1. Such an arc α_1 is called a deformation arc of G_1. Denote B(α_1) the component of f^-p(B(C^-)) that contains α_1. Then B(α_1) is a closed disk such that B(α_1)∩ G_1=α_1 and B(α_1)∩ X_1={α_1(0),α_1(1)}. * In the other case, we have f^p(α_1)⊂ G_1 by the construction of G_1. We define the graph G_2 by G_2:=(G_1⋃α_1)∪⋃α^+_1, where the union is taken over all deformation arcs of G_1. 
By the discussion above, we have f^p(G_2)⊂ G_1 and there is an isotopy Θ^1: × [0,1]→ rel P such that Θ^1_t:=Θ^1(·,t) satisfies * Θ^1_0=id on , * Θ^1_t(z)=z on a neighborhood of attracting cycles of f for t∈ [0,1], * if z∈ G_1 is not in any deformation arc, then Θ^1_t(z)=z for t∈ [0,1], and * if α_1 is an deformation arc of G_1, then Θ^1_1(α_1)=α_1^+ and Θ^1(α_1×[0,1])=B(α_1). As a consequence, θ_1(G_1)=G_2 with θ_1:=Θ^1_1. By inductively applying Lemma <ref>, we obtain an isotopy Θ^n: × [0,1]→ rel P and a graph G_n+1 for each n≥1, such that Θ^n_0=id and Θ^n_t∘ f^p(z) =f^p∘Θ^n+1_t(z) for all z∈,t∈ [0,1], and that G_n+1=θ_n(G_n) with θ_n:=Θ^n_1. Thus f^p(G_n+1)⊂ G_n. Besides, there are some components of G_n∖ X_n, called the deformation arcs of G_n (under Θ^n), such that (a) if z∈ G_n is not in any deformation arc of G_n, then Θ^n_t(z)=z for t∈ [0,1], (b) if α_n is a deformation arc of G_n, then the deformation of α_n under Θ^n, denoted by B(α_n), is a closed disk such that B(α_n)∩ G_n=α_n and B(α_n)∩ X_n={α_n(0),α_n(1)}. Denote ϕ_n=θ_n-1∘⋯∘θ_0 for n≥ 1 with θ_0:=id. Then G_n=ϕ_n(G_1). By Lemma <ref>, {ϕ_n} uniformly converges to a quotient map φ of . It follows that f^p(G)⊂ G with G:=φ(G_1). Fix a deformation arc α_n of G_n, n≥1, and set α_n-k:=f^kp(α_n) for 0≤ k≤ n. From the lifting construction of Θ^n, it follows that, when 0≤ k≤ n-1, α_n-k is a deformation arc of G_n-k and f^kp(B(α_n))=B(α_n-k), and that α_0=C^- for a regular boundary circle C of T and f^np: B(α_n)→ B(α_0) is a homeomorphism. Let α_m and β_n be two distinct deformation arcs of G_m and G_n respectively, with m≥ n≥1. Then either B(α_m)⊂ B(β_n), or # (B(α_m)∩ B(β_n))≤ 2. Set β_0:=f^np(β_n) and α_m-n=f^np(α_m). We claim that either B(α_m-n)⊂ B(β_0), or # (B(α_m-n)∩ B(β_0))≤ 2. Note that β_0=C^- for a regular boundary circle C of T. The two open arcs C^± are contained in the boundaries of Fatou domains U_1,U_2⊂ E, respectively. If U_1=U_2, the interior of B(β_0) is a component of ∖U_1 and B(α_m-n)⊂D for a component D of ∖U_1. Thus either B(β_0)= D or #(B(β_0)∩D)≤ 1 by Lemma <ref>. Then the claim holds. If U_1≠ U_2, there exists a component D of ∖U_1 such that U_2⊂ D and the interior of B(β_0) is a component of D∖U_2. Moreover, there is a component W of ∖ (U_1∪U_2) with B(α_m-n)⊂W. If W is a component of ∖U_1 or ∖U_2, then #(W∩ B(α_0))≤ 1 by Lemma <ref>. Otherwise, W is a component of D∖U_2. In this case, either W=B(β_0), or W∩ B(β_0) consists of at most two intersection points in X_0∩ C. Then the claim also holds. The proposition follows directly from the above claim and a pullback argument. The remaining parts of the proof of Proposition <ref> are the same as the corresponding parts in the proof of Theorem <ref> and Corollary <ref>. We omit the details. Suppose that K≠ K' are periodic level-1 extremal chains. Let G⊂ K and G'⊂ K' be invariant graphs derived from Proposition <ref>. Then G∩ G'=∅. Without loss of generality, we may assume that both K and K' are f-invariant. Let E and E' denote the union of all periodic level-0 Fatou chains contained in K and K', respectively. Then K=⋃_k E_k and K'=⋃ E_k'. Moreover, E_k∩ E'_m=∅ for any k, m≥ 0. Suppose to the contrary that G∩ G' contains a point z. We can assume that f^n(z)∉E for all n≥0 because E∩ E'=∅. Since E∩ E_k'=∅ for every k≥1, all E_k' lie in the same component of ∖ E. On the other hand, by Proposition <ref>, there is an integer n_0≥1 and a component D of ∖ E such that D∩ P=∅ and f^n_0(z)∈D. As f^n_0(z)∉E, we obtain f^n_0(z)∈ D. Then K' intersects D. 
It follows that E_k' intersects D for a large integer k, and hence E'⊂ D. However, this contradicts D∩ P=∅. Suppose that K is an f-invariant level-1 extremal chain, and E is the union of boundaries of periodic Fatou domains in K. Let G⊂ K be the invariant graph obtained in Proposition <ref>. Set S:= E∪ G. Then, S_n⊂ K for n≥ 1 and G_ N is a skeleton of S_n for some N and all n≥ N, where S_n and G_n are the components of f^-n(S) and f^-n(G) containing S and G, respectively. By the construction of G, there is a graph _0 as a skeleton of E rel P and an isotopy Ψ^0:× [0,1]→ rel X_0 such that Ψ^0_0=id, Ψ^0_1(_0)=G, and Ψ^0_s_k(_0)⊂ E_k for a sequence {s_k}_k≥1⊂ (0,1) with s_k→ 1 as k→∞. Fix any n≥ 1, by Lemma <ref> there is a unique component _n of f^-n(_0) as a skeleton of E_n. Let Ψ^n:× [0, 1]→ rel X_0 be the lift of the isotopy Ψ^0 by f^n such that Ψ^n_0=id. Then Ψ^n_s_k(Γ_n) are contained in E_k+n and converge to Ψ^n_1(_n) as k→∞, which is a component of f^-n(G). Thus Ψ^n_1(_n)⊂ K. If X_0∩ E=∅, then ∂ K=E is a Jordan curve, and this corollary clearly holds. Otherwise, we have X_0∩ E⊂Ψ^n_1(_n)∩ G_n. Therefore, G_n=Ψ^n_1(_n)⊂ K. Note that both E and G are skeletons of S. By Lemma <ref>, E_n and G_n are the unique components of f^-n(E) and f^-n(G) contained in S_n, respectively. Thus S_n= E_n∪ G_n⊂ K. Finally, by Corollary <ref> and Lemma <ref>, there is an N>0 such that _ N is a skeleton of E_n for every n≥ N. Since _n∼ G_n rel P, the graph G_ N is a skeleton of S_n for every n≥ N. §.§ Invariant graphs on extremal chains Let (f,P) be a marked rational map with J_f≠. The sketch to the construction of invariant graphs on extremal chains is as follows. Suppose that E is the intersection of J_f with a component of the union of all periodic level-0 Fatou chains. Let K be the intersection of J_f with the level-1 extremal chain containing E. By Proposition <ref>, there is an invariant graph G⊂ K isotopic to a skeleton of E rel P. To construct an invariant graph that serves as a skeleton of K, a natural approach is to add a finite number of arcs to G such that * the combined set of G and the added arcs forms a skeleton K; and * each added arc is preperiodic with respect to G, i.e., there are q≥0,p≥1 such that f^q+p()⊂ f^q()∪ G. Indeed, the first condition can be derived from Lemma <ref>, while the second one follows from Propositions <ref>, <ref> and <ref>. By employing a similar inductive argument, we can construct an invariant graph on any periodic level-n extremal chain for every n≥1. Let (f, P) be a marked rational map. Let K_1,…,K_m be pairwise distinct continua such that each K_i is the intersection of J_f and a periodic level-n extremal chain with n≥1. Suppose that 𝐊=⋃_i=1^m K_i is connected and f(𝐊)=𝐊. Then there exists a graph G as a skeleton of 𝐊 rel P such that f(G)⊂ G. This proposition immediately implies Theorem <ref>. It is worth mentioning that the proposition is not true if the level n=0, as shown in Theorem <ref>. The proof goes by induction on the level n. First assume that n=1. For each 1≤ i≤ m, let E_i denote the union of boundaries of all periodic Fatou domains within K_i. By Lemma <ref>, each K_i is the growing continuum generated by E_i. As indicated at the beginning of Section <ref>, we may assume that E_i is a skeleton of E_i,k (rel P) for every k≥1, where E_i,k denotes the component of f^-p_ik(E_i) containing E_i and p_i is the period of E_i. 0.2cm Claim. 
There are infinitely growing curves γ_1,…,γ_r in 𝐊 with preperiodic terminals such that, by replacing each E_i with E_i, N for a large N, the set (⋃_i=1^m E_i)∪(⋃_j=1^r γ_j) is a skeleton of 𝐊. Let z be a marked point in 𝐊. Then z∈ K_i for some 1≤ i≤ m. If z∉E_i, by Lemma <ref> (1), there is a growing curve α_z⊂ K_i joining E_i to z. Since E_i is a skeleton of every E_i,k, it holds that z∉⋃_k>0 E_i,k. Thus α_z is infinitely growing. Suppose x,y∈ P are separated by 𝐊. Then there is a smallest integer s≥1 such that, by re-enumerating K_i if necessary, the points x and y are separated by the union of K_1, …, K_s. In the case of s=1, if x and y are separated by E_1,k for some k≥1, then they are separated by E_1 as E_1 is a skeleton. Otherwise, by Lemma <ref> (2), there is a curve η=β_-·β_+^-1⊂ K_1 such that E_1∪η separates x and y, where β_± are growing curves in K_1. If the common terminal z of β_± is disjoint from E_1,k for all k, then the curve η serves as a two-side self-link of K_1 provided that z∉ P. If z is contained in some E_1, k_0, then one of β_±, say β_-, is infinitely growing and β_-∪ E_1,k_0 separates x and y. In this case β_- serves as a one-side self-link of K_1, and we reset η=β_-. In both cases, we can apply Proposition <ref> to the self-link η, and thus obtain a curve η_z=β_z'·β_z^-1⊂ K_1 such that the common terminal z of the growing curves β_z' and β_z is preperiodic, and that η_z∪ E_1 separates x from y. By replacing E_1 with some E_1, k, we may further assume that each of β'_z and β_z is either trivial or infinitely growing. In the case of s=2, let D be the component of ℂ̂∖(K_1∪ K_2) containing x. Since ∂ D is locally connected by Theorem <ref>, a Jordan curve α⊂∂ D separates x from y. By the minimality of s, there is a unique arc α_1 among components of α∖ K_2 such that α_1∪ K_2 separates x from y. Let α_2 be an arc in K_2 with the same endpoints as α_1. Then α_1∪α_2 forms a Jordan curve that separates x and y. If s≥ 3, with similar arguments, there are s arcs α_i⊂ K_i,i=1,…,s, such that their union is a Jordan curve separating x from y. Let Z be the set of endpoints of the s arcs α_1,…,α_s. Fix a point z∈ Z. There are exactly two distinct integers i=i(z) and i'=i'(z) among {1,…,s} such that z∈α_i∩α_i'⊂ K_i∩ K_i'. By Lemma <ref> (1), there are growing curves β̃_z and β̃'_z in K_i and K_i' respectively with the common terminal z. We can further require that β̃_z (resp. β̃_z') is a trivial curve if z∈ E_i, k_0 (resp. E_i', k_0) for some k_0. If z is preperiodic, we set β_z=β̃_z and β_z'=β̃_z'. Otherwise η̃_z=β̃_z'·β̃_z^-1 is a link between K_i and K_i'. In particular, it is a two-side link if and only if z is disjoint from E_i, k and E_i', k for all k≥0. In this case, we can apply Proposition <ref> to the link η̃_z and obtain a curve η_z=β_z'·β_z^-1 such that η̃_z and η_z are homotopic rel {x, y} with endpoints fixed and the common terminal of the growing curves β_z⊂ K_i and β_z'⊂ K_i' is preperiodic. By the minimality of s, for a large integer k_0, the union of η_z, z∈ Z, and all E_j, k_0, 1≤ j≤ s, is connected and separates x from y. Replacing each E_j with some E_j, k, we may assume * for each z∈ Z, either z∈ E_i for some i, or z avoids E_i,k for all 1≤ i≤ s and k≥0; * each β_z (resp. β_z') is either trivial or infinitely growing. Finally, the required growing curves γ_1,…,γ_r consist of all α_z and the non-trivial curves β_z and β_z' described above. Then the claim is proved. Let Q⊂𝐊 denote the set of all points in the orbits of γ_1(1),…,γ_r(1). Then f(Q)⊂ Q.
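Note that Q is finite: each terminal γ_j(1) is preperiodic, so its forward orbit under f is a finite set. Writing q_j and p_j for the preperiod and the period of γ_j(1) (notation introduced only for this rough estimate), the orbit of γ_j(1) consists of exactly q_j+p_j points, and hence #Q≤∑_j=1^r(q_j+p_j)<∞. In particular, Q is a finite f-invariant subset of 𝐊.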
According to Proposition <ref> and Corollary <ref>, each K_i contains a graph G_i such that * G_i is a skeleton of S_i:=G_i∪ E_i rel P and contains Q∩ E_i; * f(⋃_i=1^m G_i)⊂⋃_i=1^m G_i and S_i∩ S_j=∅ if i≠j. By Corollary <ref>, each K_i is also the growing continuum generated by S_i. For every k≥1, we denote by S_i,k and G_i,k the components of the k-th preimage by f^p_i of S_i and G_i respectively such that S_i⊂ S_i,k and G_i⊂ G_i,k. Let be a maximal collection of infinitely growing curves in K_1,…, K_m, such that they have initial points in ⋃_i=1^m G_i and terminals in Q, and they belong to pairwise distinct accesses. According to Lemma <ref>, contains finitely many elements. The claim above implies that the union of G_i,i=1,… m, together with all curves in is a skeleton of 𝐊 rel P. For any ∈ with terminal z:=(1), its image f() is an infinitely growing curve to f(z)∈ Q by Lemma <ref> (1). By the maximality of , we obtain a self-map f_h:→ such that f_h() is defined to be the unique element of in the same access as f(). Mark a curve _* in each cycle under f_h. Suppose that _*⊂ K_i and has period p under f_h. By Proposition <ref>, we may assume that * for any t∈(0,1), there is an integer k>1 such that _*[0,t]⊂ G_i,k, and * _* is an f^p-invariant arc in the sense that f^p(_*)⊂_*∪ G_i. Since Δ has finitely many elements, any curve ∈ is eventually iterated by f_h to a marked one _*. Let q≥ 0 be the smallest number such that f_h^q()=_*. Assume (0)∈ G_j. By Lemma <ref> (2), there exists a lift ' of _* by f^q which lies in the same access as and has the initial point in G_j,q. Let N be a large integer such that the initial point of each ' with ∈ lies in ⋃_i=1^m G_i, N. We define G:=(⋃_i=1^m G_i, N)∪(⋃_∈'). The discussion above shows that f(G)⊂ G and G is a skeleton of 𝐊 rel P. Since the curves in are infinitely growing and lie in pairwise distinct accesses, by Proposition <ref> (2), there exists ϵ>0 such that δ'[1-ϵ,1) with δ∈ are pairwise disjoint, and each of them is disjoint from G_i, N,i=1,…,m. On the other hand, the arcs δ'[0,1-ϵ],δ∈ are contained in ⋃_i=1^m G_i, N_1 for some N_1>N. Thus, the locally branched points of G are contained in those of ⋃_i=1^m G_i, N_1 together with Q, which are finite. So G is a graph. Now we proved this proposition in the case of n=1. Suppose that the proposition holds for level-n extremal chains with n≥ 1. Let K_1,…,K_m be pairwise distinct continua such that each K_i is the intersection of J_f and a periodic level-(n+1) extremal chain. For each i∈{1,…,s}, denote E_i the intersections of J_f and the union of periodic level-n extremal chains within K_i. Then K_i is the growing continuum generated by E_i. By induction, there is a graph G_i as a skeleton of E_i such that f(⋃_i=1^m G_i)⊂⋃_i=1^m G_i. Note that in this case, we have G_i⊂ E_i and set S_i:=E_i. In contrast, in the case of n=1, the graph G_i is not necessarily contained in E_i, and thus we performed a transformation from E_i to S_i=E_i∪ G_i using Corollary <ref> therein. By the same argument as that in the case of n=1, we obtain the desired invariant graph G⊂𝐊. § INVARIANT GRAPHS OF RATIONAL MAPS Let (f, P) be a marked rational map with J_f≠. As stated in the introduction, it suffices to prove Proposition <ref> in order to construct the invariant graph required by Theorem <ref>. 
Due to Corollary <ref> and Theorem <ref>, by possibly enlarging P, there exists a stable set ⊂ J_f which induces a cluster- decomposition of (f,P), such that the decomposition =𝒦⊔𝒱⊔𝒜⊔𝒮 satisfies the following properties: (P1) Each component of contains points of P. (P2) Every component of 𝒱 is complex-type and disjoint from any attracting cycle of f. (P3) Every component of 𝒮 is a simply connected domain of simple type. (P4) Every component A of 𝒜 is an annulus of annular type. Moreover, if A∩ f^-1(𝒦)≠∅, then A contains an annular-type component of f^-1(𝒦). Therefore, we only need to prove Proposition <ref> under the properties (P1)-(P4). The proof of this proposition will be divided into three parts. First, we identify a graph in each component of ℰ=⊔ such that their union is f-invariant. Next, we construct invariant arcs in 𝒜 to connect these graphs together. Finally, we join every marked point in 𝒮∩ J_f to the previous graph. At the beginning, we select several specific marked points. In each cycle of under f_#, we designate a preferred component V. Denote its period by p. For each n≥0, let V_n denote the unique complex-type components of f^-np(V) contained in V. By Theorem <ref> and property (P2), there is a marked rational map (g,Q_g) as the blow-up by π of the exact sub-system f^p:V_1→ V, i.e., * π(J_g)=⋂V_n and π∘ g=f^p∘π on J_g; * π sends the closure of each Fatou domain onto a component of ∖ V_n for some n≥0. Due to property (P1), the marked set Q_g coincides with the union of π^-1(P∩ V) and the centers of Fatou domains outside π^-1(V). By conditions of the proposition, let G_g⊃ Q_g be a g-invariant regulated graph. Then for each Fatou domain D of g, the set Y_ D :=G_g∩∂ D satisfies: * g(Y_ D )⊂ Y_ g( D ), and Y_ D ≠∅ if D ∩ Q_g≠∅; * Y_ D is a finite set, and there are only finitely many Fatou domains D such that #Y_ D ≥ 3. Since V avoids the periodic Fatou domains by property (P2), the choice of Q_g implies that Y_ V:=⋃_ D π(Y_ D ) lies in ∂ V and each component of ∂ V intersects Y_ V, where D ranges over all marked Fatou domains of (g,Q_g). Moreover, we have f^p(Y_ V)⊂ Y_ V. If V' is another component of such that f^q_#(V')=V, set Y_ V':=f^-q(Y_ V)∩∂ V'. Thus, Y_:=⋃ Y_ V is an f-invariant and finite set in ∂⊂, where the union is taken over all components of . For a finitely connected domain W, an oriented boundary component of W means a component of ∂ W equipped with an orientation pointing to W. Let Λ be the collection of oriented boundary components of all annuli in Comp(). Then any two elements of Λ are distinct even if they overlap. For any λ∈Λ, since λ⊂ and is a stable set, there is either an annular-type component A_1 of f^-1() or an annular-type component V_1 of f^-1() such that λ is an oriented boundary component of A_1 or V_1. Thus, its image f(λ) is either still an element of Λ, or an oriented boundary component of a certain V∈ Comp(). Set Λ_*={λ∈Λ: f^n(λ)∈Λ for all n≥0}. Since f(∂)⊂∂, the orbit of any λ∈Λ∖Λ_* will stay in ∂ after leaving Λ. Due to Theorem <ref>, one can assign a point z_λ in each element λ∈Λ_* such that f(x_λ)=x_f(λ). Then the finite set {x_λ:λ∈Λ_*} is f-invariant and contained in . On the other hand, there exists an integer M>0 such that f^ M(λ)⊂∂ for any λ∈Λ∖Λ_*. Since f(Y_)⊂ Y_⊂, we obtain an f-invariant and finite set Q:=(f^- M(Y_)∩) ⋃ {x_λ:λ∈Λ_*}⊂ 0.1cm Part I. Construct invariant graphs in ℰ=⊔. 
By Theorems <ref>, <ref> and Lemma <ref>, each component K of contains a graph G_ K as a skeleton of K rel P∪ Q such that the union ⋃_ K G_ K is f-invariant. Let V be a preferred f_#-periodic component of with period p. We denote by the collection of the complementary components of V_n for all n>0. By Theorem <ref>, for each B∈, π^-1(B)= D and π^-1(∂ B)=∂ D, where D is a Fatou domain of g, and π^-1(z) is a singleton if z does not belong to any element of . We set :=π(G_g) and Y_ B:=π(Y_ D ) with B=π( D ). According to the properties of Y_ D presented at the second paragraph of the proof, we have that * Y_ B⊂∂ B and f^p(Y_ B)⊂ Y_ B' if ∂ B'=f^p(∂ B); * Y_ B is a finite set and there are only finitely many B∈ with Y_ B≥ 3; * Y_ V=⋃_ B Y_ B and Y_ B≠∅, where B is taken over all components of ∖ V; * if z∈∖⋃_B∈ B, then z∈ J_f and f^p(z)∈. To obtain an f^p-invariant graph associated with V, we need to revise ∩ B to an appropriate graph G_ B for each B∈ that intersects . If B is a component of ∖ V, then ∂ B⊂ K for a component K of . We define G_ B=G_ K. Note that G_ K contains Y_ B by the choices of Q and G_ K. If B is not a component of ∖ V, then B∩ P=∅, and there is a smallest positive integer k and a component B' of ∖ V such that ∂ B is a component of f^-kp(∂ B'). Let K and K' be the components of f^-kp() that contain ∂ B and ∂ B', respectively. Then f^kp(K)=K'. By Lemma <ref>, the set G̃_ B=f^-kp(G_ B')∩ K is a component of f^-kp(G_ B') contained in B. Thus G̃_ B is a graph. Since f^kp(Y_ B)⊂ Y_ B', it follows Y_ B⊂G̃_ B. We define G_ B as follows. * If # Y_ B≥3, set G_ B=G̃_ B, and if # Y_ B=1, set G_ B=Y_ B. * If # Y_ B=2, let G_ B be an arc in G̃_ B joining the two points of Y_ B such that f^kp(G_ B)⊂ G_ B' and f^p(G_ B)⊂ G_ f^p(B). Thus, we obtain an f^p-invariant continuum G_ V:=(∖⋃_ B∈ B)⋃(⋃_ B∈ G_ B), which lies in J_f and contains P∩ V. As the diameters of B∈ exponentially converge to zero by Lemma <ref>, the continuum G_ V is a graph. If V' be a component of such that f^q_#(V')=V for a smallest q≥ 1, then we define G_ V'=f^-q(G_ V)∩ V'. Note that the accumulation set of G_ V' on ∂ V' is contained in Y_ V'⊂ Q. We define the set _:=(⋃_ K∈ Comp() G_ K) ⋃(⋃_ V∈ Comp() G_ V), which is f-invariant and contains Q. Moreover, it satisfies the following two properties. (a) For each component E of ℰ, the set _ℰ∩ E is a graph as a skeleton of E∩ J_f rel P. (b) For each component V of and any component V' of f^-1(V), any pair of distinct boundary components λ_± of V' can be joint by an arc in f^-1(_ℰ), which lies in the annulus A(λ_+,λ_-) bounded by λ_± and has the endpoints in f^-1(Y_ V). For property (a), it is enough to show the connectivity of _ℰ∩ E. Let V⊂ E be any component of . By construction, for each boundary component λ of V, the accumulation points of G_ V on λ are non-empty and lie in the graph G_ K, where K is a component of contained in E such that λ⊂ K. This implies _∩ E is connected. To prove property (b), we choose a sequence of domains V_ϵ compactly contained in V which converge to V as ϵ→ 0, such that V∖V_ϵ consists of annuli disjoint from P, and that G_ϵ=(V_ϵ∩ G_ V)∪∂ V_ϵ is connected. Then each G_ϵ is a skeleton of V_ϵ rel P and lim_ϵ→0 G_ϵ=(V∩ G_ V)∪∂ V. Set V_ϵ'=f^-1(V_ϵ)∩ V'. Then V_ϵ' is a domain, and each of its boundary component is parallel to a component of ∂ V' and vice versa. Moreover lim_ϵ→0V_ϵ'=V'. By Lemma <ref>, G'_ϵ:=f^-1(G_ϵ)∩V_ϵ' is connected. Thus it contains all components of ∂ V_ϵ. 
As a consequence, the Hausdorff limit G' of G'_ϵ is connected and contains ∂ V'. Moreover G'∩V'=f^-1(G_ V)∩V'. By the discussion above, there are pairwise disjoint open arcs α_1,…,α_m in G'∩ V' and components λ_-=λ_1,…,λ_m+1=λ_+ of ∂ V' such that each α_i joins λ_i to λ_i+1 and its endpoints belong to f^-1(Y_ V). Note that for every i∈{2,…,m-1}, λ_i is contained in a component K_i⊂ A(λ_-,λ_+) of f^-1(). Thus we can find an arc β_i⊂ K_i joining α_i-1(1) to α_i(0) such that f(β_i)⊂ G_ f(K_i). Finally, the arc (⋃_i=1^mα_i)∪(⋃_j=2^m-1β_i) satisfies property (b). Part II. Connect the graphs in ℰ. By properties (P2)–(P4), any two components of _ℰ are separated by a component of , and vice versa. Thus, to obtain a global invariant graph, we need to construct appropriate arcs serving as bridges that cross and join components of _ℰ together. 0.15cm Step 0. Assign a preperiodic point x_λ∈ Q in every λ∈Λ. 0.15cm Recall that Λ is the collection of oriented boundary components of all annuli A∈ Comp(), and that Λ_*⊂Λ consists of all elements whose orbits under f stay in Λ, see (<ref>). We have assigned one point x_λ∈λ for each λ∈Λ_* such that f(x_λ)=x_f(λ) and x_λ∈ Q. So it remains to assign a point in each element of Λ∖Λ_*. Fix any λ∈Λ∖Λ_*. It is an oriented boundary component of a unique component A of . If f(λ)⊂∂ V for a component V of , then there is an annular-type component V_1 of f^-1(V) contained in A such that λ is an oriented boundary component of V_1. The boundary ∂ V_1 has the other annular-type component λ'. By property (b) of _, there is an open arc β⊂ A(λ,λ') joining λ to λ', such that f(β)⊂_ and the endpoints of β lie in f^-1(Y_ V). Define x_λ to be the endpoint of β in λ. It follows that x_λ belongs to f^-1(Y_ V)∩⊂ Q. If f(λ)∈Λ and x_f(λ)∈ f(λ) has been chosen, we assign a point x_λ∈λ such that f(x_λ)=x_f(λ). Then x_λ belongs to Q by the definition of Q. 0.15cm Step 1. Construct the initial graph G_0. 0.15cm For each component A of , we denote its two oriented boundary components by λ_±, A. Let z_±, A⊂λ_±, A be the points assigned to λ_±, A, respectively. If A intersects f^-1(), we call it intersection-type; otherwise f(A) is still a component of . In the latter case, there is a smallest integer n_ A≥ 1 such that f^n_ A(A) is an intersection-type component of , as f has no Herman rings. We claim that there is an open arc γ_ A joining z_±, A in each component A of such that f(γ_ A)=γ_ f(A) when A is not intersection-type. Firstly, we choose an open arc α_ A with endpoints z_±, A in each component A of . Fix an intersection-type component A of . For any component A' of with f^n( A')(A')=A, the curve α=f^n( A')(α_ A') lies in A and joins z_±, A. Consequently, α is homotopic to α_ A with endpoints fixed, up to an N(A')-times twist around A. Let N be the smallest common multiple of all such numbers N(A') and set γ_ A=T^ N(α_ A), where T(·) denotes the twist map around A. Then A' contains a unique component _ A' of f^- n( A')(_ A) with endpoints z_±,A'. The claim is proved. Since the endpoints of each _ A belong to Q⊂_ℰ, the arc _ A joins the two components of _ℰ adjacent to A together. Thus, we obtain the initial graph G_0=_ℰ∪⋃_ A, where A ranges over all components of . The vertices of G_0 are composed of the points in Q∪(P∩_) and the locally branched points of _. Then each _ A is an edge of G_0. 0.15cm Step 2. Construct a graph G_1⊂ f^-1(G_0) isotopic to G_0. 
0.15cm We first construct a curve γ_ A^1 for each component A of such that _ A^1(0,1)⊂ A, f(γ_ A^1)⊂ G_0 and γ_ A^1 is homotopic to _ A (rel P) with endpoints fixed. If A is not intersection-type, we define _ A^1=_ A by the claim in Step 1. If A is intersection-type, let A_1, …, A_s (s≥ 2) be the annular-type components of A∖ f^-1() arranged from left to right by property (P4). Let λ_±, i be the annular-type boundary components of A_i. Then λ_+, i∪λ_-, i+1 is contained in an annular-type component K_i of f^-1() for each 1≤ i≤ s-1. By Lemma <ref>, _i:=f^-1(G_ f(K_i))∩ K_i is a graph as a skeleton of K_i. If f(A_1) is a component of , let α_1 be the lift of γ_ f(A_1) based at z_-, A. Otherwise, f(A_1) is a component of . By property (b) of _ given in Part I and the choice of z_-, A given in Step 0, there exists an open arc α_1⊂ A_1 that joins z_-, A to λ_+, 1 and satisfies f(α_1)⊂_. Similarly, we can find an open arc α_i⊂ A_i∩ f^-1(G_0) for every i∈{2,…,s} such that α_i joins λ_±,i and one endpoint of α_s is z_+, A. Therefore, the points z_±, A can be connected by an open arc β_ A in ⋃_i=1^sα_i∪⋃_i=1^s-1_i, and it holds that β_ A⊂ A∩ f^-1(G_0). Note that β_ A is homotopic to γ_ A with endpoints fixed up to an m_ A-times twist around A. Since _1 is a skeleton of K_1, the graph _1 separates ∂ A. Thus, one can find a curve β⊂_1 such that _ A^1=(β_ A∖ K_1)∪β is a curve homotopic to γ_ A rel P with endpoints fixed. We define a graph G_1:=_ℰ∪⋃γ_ A^1⊂ f^-1(G_0), where A ranges over all components of . Although a certain _ A^1 may have self-intersection, we also consider it as an edge of G_1. Thus each edge of G_0 is homotopic rel P to an edge of G_1 with endpoints fixed, and the homotopy is identity when the edge is in _. For n≥0, let _n be the union of all annular-type components of f^-n(𝒜). As a consequence, the components of _n are annuli and _n+1⊂_n. By inductively homotopic lift of the edges of G_0 and G_1, we obtain a graph G_n=_ℰ⋃ (∪γ_ A^n) for every n≥0, where A runs over all components of , such that f(G_n+1)⊂ G_n, and that the curves _ A^n+1,_ A^n are homotopic rel P with endpoints fixed, which differ only within _n. Since the degree of f^n on each component of _n tends to infinity as n→∞, there is an integer N≥ 0 such that the n-th lift of each γ^1_ A is an arc for every n≥ N. Therefore, there is a homeomorphism h_0:→ that is isotopic to id rel ∖_ N such that h_0(G_ N)=G_ N+1. For the sake of simplicity, we assume that N=0. 0.15cm Step 3. Construct an invariant graph G'. 0.15cm By Lemma <ref>, we get a sequence of homeomorphisms {h_n}_n≥0 such that h_n is isotopic to id rel ∖ f^-n() and h_n∘ f=f∘ h_n+1 on . Recursively define the graph G_n+1=h_n(G_n). It then follows that h_n(x)=x if x∈ G_n∖_n,h_n(x)∈_n if x∈ G_n∩_n. Let ϕ_n:=h_n∘⋯∘ h_0 for n≥ 0. By Lemma <ref>, ϕ_n uniformly converges to a quotient map ϕ:→. So G_n+1=ϕ_n(G_0) converges to a continuum G':=ϕ(G_0) in the sense of Hausdorff metric. Consequently f(G')⊂ G'⊂ J_f. In order to prove G' is a graph, it suffices to show that ϕ^-1(z)∩ G_0 is connected for any z∈ G'. In other words, we will check that, for any two distinct points x, y∈ G_0 with ϕ(x)=ϕ(y), there exists an arc l_x,y⊂ G_0 joining x and y such that ϕ(l_x,y) is a singleton. Fix a pair of distinct points x and y. Denote x_n=ϕ_n-1(x) and y_n=ϕ_n-1(y), which lie in G_n. Since ϕ(x)=ϕ(y), at least one of x and y, say x, satisfies that x_n∈_n for all n by (<ref>). 
If x_n and y_n lie in the closure of the same component of 𝒜_n for each n, then ϕ([x,y]) is a singleton, where [x,y] denotes the arc in G_0∩𝒜 joining x and y. Indeed, let A_n be the component of 𝒜_n such that x_n,y_n∈A_n. Then (x_n,y_n)=ϕ_n-1(x,y) is the open arc in G_n∩ A_n joining x_n and y_n. Since f^n[x_n,y_n] is an arc contained in G_0∩𝒜, by Lemma <ref>, the diameter of [x_n,y_n] converges to 0 as n→∞. Thus ϕ[x,y] is a singleton. On the other hand, since ϕ(x)=ϕ(y), it follows from (<ref>) that x_n and y_n cannot be separated by components of 𝒜_n for each n. Hence, we are reduced to the case that there exists some m≥0 such that x_m and y_m are neither contained in the closure of a component of 𝒜_m, nor separated by components of 𝒜_m. Then there are two possibilities: (i) x_m ∈ A and y_m∈ E∖λ, where A is a component of 𝒜_m, E is a component of ℂ̂∖𝒜_m and λ=E∩∂ A is a boundary component of A. (ii) x_m∈ A_1 and y_m∈ A_2, where A_1 and A_2 are distinct components of 𝒜_m, such that each A_i has a boundary component λ_i contained in a component E of ℂ̂∖𝒜_m. Case (i). Let z_λ∈λ be the assigned point to λ given in Step 0. Then z_λ≠y_m and y_m=ϕ(y). Since ϕ(x)=ϕ(y), the point x_m+k must belong to the unique component of 𝒜_m+k whose boundary contains λ, for each k≥0. However, by the previous discussion, we have ϕ(x)=z_λ, which contradicts the assumption that ϕ(x)=ϕ(y). Case (ii). Let z_1∈λ_1 and z_2∈λ_2 be the assigned points to λ_1 and λ_2, respectively. Similarly as above, the points x_m+k and z_1 (resp. y_m+k and z_2) belong to the closure of the same component of 𝒜_m+k for each k≥ 0. Therefore, [x_m+k,z_1] and [z_2,y_m+k] converge to z_1 and z_2, respectively. Since ϕ(x)=ϕ(y), it follows that z_1=z_2. Thus ϕ(l_x,y) is a singleton with l_x,y=ϕ_m-1^-1([x_m,z_1]∪[z_1,y_m]). Therefore, G' is an f-invariant graph, and by property (P3), its complementary components are all simply connected domains of simple type. Part III. Completion of the proof of Proposition <ref>. To complete the proof, it remains to join the marked points in 𝒮∩ J_f to the graph G'. Since each complementary component of G' contains at most one marked point, it follows that f^-n(G') is connected for all n>0. By replacing G' with f^-n(G') if necessary, we may assume that each point of P is either contained in G' or never iterated into G'. Let K be the growing continuum generated by G'. It is clear that K=J_f. Let z∈ J_f be a point in P∖ G' with period p. According to Lemma <ref> (1), there is an infinitely growing curve γ in K which joins G' to z. Since each complementary component of G' contains at most one point of P, the growing curve f^p(γ) belongs to the same access to z as γ. Therefore, by Proposition <ref>, we can assume that γ is a growing arc in K such that f^p(γ)⊂γ∪ G'. As a consequence, the union of G' and ⋃_i=0^p-1f^i(γ) is an f-invariant graph and contains the orbit of z. We repeat the process for each cycle in (P∖ G')∩ J_f and then take an m-th iterated preimage for a large integer m. The resulting graph G is an f-invariant skeleton of J_f rel P. The proof of Proposition <ref> is complete. § APPENDIX §.§ Orbifold metric and homotopic length Let f be a PCF rational map. Denote by P_f' the post-critical points of f in the Fatou set. Then there exists a complete metric ω called the orbifold metric on ℂ̂∖ P_f'; See <cit.> or <cit.>, as well as <cit.>.
This metric is induced by a conformal metric ω(z)|dz| with ω(z) smooth in the complement of P_f, and has a singularity of the type ω=A(z_0)|dz|/|z-z_0|^1-1/n(z_0), n(z_0)>1, near each post-critical point z_0∈ J_f. Moreover, we have ||f'(z)||_ω>1 when z,f(z)∈∖ P_f'; See <cit.> for details. Fix a compact set 𝒪⊃ J_f such that f^-1(𝒪)⊂𝒪 and ∖𝒪 is a small neighborhood of P_f'. Let σ(z)|dz| be the standard spherical metric. There are constants C>0 and ρ>1 such that ||f'(z)||_ω≥ρ for z∈ f^-1(𝒪), and σ(z)≤ C·ω(z) for z∈∖ P_f. Let P⊂ be a finite set in . Two curves _0,_1:[0,1]→ are called homotopic rel P with endpoints fixed if there is a continuous map H:[0,1]× [0,1]→ such that * H(·,0)=_0 and H(·,1)=_1; * each curve _s:=H(·,s),s∈[0,1] has the same endpoints as _0 and _s(0, 1)⊂∖ P. Let :[0, 1]→ be a curve with (0, 1)∩ P_f=∅. The homotopic length of , denoted by L_ω[], is defined as the infimum of the lengths of curves under the orbifold metric, among all smooth curves that are homotopic to rel P_f with endpoints fixed. By (<ref>), we have dist((0),(1)):= dist_σ((0),(1))≤ C· L_ω[]. For a path-connected set E⊂, its homotopic diameter H-diam_ω(E) is defined as the supremum of homotopic lengths of all curves in E. It follows from (<ref>) that diam(E):= diam_σ(E)≤ C·H-diam_ω(E). Let _n,⊂𝒪 be curves such that γ(0, 1)∩ P_f=∅ and f^n:_n→ is a homeomorphism. Then L_ω[_n]≤ L_ω[]/ρ^n. Moreover, suppose that E and E_n are two path-connected sets in 𝒪 such that f^n:E_n→ E is a homeomorphism and H- diam_ω(E)<∞. Then diam(E_n)≤ C· H- diam_ω(E_n)≤ C· H- diam_ω(E)/ρ^n. The first conclusion follows from inequality (<ref>). Choose any curve α_n⊂ E_n. Then f^n:α_n→α:=f^n(α_n)(⊂ E) is a homeomorphism. Thus L_ω[α_n]≤ L_ω[α]/ρ^n≤H-diam_ω(E)/ρ^n. As α_n is arbitrary chosen, it holds that H-diam_ω(E_n)≤H-diam_ω(E)/ρ^n. §.§ Lift of isotopies Applying the usual homotopy lifting theorem for covering maps (see <cit.>), it is not difficult to prove the following result about lifts of isotopies by rational maps. The details of the proof can be found in <cit.>. Suppose that f,g:→ are PCF rational maps, and h_0,h_0:→ are homeomorphisms such that h_0=h_0 on P_f and h_0∘ f=g∘h_0 on . Let H:× [0,1]→ be an isotopy rel P_f with H_0=h_0. Then H can be uniquely lifted to an isotopy H:× [0,1]→ rel f^-1(P_f) such that H_0=h_0 and H_t∘ f=g∘H_t on for all t∈[0,1]. Let (f,P) be a marked rational map. Let 𝒪 be the compact set given in Appendix <ref>. Then :=∖𝒪 is a small neighborhood of P_f'. Let θ_0:→ be a homeomorphism isotopic to id rel P∪. By Lemma <ref>, there is a homeomorphism θ_1:→ isotopic to id rel P such that θ_0∘ f=f∘θ_1. Inductively, we have a sequence of homeomorphisms {θ_n,n≥1} of isotopic to the identity rel P such that θ_n∘ f=f∘θ_n+1. Denote ϕ_n=θ_n-1∘⋯∘θ_0. A continuous onto map π: → is a quotient map if π^-1(z) is either a singleton or a full continuum for any point z∈. The sequence {ϕ_n} uniformly converges to a quotient map of as n→∞. Let Θ^0: × [0,1]→ rel P be an isotopy such that Θ^0_0=id, Θ^0_1=θ_0 and Θ^0_t(z)=z for all z∈ P∪ and t∈ [0,1]. By inductively applying Lemma <ref>, for each n≥1, we obtain an isotopy Θ^n: × [0,1]→ such that * Θ^n_0=id and Θ^n_1=θ_n; * Θ^n_t(z)=z for all z∈ f^-n(P∪) and t∈ [0,1], and * Θ^n_t∘ f =f∘Θ^n+1_t for all z∈ and t∈ [0,1]. For each point z∈, define a curve _z:[0,1]→ by _z(t):=Θ^0_t(z). From the compactness, there is a constant L_0 such that L_ω[_z]≤ L_0 for all z∈∖. 
To prove the lemma, it is enough to show that there are constants M>0,ρ>1 such that for all z∈ and n≥ 1 dist(ϕ_n(z),ϕ_n+1(z))≤ Mρ^-n. Fix any z∈ and n≥1. Set w=f^n(ϕ_n(z)). Let β be the lift of γ_w based at ϕ_n(z). The other endpoint of β is ϕ_n+1(z). If w∈ P∪, then _w is a singleton and hence ϕ_n(z)=ϕ_n+1(z). Otherwise, it follows from Lemma <ref> and equality (<ref>) that dist(ϕ_n(z),ϕ_n+1(z))≤ CL_ω[β]≤ CL_0ρ^-n. Thus {ϕ_n} uniformly converges to a continuous map ϕ_∞ of as n→∞. Since ϕ_∞ is a uniform limit of homeomorphisms, it is a quotient map; See e.g. <cit.>. §.§ Local connectivity It is known that a continuum E⊂ is locally connected if and only if the boundary of each component of E is locally connected and the spherical diameters of components of E converge to zero; See e.g. <cit.>. We will show that Let f be a PCF rational map, and let E be a continuum with ∂ E⊂ J_f. Then E is locally connected if and only if the boundary of each component of E is locally connected and the homotopic diameters of components of ∖ E disjoint from P_f converge to zero. First suppose that E is locally connected. Since the homotopic lengths of curves in ∖ P_f vary continuously, each component of E disjoint from P_f has a finite homotopic diameter. To the contrary, assume that {D_n} is a sequence of components of E disjoint from P_f, such that H-diam_ω(D_n)≥ϵ_0>0. Since diam(D_n)→ 0 as n→∞, by taking a subsequence, we may assume that {D_n} converges to a point a∈ E. For any ϵ>0, let Δ(ϵ) be the round disk with center a and the orbifold radius ϵ. Then Δ(ϵ) contains at most one point of P_f when ϵ is small enough. On the other hand, as n is large enough, D_n⊂Δ(ϵ_0/3). This implies that H-diam_ω(D_n)≤ 2ϵ_0/3, a contradiction. The converse part of the lemma follows directly from (<ref>). The following result is well-known; See e.g. <cit.>. Let X be a connected and compact metric space. If X is locally connected, then it is arcwise connected and locally arcwise connected. Let E⊂ be a locally connected continuum. Then there exists a family of curves in E which are equicontinuous such that any two points of E are joint by a curve in this family. For any component U of ∖ E, we fix a Riemann mapping ϕ_ U:U→. Since ∂ U is locally connected, ϕ_ U^-1 has a continuous extension from to U. For any crosscut α of U, let D(α) denote the component of U∖α with the smaller diameter. Here a crosscut of U means an arc with its interior in U and its endpoints on ∂ U. By the local connectivity of E, for any ϵ>0, there exists ρ_ϵ>0 such that for each component U of E * if the distance between a,b∈∂ is less than ρ_ϵ, then |ϕ_ U^-1(a)-ϕ_ U^-1(b)|<ϵ; * if the diameter of a crosscut α of U is less than ρ_ϵ, then (D(α))<ϵ. Let be the collection of all line segments with endpoints in E. We will revise each ∈ to an arc ⊂ E such that {:∈} is equicontinuous. Fix a ∈. Denote X_:={t∈[0,1]:(t)∈ E}. Then for any component I of [0,1]∖ X_, the open segment α=(I) is a crosscut for some component U of ∖ E. Let α̃=∂ϕ_ U(D(α))∩∂. Then there is a linear map h_I:α→α̃. Now we define a map :[0,1]→ E by (t):={[ (t), if t∈ X_;; ϕ_ U^-1∘ h_I∘γ(t), if t∈ I and (I)⊂ U, ]. where I is the component of [0,1]∖ X_ containing t. We claim that is a curve. To see this, let {I_n} be a sequence of components of [0,1]∖ X_ converging to a point t_*, and let U_n be the component of ∖ E such that α_n:=(I_n) is a crosscut of U_n. Then (α_n)→ 0 as n→∞ by the continuity of . It follows from point (2) above that diam (D(α_n))→ 0 as n→∞. 
Since γ̃(I_n)=∂ D(α_n)∩∂ U, it follows that (I_n)→(t_*) as n→∞. So is continuous and the claim is proved. We will prove that the family of curves {,∈} is equicontinuous. Given any ϵ>0, since the family is equicontinuous, there is a number >0 such that |(t_1)-(t_2)|<{ρ^2_ϵ/(2π), ϵ} whenever |t_1-t_2|< for every ∈. Fix any ∈. If t_1,t_2∈ X_, then |(t_1)-(t_2)|=|(t_1)-(t_2)|<ϵ whenever |t_1-t_2|<. We now assume that t_1, t_2∈I for a component I of [0,1]∖ X_. Let α=γ(I). If (α)<ρ_ϵ, point (2) above implies |(t_1)-(t_2)|≤(D(α))<ϵ. Otherwise, we have |h_I'|<2π/ρ_ϵ. In this case, if |t_1-t_2|<, it holds that |h_I∘(t_1)-h_I∘(t_2)|= |(t_1)-(t_2)|·|h_I'|< ρ_ϵ. It then follows from point (1) above that |(t_1)-(t_2)|<ϵ. Finally, assume that t_1 and t_2 lie in the closures of distinct components I_1 and I_2 of [0,1]∖ X_, respectively. If |t_1-t_2|<, the two endpoints t'_1 and t_2' of I_1 and I_2 between t_1 and t_2 satisfy that |t_1-t'_1|< and |t_2-t'_2|<. Then according to the previous two cases, |(t_1)-(t_2)|≤ |(t_1)-(t'_1)|+|(t'_1)-(t'_2)|+|(t'_2)-(t_2)|<3ϵ. Therefore, the family {,∈} is equicontinuous. FF BD1 L. Bartholdi, D. Dudko, Arithmetic aspects of branched coverings, Ann. Fac. Sci. Toulouse Math., 26 (2017), no. 5, 1219–1296. BD2 L. Bartholdi, D. Dudko, Algorithmic aspects of branched coverings IV/V. Expanding maps, Trans. Amer. Math. Soc. 370 (2018), no. 11, 7679–7714. BCT X. Buff, G. Cui and Lei Tan, Teichmüller spaces and holomorphic dynamics, in Handbook of Teichmüller Theory. Vol. IV, IRMA Lectures in Mathematics and Theoretical Physics 19 (2014), European Mathematical Society,717–756. BM M. Bonk and D. Meyer, Expanding Thurston maps, Mathematical Surveys and Monographs, 225. American Mathematical Society, Providence, RI, 2017. CFP J. Cannon, W. Floyd, and W. Parry, Constructing subdivision rules from rational maps, Conform. Geom. Dyn. 11 (2007), 128–136. CGNPP K. Cordwell, S. Gilbertson, N. Nuechterlein, K. Pilgrim and S. Pinella, On the classification of critically fixed rational maps, Conform. Geom. Dyn. 19 (2015), 51–94 (electronic). CGZ G. Cui, Y. Gao and J. Zeng, Invariant graphs of rational maps, Adv. Math. 404 (2022), Paper No. 108454, 50 pp. CPT G. Cui, W. Peng and L. Tan, On a theorem of Rees-Shishikura, Annales de la Faculté des Sciences de Toulouse, Vol. XXI, 5 (2012), 981–993. CPT2 G. Cui, W. Peng and L. Tan, Renormalizations and wandering Jordan curves of rational maps, Comm. Math. Phys. 344 (2016), no.1, 67–115. CT1 G. Cui and L. Tan, A characterization of hyperbolic rational maps, Invent. Math. 183 (2011), no.3, 451–516. CYY G. Cui, F. Yang and L. Yang, Renormalizations of rational maps and stable multicurves, arXiv:2309.03464. DH1A. Douady and J. H. Hubbard, A proof of Thurston's topological characterization of rational functions, Acta Math. 171 (1993), 263-297. DH2 A. Douady, J. Hubbard, Exploring the Mandelbrot set. The Orsay Notes., <http://www.math.cornell.edu/ hubbard/OrsayEnglish.pdf>. DH3 A. Douady, J. Hubbard, On the dynamics of polynomial-like mappings. Ann. Sci. Éc. Norm. Supér. 18, 287–343 (1985) DHS D. Dudko, M. Hlushchanka and D. Schleicher, A canonical decomposition of postcritically finite rational maps and their maximal expanding quotients, arXiv:2209.02800v1. FPP1 W. Floyd, W. Parry, K. Pilgrim, Expansion properties for finite subdivision rules I, Sci. China Math. 61 (2018), no.12, 2237–2266. FPP2 W. Floyd, W. Parry, K. Pilgrim, Expansion properties for finite subdivision rules II, Conform. Geom. Dyn. 24 (2020), 29–50. GHMZ Y. Gao, P. 
Haïssinsky, D.Meyer and J. Zeng, Invariant Jordan curves of Sierpiński carpet rational maps, Ergodic Theory Dynam. Systems 38 (2018), no.2, 583–600. GT Y. Gao, G. Tiozzo, The core entropy for polynomials of higher degree, J. Eur. Math. Soc., 24 (2022), no.7, 2555–2603. Ha A. Hatcher, Algebraic Topology, Cambridge Univ. Press, Cambridge, 2002. H M. Hlushchanka, Tischler graphs of critically fixed rational maps and their applications, arXiv:1904.04759. HM M. Hlushchanka, D. Meyer, Exponential growth of some iterated monodromy groups, Proc. Lond. Math. Soc. (3) 116 (2018), no. 6, 1489–1518. Li1 Z. Li, Periodic points and the measure of maximal entropy of an expanding Thurston map, Trans. Amer. Math. Soc. 368 (2016), no. 12, 8955–8999. Li2 Z. Li, Equilibrium states for expanding Thurston maps, Comm. Math. Phys. 357 (2018), no. 2, 811–872. LZ1 Z. Li, T. Zheng, Prime orbit theorems for expanding Thurston maps: Dirichlet series and orbifolds, Adv. Math. 443 (2024), Paper No. 109600, 89 pp. LZ2 Z. Li, T. Zheng, Prime orbit theorems for expanding Thurston maps: Lattès maps and split Ruelle operators, Adv. Math. 449 (2024), Paper No. 109723, 96 pp. LZ3 Z. Li, T. Zheng Prime orbit theorems for expanding Thurston maps: Genericity of strong non-integrability condition, Adv. Math. 450 (2024), Paper No. 109765, 32 pp. LMS1R. Lodge, Y. Mikulich and D. Schleicher, Combinatorial properties of Newton maps, Indiana Univ. Math. J.,70 (2021), 1833–1867. LMS R. Lodge, Y. Mikulich and D. Schleicher, A classification of postcritically finite Newton maps, In the tradition of Thurston II Geometry and groups, Edited by Ken'ichi Ohshika and Athanase Papadopoulos, Springer, 2022, 421–448. MSS R. Mane, P. Sad, and D. Sullivan, On the dynamics of rational maps. Ann. Sci. Éc. Norm. Supér., 16 (1983), 193–217. Mc1 C. McMullen, The classification of conformal dynamical systems, Current developments in mathematics, 1995 (Cambridge, MA), pages 323–360, Internat. Press, Cambridge, MA, 1994. Mc2 C. McMullen, Automorphisms of Rational Maps. In Holomorphic Functions and Moduli I, Vol.10, Mathematical Sciences Research Institute Publications, 31–60. New York: Springer. Mc3 C. McMullen, Complex Dynamics and Renormalization, Ann. Math. Studies 135, Princeton University Press, Princeton, NJ. M, D. Meyer, Invariant Peano curves of expanding Thurston maps, Acta Math. 210 (2013), no. 1, 95–171. Mi1 J. Milnor, Dynamics in One Complex Variable. Princeton University Press 2006. P I. Park, Julia sets with Ahlfors-regular conformal dimension one, arXiv:2209.13384. Pil K. Pilgrim, Combinations of complex dynamical systems, Lecture Notes in Mathematics, vol. 1827, Springer-Verlag, Berlin, 2003. Poi2 A. Poirier, Hubbard trees, Fund. Math. 208(2010), no. 3, 193–248. Th D. Thurston, A positive characterization of rational maps, Ann. of Math., 192 (2020), no. 2, 1–46. Th+ W. Thurston, H. Baik, Y. Gao, J. Hubbard, K. Lindsey, L. Tan, D. Thurston, Degree-d invariant laminations, In What's Next? The Mathematical Legacy of William P. Thurston, edited by D. Thurston, Annals of Mathematics Studies 380, Princeton, 2019. Ti1 G. Tiozzo, Topological entropy of quadratic polynomials and dimension of sections of the Mandelbrot set, Adv. Math. 273 (2015), 651–715. Ti2 G. Tiozzo, Continuity of core entropy of quadratic polynomials, Invent. Math. 203 (2016), no. 3, 891–921. WYZ X. Wang, Y. Yin, J. Zeng,Dynamics on Newton maps, Ergodic Theory Dynam. Systems, 43 (2023), no. 3, 1035–1080. Why2 G. 
Whyburn, Analytic topology, American Mathematical Society Colloquium Publications, Vol. XXVIII, AMS, Providence, R.I., 1963.
http://arxiv.org/abs/2408.12142v1
20240822055947
MDD-5k: A New Diagnostic Conversation Dataset for Mental Disorders Synthesized via Neuro-Symbolic LLM Agents
[ "Congchi Yin", "Feng Li", "Shu Zhang", "Zike Wang", "Jun Shao", "Piji Li", "Jianhua Chen", "Xun Jiang" ]
cs.CL
[ "cs.CL", "cs.AI" ]
[2]Work done during an internship. [3]Corresponding authors. § ABSTRACT The clinical diagnosis of most mental disorders primarily relies on the conversations between psychiatrist and patient. The creation of such diagnostic conversation datasets is promising to boost the AI mental healthcare community. However, directly collecting the conversations in real diagnosis scenarios is nearly impossible due to stringent privacy and ethical considerations. To address this issue, we seek to synthesize diagnostic conversations by exploiting anonymous patient cases that are easier to access. Specifically, we design a neuro-symbolic multi-agent framework for synthesizing the diagnostic conversation of mental disorders with large language models. It takes a patient case as input and is capable of generating multiple diverse conversations with one single patient case. The framework basically involves the interaction between a doctor agent and a patient agent, and achieves text generation under symbolic control via a dynamic diagnosis tree from a tool agent. By applying the proposed framework, we develop the largest Chinese mental disorders diagnosis dataset MDD-5k, which is built upon 1000 cleaned real patient cases by cooperating with a pioneering psychiatric hospital, and contains 5000 high-quality long conversations with diagnosis results as labels. To the best of our knowledge, it is also the first labelled Chinese mental disorders diagnosis dataset. Human evaluation demonstrates that the proposed MDD-5k dataset successfully simulates a human-like diagnostic process of mental disorders. The dataset and code will become publicly accessible at https://github.com/lemonsis/MDD-5k. § INTRODUCTION Mental health issues have garnered increasing attention in recent years. According to the statistics of the World Health Organization (WHO), one in every eight people in the world lived with a mental disorder in 2019, and the number of people living with anxiety and depressive disorders kept rising significantly because of the COVID-19 pandemic <cit.>. With the recent progress of large language models (LLMs) <cit.>, which exhibit capabilities of relevant context incorporation and human-like text generation, many researchers have turned to building conversational AI systems for mental healthcare. Current implementations can be divided into two categories: finetuning a small model (e.g. Llama2-7B <cit.>) with physician-patient conversations <cit.>, or building a prompt-based physician-patient role-playing framework <cit.> with state-of-the-art language models (e.g. ChatGPT <cit.>). Regardless of the method employed, domain-specific mental health datasets play a fundamental and indispensable role. We focus on diagnostic conversation datasets for mental disorders in this work. The clinical diagnosis of mental disorders differs from that of other diseases in that it primarily relies on the mental status examination of patients, which is reflected through conversations between psychiatrists and patients rather than physiological indices <cit.>. Therefore, the collection of mental disorder diagnostic conversations is promising to facilitate a variety of downstream tasks in AI mental health research, such as auxiliary diagnosis chatbots, mental disorders classification, etc. 
However, while many previous studies <cit.> focused on emotional support or psychological counseling data, few work shed light on diagnostic conversations of mental disorders. This can be attributed to two main factors. First, diagnostic conversations in real scenarios are extremely hard to acquire due to the privacy and ethical consideration. Second, synthesizing diagnostic conversations from scratch is also challenging. Unlike psychological counseling or empathetic dialogue, diagnosis follows standardized process and requires professional medical knowledge. Consequently, directly employing LLMs for data synthesis in this context often yields poor outcomes <cit.>. D^4 <cit.> made the first attempt by simulating diagnostic conversations with employed workers. However, it only covers depressive disorder and entirely depends on human annotation. The generated content is also short and far from similar to diagnostic conversations in real scenarios. We propose a neuro-symbolic multi-agent framework that takes patient cases as input to synthesize diagnostic conversations of mental disorders. The framework involves three types of large language model agents: a doctor agent, a patient agent, and a symbolic tool agent responsible for managing diagnostic topic shift. This framework features two major innovations: (1) One-to-many patientcase-to-dialogue generation that maximizes the utilization of precious real patient cases. As shown in Figure <ref>, unlike previous studies <cit.> that generate one conversation with one patient case. Our proposed framework is capable of generating multiple diverse diagnostic conversations with one single patient case. Specifically, three methods ensure the diversity and correctness of diagnostic process. First, doctor agents with different diagnosis habits are designed and randomly selected for each conversation. Second, we use LLM with knowledge graph to generate multiple fictitious patient experiences based on one patient case. The patient experiences serve as background information for patient agents during generation. Since the diagnosis of mental disorders mainly relies on symptoms rather than concrete events, integrating the fictitious patient experiences enhances the diversity of synthesized conversation while maintaining the accuracy of the diagnostic process. Third, the sequence of diagnostic topics is randomly determined for each conversation. (2) Another significant innovation lies in text generation under symbolic control via a dynamic diagnosis tree. This tree consists of a fixed symptom inquiry tree and a dynamic experience inquiry tree. Clinical diagnosis of mental disorders strictly follows standards from ICD-11 <cit.> or DSM-5 <cit.>. To simulate this process, we design the fixed symptom inquiry tree based on Structured Clinical Interview for DSM-5 (SCID-5) <cit.>, covering all the diagnostic topics for important symptoms inquiry. The experience inquiry tree is constructed by extracting possible topics from patient's response of past experiences. It's designed to establish deeper engagements with the patient. By applying the proposed framework, we release the largest Chinese Mental Disorder Diagnosis dataset MDD-5k. It's also the first labelled mental disorders diagnostic conversation dataset with diagnosis results from professional psychiatrists. MDD-5k contains 5000 high-quality diagnostic conversations, with an averaged 26.8 turns and 6906.8 Chinese words per dialogue. 
It is built upon 1000 real patient cases from a pioneering psychiatric hospital, covering more than 25 different diseases. All the patient cases have been cleaned and filtered in accordance with global standards to ensure the complete protection of patient private information. The MDD-5k dataset will become publicly accessible when the ethical review finishes. The contributions of this work can be summarized as: * We specially design a neuro-symbolic multi-agent framework for synthesizing diagnostic conversation of mental disorders, which features controllable and diverse one-to-many patientcase-to-dialogue generation. * The largest Chinese mental disorders diagnosis dataset MDD-5k is proposed, which contains 5000 high-quality long conversations with convincing diagnosis results as labels. To the best of our knowledge, it's also the first mental disorders diagnosis dataset with labels. * Comprehensive human evaluation shows the proposed MDD-5k dataset outperforms several compared datasets in professionalism, communication skills, fluency, safety, and mirrors human-like diagnostic process. § RELATED WORK §.§ Mental Health Dataset Corpora of physician-patient conversations focused on mental health are crucial for AI mental healthcare research, especially in the large language model era. We divide current mental health datasets into three categories based on the degree of required professional knowledge. Emotional support datasets feature empathetic dialogue and comfort. ESconv dataset <cit.> consists of 1300 conversations covering 10 topics. SoulChatCorpus <cit.> contains over 2 million single-turn and multi-turn conversations generated by ChatGPT. Psychological counseling datasets typically contain more domain knowledge than common emotional support dataset. The Emotional First Aid Raw dataset <cit.> is built by crawling from psychological counseling websites and communities. PsyQA <cit.> is a single-turn Chinese dataset annotated by human. SmileChat dataset <cit.> expands PsyQA to multi-turn through ChatGPT. CPsyCounD <cit.> contains 3134 counseling conversations generated by the same number of psychological counseling reports. Diagnosis datasets aim to simulate diagnostic conversation of professional psychiatrists. D^4 <cit.> is a Chinese depression diagnosis dataset built by human annotators and supervised by psychiatrists. There are also some medical dialogue datasets, like MedDialog <cit.>, MTS-Dialog <cit.>, ChatDoctor <cit.>, which encompass a broader range of medical fields. §.§ Mental Disorders Conversation Simulation We mainly focus on the tuning-free prompting frameworks for mental disorders conversation simulation. <cit.> conducted a comprehensive analysis on the feasibility of utilizing LLM chatbots in diagnostic conversation. <cit.> proposed to simulate patient agent that integrates cognitive modeling with LLM, and applied this patient agent in cognitive behavior therapy (CBT) training. <cit.> built a planning and role-playing method to generate dialogue from clinical note, and proposed a dataset of synthetic patient-physician conversations. <cit.> introduced Memo2Demo framework which converts counseling report to counseling note and then applies it to generate conversations. <cit.> designed AIME framework which uses a self-play based simulated environment with automated feedback for diagnostic conversation generation. § METHODOLOGY The synthesis process of mental disorders diagnostic conversations is presented. 
The neuro-symbolic multi-agent framework features one-to-many patientcase-to-dialogue generation, indicating multiple generated conversations are based on the same patient case. As shown in Figure <ref>, the framework basically involves the interactions among a doctor agent, a patient agent, and a tool agent. All the agents are played by large language models (LLMs). The doctor agent controlled by a dynamic diagnosis tree leads the diagnostic topic shift of the whole conversation. The patient agent responds to the doctor agent based on the preprocessed patient case and fictitious patient experience generated by the tool agent. The tool agent is also responsible for the several symbolic operations of dynamic diagnosis tree. §.§ Patient Cases Preprocessing The quality of patient cases is vital to the diagnostic conversation synthesis. We cooperate with a pioneering psychiatric hospital and obtain over 1000 real cases of patients with mental disorders. All these patient cases have undergone data masking to prevent the leakage of sensitive personal information. The data masking process follows the standards below: (1) Private information of patients (e.g. name, date of birth, date of examination, etc.) is removed from the patient case. (2) Patient age is rounded to the nearest ten. For example, the age of a 24-year-old patient is 20 on the preprocessed patient case. (3) All the concrete locations are replaced with vague ones. The above preprocessing steps strictly follow the Chinese information security technology guide for health data security (GB/T 39725-2020). A complete cleaned patient case is illustrated in Appendix. After filtering repetitive or incomplete patient cases, the final version for diagnostic conversation simulation and dataset generation contains 1000 patient cases with age, gender, diagnosis and corresponding International Classification of Diseases (ICD-10) <cit.> code, chief complaint, history of present illness, important past medical history, family history, personal history, mental examination, and treatment. As shown in Figure <ref>, the patient case is structurized as key-value pairs. §.§ Fictitious Patient Experience Generation We perform one-to-many patientcase-to-dialogue generation, which indicates one patient case will be applied to generate multiple diagnostic conversations. One key factor contributes to the diversity of generated conversations is the patient experience. It specifically refers to the past experiences that directly or indirectly lead to the mental illness problem of patients in this paper. The diagnosis of mental disorders differs from other illnesses in that it mainly depends on the conversations between psychiatrists and patients instead of physiological indices. Psychiatrists provide diagnosis result and treatment based on the acquired symptoms of patients during communicating. As a result, if the correspondence between symptoms and diagnosis can be assured, the correctness and quality of the synthesized diagnostic conversation is guaranteed and is not affected by detailed patient experience. In this sense, it's feasible to generate multiple patient experiences with one patient case for synthesizing multiple diagnostic conversations. Large language model (LLM) is applied to generate fictitious patient experience. 
To avoid the counterfactual conflicts between fictitious patient experience and true patient case, gender, age, work and diagnosis (Dx) information from one patient case is extracted and serves as patient persona in the prompt for generating patient experience. Persona = Prompt(Gender, Age, Work, Dx) In Equation (<ref>), the function Prompt indicates concatenating keywords into proper prompt. Next, we build knowledge graphs containing time, people, and concrete event that might cause mental disorders according to different patient age and gender. The example in Figure <ref> shows the predefined knowledge graph for 20-year-old female. The triplet (Time, People, Event) is randomly selected from the graph for fictitious experience generation. FicExp = Prompt(Time, People, Event) The final patient experience (FicExp) is generated through LLM by combining patient persona. FicExp = LLM(Prompt(Persona, FicExp)) §.§ Neuro-Symbolic Dynamic Diagnosis Tree To imitate conversations in real scenarios where the psychiatrist leads the entire diagnostic process, we design a neuro-symbolic dynamic diagnosis tree to achieve diagnostic topic shift and controllable doctor response generation. The dynamic diagnosis tree consists of a symptom inquiry tree and an experience inquiry tree. As shown in the example of Figure <ref>, the symptom inquiry tree is fixed and built according to the Structured Clinical Interview for DSM-5 (SCID-5) <cit.> and guidance from professional psychiatrists. It aims to cover inquiries about all the relevant symptoms to arrive at the final diagnosis of a patient. Considering the gender and age differences, the symptom inquiry tree is specially designed for male and female, teenagers (people under 20), adults (people aged between 30 and 50) and elders (people over 60). The example in Figure <ref> shows the symptom inquiry tree for female teenager. The experience inquiry tree dynamically constructs itself based on patient's response regarding previous experiences and personal details. It is described as “dynamic” since each patient provides unique information about their background. A tool agent powered by LLM is responsible for parsing patient's response and creating corresponding topics which form the nodes of the experience inquiry tree. The parse process follows a depth-first manner. When the tool agent determines that the discussion around a specific topic is insufficient, it will keep parsing this topic to sub-topics until conversation around this topic is considered complete. Then it will move to the next parsed topic. The design of experience inquiry tree aims to establish deeper engagements with patients to facilitate diagnostic conversation. The neuro-symbolic dynamic diagnosis tree is managed by the tool agent and offers for guiding both the doctor agent and the patient agent. Some operations are implemented by LLM and some are by rules. We first define five data types: Text, LNode, Tree, Graph, Bool. Text refers to natural language text. LNode stands for the leaf node of a dynamic diagnosis tree, indicating diagnostic topic in conversation. Tree refers to the hierarchical tree structure with a set of connected nodes. Although LNode can be viewed as a special Tree or Text, we treat it as a separate data type for clearer expression. Graph specifically refers to the knowledge graph for fictitious patient experience generation as explained before. Bool is a boolean variable is either true or false. The operations for a doctor agent include: * (tr: Tree) → ln: LNode. 
To improve the diversity of synthesized diagnostic conversation, we design the following leaf node visiting rules: (1) The parent nodes of these leaf nodes, representing high-level concept of diagnostic topic, are visited in a predefined order. (2) The leaf nodes under corresponding parent node, representing low-level specific diagnostic topic, are randomly visited. Visited leaf nodes will not be accessed again. is responsible for implementing the above rules. It takes the whole dynamic diagnosis tree tr as input and outputs one random leaf node ln. This operation is implemented by rules. * (ln: LNode, t: Text) → b: Bool. It takes current diagnostic topic ln and dialogue history t around this topic as input, and decides whether conversation surrounding this topic should continue or end. This operation is implemented by LLM. * (tr: Tree) → b: Bool. If all the leaf nodes of the dynamic diagnosis tree tr are visited, the operation will return true which marks the end of the diagnostic process. Else it will return false. This operation is implemented by rules. * (t: Text) → tr: Tree. The operation is responsible for building the dynamic experience inquiry tree. It takes patient responses t containing experience information as input, and replaces the initial empty experience inquiry tree with a tree whose root node is t and leaf nodes are possible topics related to t. The output tr is the updated dynamic diagnosis tree. This operation is implemented by rules and LLM. * (t: Text, tr^(1): Tree) → tr^(2): Tree. As the diagnostic conversation progresses, some predefined topics may have already been discussed. detects these duplicated topics in dialogue history t and deletes them from the dynamic diagnosis tree to prevent repetitive conversation. It takes the original tree tr^(1) as input and outputs an edited tree tr^(2). This operation is implemented by rules and LLM. * (ln: LNode, t^(1): Text) → t^(2): Text. In diagnostic conversation of mental disorders, the main goal of psychiatrist is to acquire symptoms from patients. Empathetic dialogue is not a must in this process. However, it sometimes helps in the diagnostic process and has been adopted by some doctors clinically. If the psychiatrist is accustomed to perform empathetic dialogue in daily consultations (this is reflected through the predefined doctor prompt which will be explained in the next subsection), will takes current diagnostic topic ln, dialogue history t^(1) as input and outputs comforting response t^(3). This operation is implemented by LLM. * (ln: LNode) → t: Text. It takes a leaf node ln of the dynamic diagnosis tree as input and outputs proper prompt t for instructing patient agent to respond around topic ln. This operation is implemented by LLM. The operations for a patient agent include: * (t: Text) → b: Bool. The operation decides whether it's time to trigger the operation or not, based on the doctor's question and dialogue history t. This operation is implemented by rules and LLM. * (g: Graph, t^(1): Text) → t^(2): Text. The operation performs fictitious patient experience generation as detailedly described in the previous subsection. g is the predefined knowledge graph. t^(1) is the patient case and t^(2) is the integrated patient information with real patient case and fictitious experience. This operation is implemented by LLM. The usage of these operations during agents interaction will be introduced in the next subsection. 
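To make the interfaces above concrete, the following is a minimal Python sketch of the five data types and a few representative tool-agent operations. It is an illustration under our own naming assumptions: next_topic, should_end_topic, tree_exhausted, build_experience_tree, prune_duplicate_topics, and call_llm are placeholder identifiers rather than the framework's actual function names, and the LLM-backed operations are reduced to canned stubs so that the sketch runs on its own.

from dataclasses import dataclass, field
from typing import List, Optional
import random

Text = str    # natural-language text
Bool = bool   # boolean decision
Graph = dict  # knowledge graph: (age bracket, gender) -> list of (Time, People, Event) triplets

@dataclass
class LNode:
    """A leaf node of the dynamic diagnosis tree, i.e. one specific diagnostic topic."""
    topic: Text
    visited: bool = False

@dataclass
class Tree:
    """Parent nodes are high-level topics visited in a fixed order; their leaves are visited at random."""
    name: Text
    children: List["Tree"] = field(default_factory=list)
    leaves: List[LNode] = field(default_factory=list)

def call_llm(prompt: Text) -> Text:
    """Stand-in for the underlying chat model; returns a canned answer so the sketch is runnable."""
    return "no"

def next_topic(tr: Tree) -> Optional[LNode]:
    """(tr: Tree) -> ln: LNode, rule-based: parents in their predefined order, one unvisited leaf at random."""
    for parent in tr.children:
        unvisited = [ln for ln in parent.leaves if not ln.visited]
        if unvisited:
            ln = random.choice(unvisited)
            ln.visited = True
            return ln
    return None

def should_end_topic(ln: LNode, dial_hist: Text) -> Bool:
    """(ln, t) -> b, LLM-backed: has the conversation around this topic run its course?"""
    answer = call_llm(f"Topic: {ln.topic}\nHistory: {dial_hist}\nIs this topic finished? yes/no")
    return answer.strip().lower() == "yes"

def tree_exhausted(tr: Tree) -> Bool:
    """(tr) -> b, rule-based: true once every leaf has been visited, which ends the diagnosis."""
    return all(ln.visited for parent in tr.children for ln in parent.leaves)

def build_experience_tree(patient_resp: Text) -> Tree:
    """(t) -> tr, rules + LLM: turn the patient's account of past experiences into follow-up topics."""
    raw = call_llm(f"List follow-up topics, one per line, for: {patient_resp}")
    topics = [s.strip() for s in raw.splitlines() if s.strip()]
    return Tree("experience_inquiry", leaves=[LNode(t) for t in topics])

def prune_duplicate_topics(dial_hist: Text, tr: Tree) -> Tree:
    """(t, tr) -> tr, rules + LLM: drop predefined topics that the dialogue has already covered."""
    for parent in tr.children:
        parent.leaves = [ln for ln in parent.leaves
                         if call_llm(f"Already discussed '{ln.topic}'? yes/no\n{dial_hist}").strip().lower() != "yes"]
    return tr

The remaining operations, such as the comforting response, the topic-to-prompt conversion, and the patient-side experience fusion, would follow the same pattern: thin wrappers around call_llm that consume a leaf node, the dialogue history, and the patient information.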
§.§ Conversation Synthesis with Agents The diagnostic conversation is synthesized by a doctor agent and a patient agent through role-playing LLMs. The doctor agent is under the guidance of the dynamic diagnosis tree. Initially, the dynamic diagnosis tree checks whether the current diagnostic topic should end. If affirmative, the doctor agent will turn to the next topic and check whether the future topics have been included in previous conversations. If not, the doctor agent will keep communicating with the patient around current topic. Then, if the patient talks about personal experience, the doctor agent will build the experience inquiry tree based on the patient response. The doctor agent generates responses through operation (ln: LNode, t^(1): Text) → t^(2): Text, which takes current diagnostic topic ln and dialogue history t^(1) as input and outputs response t^(2). To further improve the diversity of generated conversations, we design different diagnosis habits for the doctor agent. The diagnostic habits contain age, gender, specialties, empathetic dialogue, diagnosis speed, explanation, and serve as persona prompt for the doctor agent. An example is shown in Figure <ref>. Specifically, the factors of empathetic dialogue and diagnosis speed exert huger effect on the doctor's response. If the doctor agent is accustomed to communicate empathetically, the operation introduced before will replace the operation for generation. If the diagnosis speed is set as fast, the doctor agent will speed up the diagnosis process, which leads to shorter conversations. As to the patient agent, since the doctor agent leads the diagnosis process, the patient agent is designed to passively respond to the doctor based on known knowledge, including the patient case and generated fictitious experience. The operation (ln: LNode, t^(1): Text, t^(2): Text) → t^(3): Text is responsible for patient response generation, which takes current diagnostic topic ln, dialogue history t^(1), and patient case information t^(2) as input and outputs proper response t^(3). If the patient agent determines to respond with personal experience under the control of dynamic diagnosis tree, fictitious patient experience will be fused into the patient case as input t^(2). The whole process of the multi-agent framework for diagnostic conversation simulation of mental disorders is detailedly shown through pseudo-code in Algorithm <ref> in the Appendix. § EXPERIMENT SETUP §.§ Implemention Details The MDD-5k dataset is generated through the neuro-symbolic multi-agent framework with 1000 real patient cases. 5 different fictitious patient experiences are generated by OpenAI <cit.> based on 1 patient case, which leads to a total of 5000 experiences corresponding to the 5000 conversations in the dataset. We also create 5 doctors with different diagnosis habits, and 1 doctor will be randomly picked for generating each conversation in the dataset. Since the patient cases are still under the ethical review of psychiatric hospital, we randomly select 20 available patient cases and generate 100 conversations by for evaluation. The other 4900 conversations are currently generated by <cit.> deployed on NVIDIA A100-80G GPUs locally. We will apply for generating the rest 4900 conversations when the ethical review finishes. §.§ Compared Datasets As the evaluators' native language is Chinese, only Chinese datasets are considered to ensure the quality of human evaluation. Three datasets are selected as compared baselines. 
* D^4 <cit.> is a Chinese dialogue dataset for depression diagnosis, which is conducted by collecting conversations between professional psychiatrists. * CPsyCounD <cit.> is a synthetic consultation dataset covering nine representative topics (e.g. marriage, education) and seven classic schools of psychological counseling (e.g. cognitive behavioral therapy). * Direct Role-playing: To test the effectiveness of our designed multi-agent diagnostic conversation simulation framework, we directly apply role-playing LLMs () and generate 100 conversations with the same patient cases and prompts as MDD-5k for evaluation. We haven't found any available open-source mental disorders diagnosis datasets besides D^4, so the consultation dataset CPsyCounD is chosen as baseline, which can also demonstrate the differences between psychological counseling and diagnostic conversation. Statistics of these datasets are shown in Table <ref>. We randomly select 100 samples from each dataset for evaluation. §.§ Evaluation Metrics Human evaluation is conducted to fairly assess the quality of different datasets. Specifically, we design seven major metrics encompassing five different perspectives: professionalism, communication, fluency, similarity and safety. Professionalism measures if the psychiatrist can effectively collect all the required patient symptoms for diagnosis. Communication measures the psychiatrist's communication skills and the patient's response, including (1) Can the psychiatrist proactively ask patient for gathering key information and establish effective communication with the patients so that they are willing to share more information (e.g. daily life, past experiences) related to the mental illness problems? (2) Can the patient engage in the diagnostic process and tell related information? Fluency contains two aspects of criterion: (1) Are the generated conversations fluent in terms of both sentence and topic flow? (2) Is there any repetitive content or topic in the conversation? Similarity measures how similar is the synthesized conversation to real scenarios. The evaluators are guided to score from 1 to 10. A higher score indicates better performance. Safety measures the leakage of private information (e.g. address). 0 means safe generation while 1 indicates privacy leakage. Five annotators are employed for the human evaluation. Three of the them are professional psychiatrists who have years of clinical experience. The other two annotators are experienced in mental health data processing. The evaluation is conducted in a double-blind manner. § RESULTS AND EVALUATION §.§ Statistical Analysis of MDD-5k The Figure <ref> shows detailed patient information of the MDD-5k dataset. 65% of the patients are female and 35% are male. About 90% of the patients are between 20 and 40 years old. 15% of the patients report a family history of mental disorders and 14% of the patients have relevant physical illness. Patients suffer from depressive disorder (F32) and anxiety disorders (F41) makes up of 75% conversations of the dataset. Specifically, 51% of the patients in depressive disorder are diagnosed with depressive state (F32.901), and 35% of the patients are diagnosed with depressive episode (F32.900). 47% of the anxiety disorder patients are diagnosed with anxiety and depression state (F41.200x002) and 26% are diagnosed with anxiety state (F41.101). We also show details of other disorders which accounts for 11% of the whole dataset in Figure <ref>(h). 
All the disease code follows standards in the second version of Chinese clinical classification of disease and codes. If a patient is diagnosed with multiple diseases, these diseases are counted separately. The statistical details of MDD-5k and other compared datasets are presented in Table <ref>. The MDD-5k dataset contains diagnostic conversations covering over 25 mental health illnesses. It includes 5000 dialogues, each comprising an averaged of 6906.8 Chinese words which is almost ten times the compared datasets. The average dialogue turns are 26.8, slightly longer than the 21.6 turns of D^4. It is also a labelled dataset with diagnosis result from professional psychiatrist as label for each conversation. Compared to the direct role-playing method without applying the multi-agent framework, the generated doctor response is about the same length. But the dialogue turns and patient response of MDD-5k are significantly longer, highlighting the effectiveness of our proposed framework in diagnostic conversation simulation. In the case study presented in Appendix, we show three complete samples of conversation with corresponding doctor persona, patient case, fictitious patient experience and the dynamic diagnosis tree. §.§ Human Evaluation The results of human evaluation are presented in Table <ref>. MDD-5k exhibits superior performance across six major metrics. The evaluation scores on professionalism and similarity are significantly higher than other datasets, suggesting that our synthesized diagnostic conversations can mirror real scenarios of diagnosis to some extent. The communication quality of both doctor and patient is also impressive. Despite these strengths, MDD-5k does include some repetitive content, occasionally leading to less fluent conversations. The D^4 dataset ranks second. It achieves relatively high score on communication and fluency evaluation. The biggest problem of D^4 is that its conversations are too brief and only include symptom inquiries and short responses. An ablation study is conducted. The performance of the direct role-playing method is significantly worse compared to the neuro-symbolic multi-agent framework, particularly in terms of fluency and communication skills. This finding confirms that directly applying large language models for diagnostic conversation generation will lead to poor outcomes. The evaluation also shows the distinct differences between diagnostic conversation and psychological counseling. The evaluation scores of CPsyCounD are notably low, especially for the professionalism and similarity metric. Psychological counseling prioritizes comfort and healing with different therapies, while diagnostic conversation focuses on acquiring symptoms to arrive at a final diagnosis result. § CONCLUSION AND FUTURE WORK We design a neuro-symbolic multi-agent framework for synthesizing diagnostic conversation of mental disorders, and apply it for building the first and largest open-source Chinese mental disorders diagnosis dataset with diagnosis results as labels. The framework features controllable one-to-many patientcase-to-dialogue generation. Conversation between a doctor agent and a patient agent is guided by a dynamic diagnosis tree. We also apply several techniques to improve the diversity of generated conversations. Human evaluation shows the quality of the proposed MDD-5k dataset exceeds existing relevant datasets on seven indicators. 
The MDD-5k dataset is believed to contribute to a wide range of downstream tasks like mental disorders classification, mental disorders diagnosis assistant training, etc. The primary limitations of this work lie in three points: (1) The discrepancy between synthesized conversations and actual medical diagnostics remains a significant challenge. Large language models often struggle to interpret the full meaning of patient responses when they encapsulate diverse information aspects, consequently leading to redundant symptom inquiries. We are exploring various prompt strategies to mitigate this issue. (2) We mainly design dynamic diagnosis tree for depression (F32), anxiety (F41), sleep disorders (F51), childhood emotional disorder (F98), and unspecified mood disorder (F39), which covers over 85% conversations of MDD-5k. Nevertheless, some mental health conditions (e.g. obsessive-compulsive disorder (F42)) remain inadequately addressed, resulting in sub-optimal synthesized diagnostic conversation. Efforts are underway to expand our synthesis frameworks by designing more diagnosis trees to encompass a broader spectrum of mental disorders. (3) Only Chinese version of the MDD-5k dataset is proposed. We plan to translate it into English in the future. § ETHICS STATEMENT The collection of patient cases was conducted at the Shanghai Mental Health Center. All patients were informed that their information would be collected and used exclusively for research purposes. As detailed in the Patient Cases Preprocessing section, all data masking procedures strictly follow the Chinese information security technology guidelines for health data security (GB/T 39725-2020). Currently, the patient cases and the synthesized diagnostic conversation dataset, MDD-5k, are undergoing an ethics review at the Shanghai Mental Health Center. We plan to release the MDD-5k dataset for research purposes only after the ethics review finishes. To prevent any potential privacy data leakage, all experiments are conducted on servers located within the Shanghai Mental Health Center. § ACKNOWLEDGMENTS We are grateful for the GPU resources and accessible OpenAI API key provided by Shanda Group. Chen Frontier Lab for AI and Mental Health, Tianqiao and Chrissy Chen Institute leads the project of collecting mental disorders patient cases by cooperating with Shanghai Mental Health Center. We also appreciate the support and discussion from psychiatrists in Shanghai Mental Health Center. § PSEUDO-CODE OF THE PROPOSED FRAMEWORK In this section, we detailed explain the multi-agent framework for diagnostic conversation simulation through pseudo-code. As illustrated in Algorithm <ref>, the doctor agent, patient agent and tool agent are initialized with large language models. The whole process is guided by the tool agent. When the diagnostic conversation finishes, the diagnosis result from patient case will be appended to the dialogue history as label. The neuro-symbolic multi-agent framework for synthesizing diagnostic conversation of mental disorders Initialize: DocAgent, PatAgent, ToolAgent with LLM. ddt: Tree with predefined dynamic diagnosis tree. exp_kg: Graph with predefined knowledge graph. cur_topic: LNode with None. dial_hist, topic_hist: Text with empty list. pat_info: Text with patient case. 
treatment: Text with treatment from patient case not ToolAgent.(ddt) cur_topic is None cur_topic ← ToolAgent.(ddt) doc_resp ← DocAgent.(ToolAgent.(cur_topic), dial_hist) dial_hist.append(doc_resp), topic_hist.append(doc_resp) ToolAgent.(dial_hist) pat_info ← ToolAgent.(exp_kg, pat_info) pat_resp ← PatAgent.(ToolAgent.(cur_topic), dial_hist, pat_info) dial_hist.append(pat_resp), topic_hist.append(pat_resp) ToolAgent.(cur_topic, topic_hist) ddt ← ToolAgent.(topic_hist, ddt) topic_hist ← [], cur_topic ← ToolAgent.(ddt) cur_topic = experience_inquiry_treeddt ← ToolAgent.(dial_hist), cur_topic ← ToolAgent.(ddt) doc_resp ← DocAgent.(ToolAgent.(cur_topic), dial_hist) dial_hist.append(doc_resp), topic_hist.append(doc_resp) ToolAgent.(dial_hist) pat_info ← ToolAgent.(exp_kg, pat_info) pat_resp ← PatAgent.(ToolAgent.(cur_topic), dial_hist, pat_info) dial_hist.append(pat_resp), topic_hist.append(pat_resp) doc_resp ← DocAgent.(ToolAgent.(cur_topic), dial_hist) dial_hist.append(doc_resp), topic_hist.append(doc_resp) ToolAgent.(dial_hist) pat_info ← ToolAgent.(exp_kg, pat_info) pat_resp ← PatAgent.(ToolAgent.(cur_topic), dial_hist, pat_info) dial_hist.append(pat_resp), topic_hist.append(pat_resp) doc_resp ← DocAgent.(ToolAgent.(cur_topic), dial_hist) dial_hist.append(doc_resp), topic_hist.append(doc_resp) ToolAgent.(dial_hist) pat_info ← ToolAgent.(exp_kg, pat_info) pat_resp ← PatAgent.(ToolAgent.(cur_topic), dial_hist, pat_info) dial_hist.append(pat_resp), topic_hist.append(pat_resp) dial_hist.append(treatment) dial_hist § CASE STUDY We present three cases of the synthesized diagnostic conversations in MDD-5k, as well as all the material for carrying out the conversation simulation, including patient case, doctor prompt, diagnostic topic shift, and generated fictitious patient experience. Diagnostic conversation 1 and 2 are generated with the same patient case 1 to highlight our one-to-many patientcase-to-dialogue framework is capable of generating diverse conversation. Dialogue conversation 3 is generate with another patient case 2. The red words are diagnosis result, namely the label for each conversation. Both Chinese and English samples are provided. As to the Chinese version, Figure <ref>, Figure <ref> and Figure <ref> show the patient cases and other related material. Figure <ref> to Figure <ref> show diagnostic conversation 1. Figure <ref> and Figure <ref> show diagnostic conversation 2. Figure <ref> to Figure <ref> show diagnostic conversation 3. As to the English version, Figure <ref> and Figure <ref>, Figure <ref> and Figure <ref>, Figure <ref> and Figure <ref> show the patient cases and other related material. Figure <ref> to Figure <ref> show diagnostic conversation 1. Figure <ref> to Figure <ref> show diagnostic conversation 2. Figure <ref> to Figure <ref> show diagnostic conversation 3. § FEEDBACK FROM PSYCHIATRISTS During human evaluation, psychiatrists also provide us feedback for the synthesized diagnostic conversation. The discussion between professional psychiatrists and us is listed. * The patient response is so “sound and perfect”. In real diagnostic conversation, many mental disorders patients struggle to respond to the psychiatrists questions. For this issue, we are not sure whether a perfect patient answer or a poor but similar to real situation answer will benefit from the perspective of natural language processing. * The proportion of patients suffer from major depressive disorder is too small. The number of pediatric patients is also small. * Lack the role of companion (e.g. 
parent), which is quite common in real scenarios. This actually inspires us to design a multi-party dialogue system for dealing with diagnostic conversation simulation in the future. * Response from the doctor agent lacks proper explanation of symptoms to the patient. This is attributable to the poor mental health domain knowledge of large language models. Retrieval-augmented generation (RAG) can be applied to alleviate this problem in the future. * The diagnostic process is not detailed enough. For example, the doctor agent merely asks whether the patient can sleep well in the evening when inquiring about sleep-related symptoms. More details, such as insomnia onset time, frequency, and factors of aggravation or relief, should be asked about. However, the patient case contains rather limited information, so designing such detailed questions might lead to fictional replies from large language models. All the feedback is of great significance for future work in this domain.
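As a rough companion to the pseudo-code of Algorithm 1 in the appendix above, the following compressed Python rendering shows the skeleton of the synthesis loop. It is a sketch under our own naming assumptions: chat and synthesize_dialogue are placeholder names, the tool-agent bookkeeping is collapsed into comments, the chat function is a stub rather than a real LLM call, and the demo case below is invented for illustration rather than taken from MDD-5k.

from typing import Dict, List

def chat(role: str, prompt: str) -> str:
    """Stand-in for one role-played LLM turn (doctor, patient, or tool agent)."""
    return f"[{role}] " + prompt[:60]

def synthesize_dialogue(patient_case: Dict[str, str], topics: List[str]) -> List[str]:
    """One diagnostic conversation from one (patient case, doctor persona, topic order) draw."""
    dial_hist: List[str] = []
    for topic in topics:  # stands in for next_topic / tree_exhausted over the dynamic diagnosis tree
        doctor_turn = chat("doctor", f"Ask the patient about '{topic}', given history {dial_hist}")
        dial_hist.append(doctor_turn)
        patient_turn = chat("patient", f"Answer from the case {patient_case} (and, when asked, "
                                       f"from a fictitious past experience): {doctor_turn}")
        dial_hist.append(patient_turn)
        # The tool agent would now run should_end_topic, build_experience_tree and
        # prune_duplicate_topics before the doctor agent moves on to the next topic.
    dial_hist.append("label: " + patient_case["diagnosis"])  # diagnosis result appended as the label
    return dial_hist

# Toy usage with an invented mini-case (not a real record from the dataset):
demo_case = {"age": "20", "gender": "female", "diagnosis": "F32.901 depressive state"}
print("\n".join(synthesize_dialogue(demo_case, ["mood", "sleep", "past experience"])))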
http://arxiv.org/abs/2408.10977v1
20240820161134
A point-variety incidence theorem over finite fields, and its applications
[ "Xiangliang Kong", "Itzhak Tamo" ]
math.CO
[ "math.CO", "05B30, 51E30" ]
http://arxiv.org/abs/2408.11324v1
20240821041426
HITS: High-coverage LLM-based Unit Test Generation via Method Slicing
[ "Zejun Wang", "Kaibo Liu", "Ge Li", "Zhi Jin" ]
cs.SE
[ "cs.SE" ]
http://arxiv.org/abs/2408.12114v2
20240822035948
SPARK: Multi-Vision Sensor Perception and Reasoning Benchmark for Large-scale Vision-Language Models
[ "Youngjoon Yu", "Sangyun Chung", "Byung-Kwan Lee", "Yong Man Ro" ]
cs.CV
[ "cs.CV" ]
§ ABSTRACT Large-scale Vision-Language Models (LVLMs) have significantly advanced with text-aligned vision inputs. They have made remarkable progress in computer vision tasks by aligning text modality with vision inputs. There are also endeavors to incorporate multi-vision sensors beyond RGB, including thermal, depth, and medical X-ray images. However, we observe that current LVLMs view images taken from multi-vision sensors as if they were in the same RGB domain without considering the physical characteristics of multi-vision sensors. They fail to convey the fundamental multi-vision sensor information from the dataset and the corresponding contextual knowledge properly. Consequently, alignment between the information from the actual physical environment and the text is not achieved correctly, making it difficult to answer complex sensor-related questions that consider the physical environment. In this paper, we aim to establish a multi-vision Sensor Perception And Reasoning benchmarK called SPARK that can reduce the fundamental multi-vision sensor information gap between images and multi-vision sensors. We generated 6,248 vision-language test samples to investigate multi-vision sensory perception and multi-vision sensory reasoning on physical sensor knowledge proficiency across different formats, covering different types of sensor-related questions. We utilized these samples to assess ten leading LVLMs. The results showed that most models displayed deficiencies in multi-vision sensory reasoning to varying extents. Codes and data are available at https://github.com/top-yun/SPARK § INTRODUCTION In recent days, Large-scale Vision-Language Models (LVLMs) have achieved significant breakthroughs in areas such as visual dialogue <cit.>, video analysis <cit.>, and document understanding <cit.>, establishing themselves as critical tools in the pursuit of artificial general intelligence (AGI). These models function similarly to the human brain by processing multimodal information and generating sophisticated inferences. For instance, the latest LVLMs, like OpenAI's GPT-4o <cit.>, have exhibited exceptional reasoning abilities that not only rival but in some cases exceed human performance. One emerging concept in modern AI research gaining significant attention is the development of large vision language models (LVLMs) capable of handling a variety of multimodal inputs, surpassing the capabilities of previous large language models (LLMs). LVLMs can process diverse forms of data simultaneously, including images, videos, and text <cit.>. 
This ability also allows them to use multi-vision sensor data as input, including thermal sensors, depth sensors, and medical imaging <cit.>. To fully harness the potential of LVLMs, recent research has focused on effectively integrating various multi-vision sensor data to develop more sophisticated and practical AI systems for the real world. However, despite the remarkable advancements in LVLM models, significant challenges still remain in fully utilizing multi-vision sensors. LVLMs often overlook the nuances of the physical properties of individual vision sensors. Instead, they tend to make judgments based on prior visual or linguistic information from images they have learned using low-level features in two-dimensional data. This results in the models recognizing only superficial patterns in image inputs, missing the underlying logical structures or contextual understanding. When identifying specific objects in an image input, a model might rely on patterns learned from similar-looking images rather than considering the actual physical properties of the multi-vision sensors used to capture the image. This can hinder accurate identification and a deep understanding of the input images in fields where the LVLM's decision-making is crucial such as autonomous driving <cit.>, security systems <cit.>, and medical image diagnosis <cit.>. We evaluate the behavior of the recent LVLMs using multi-vision sensor images as input in Figure 1. The performance of sensory reasoning, which we devised to assess the understanding of fundamental knowledge of multi-vision sensors in the real world, significantly drops across different multi-vision sensors such as thermal infrared, depth, and X-ray (XR) images. This highlights the challenges that LVLMs face in accurately interpreting multi-vision sensor data and making correct inferences based on the physical properties of sensors. Additionally, from the interaction example shown below in Figure 1, while the LVLM can accurately identify the vision sensor used to capture the image for a relatively simple question, it struggles with understanding the actual purpose or context of the image in the sensor-related, more complicated questions. This indicates that current LVLMs have difficulty in understanding the fundamental knowledge of physical vision sensors beyond what the image looks like. For example, as illustrated in Figure 1, when humans look at a photograph of an X-ray medical image, they interpret it deeply, drawing upon their knowledge base and their physical understanding of the human body beyond the X-ray image itself. Despite never having seen their internal organs and the structure of bones with the naked eye, humans can comprehend the image through scientific contextual knowledge and their inherent understanding of the physical world. In contrast, current LVLMs try to understand the inside of the human body based solely on the two-dimensional data they have been trained on, revealing their limitations in fully grasping the physical environment of the real world. Therefore, establishing a comprehensive evaluation benchmark is necessary before LVLMs are implemented in critical and sensitive real-world applications. However, the assessment of Large Vision-Language Models (LVLMs) has significantly lagged behind their rapid development. Several initiatives are striving to close this gap by introducing a variety of multimodal evaluation benchmarks. Notable examples include MME <cit.>, MMBench <cit.>, LVLM-eHub <cit.>, and SEED-Bench <cit.>. 
These benchmarks aim to define key dimensions of multimodal capabilities and provide corresponding test samples. But, they cover a relatively narrow range of multimodal tasks, primarily focusing on fundamental abilities such as visual recognition and OCR. In this paper, to handle the aforementioned challenge, we design the SPARK benchmark to evaluate multi-vision input LVLMs on two fronts: multi-vision perception and multi-vision reasoning. Multi-vision perception pertains to the information needed, which measures the LVLM's effectiveness in satisfying visual perception needs. Multi-vision reasoning measures the LVLM's ability to base its responses on fundamental information from the provided sensor knowledge. To be specific, we generated 6,248 vision-language test samples to investigate multi-vision sensory perception and reasoning related to physical sensor knowledge proficiency, covering 6 types of multi-vision sensory instruction tasks across 2 different question-and-answer formats. We used these samples to assess 10 leading large-scale vision language models. The experiment results validate that most LVLMs displayed deficiencies in sensory reasoning to varying extents. In summary, the contributions of this work are as follows: * To the best of our knowledge, we first reveal the incapability of current LVLMs, which suffer from limited multi-vision sensory reasoning across different multi-vision sensors due to an absence of fundamental understanding of sensors in the physical world. * We propose a novel benchmark, SPARK, to rigorously test and evaluate the capabilities of LVLMs in understanding sensory knowledge, providing a comprehensive framework for assessing their performance. * We evaluated a total of 10 state-of-the-art LVLMs using our SPARK benchmark, which is designed to rigorously assess the capability of the LVLMs in handling fundamental knowledge related to multi-vision sensors. § RELATED WORK Large-scale Vison-Language Models. Recently, there has been significant interest in visual language multimodal learning. Visual language models such as LLAVA <cit.>, CollaVO  <cit.>, MoAI <cit.>, TroL <cit.>, Meteor <cit.>, IXC2.5 <cit.>, and QwenVL  <cit.> have shown impressive performance in a variety of downstream tasks. In addition, to obtain richer contextual information, LVLMs have developed the capability to handle multimodal inputs. introduces CogVLM, an advanced visual language foundation multimodal model that integrates a trainable visual expert module with a pretrained language model. InternVL2 <cit.> is an open-source multimodal large language model that bridges the gap between open-source and commercial models by enhancing visual understanding, dynamic high-resolution processing, and bilingual dataset quality. GPT4o <cit.> possesses advanced multimodal capabilities, allowing it to process and generate diverse multimodalities. This enables the model to understand and create content that integrates visual and textual information, making it suitable for a wide range of applications that require various modalities. Consequently, many LVLMs have emerged that take multi-vision sensor images as input. presents ImageBind, which creates a joint embedding space across multi-vision sensors including depth and thermal sensor data. PandaGPT <cit.> is a LVLM that integrates multimodal encoders and large language models to enable multi-vision and auditory instruction-following capabilities, performing complex tasks. 
However, relatively less attention has been devoted to whether LVLMs truly understand the physical meanings of multi-vision sensors used to capture the input image. Evaluation Benchmark for LVLMs. Numerous studies have leveraged existing vision-language datasets to develop benchmarks for assessing the reliability of LVLMs. MME  <cit.> includes 14 sub-tasks based on publicly available images with manually created annotations, evaluating both the recognition and perception capabilities of LVLMs through yes/no question answering. SEED-benchmark <cit.> designed to evaluate the generative comprehension capabilities of multimodal LVLM through human-annotated multi-choice questions across 12 evaluation dimensions. Other comparable benchmarks include LVLM-eHub <cit.>, MM-Vet <cit.>, and MMBench <cit.>. Additionally, there are benchmarks aimed at assessing specific target properties of LVLMs. POPE <cit.> focuses on evaluating object hallucination by asking yes/no questions about the presence of objects in the input image. M-HalDetect <cit.> introduces hallucination tasks using human-annotated labels for sentence-level classification. Unlike those previous evaluation benchmarks, the proposed SPARK is designed to rigorously test and evaluate the capabilities of understanding the physical meaning of multi-vision sensors. § EVALUATION AND INSTRUCTION DESIGN There are multiple formats available for evaluating the multi-sensor perception and reasoning capabilities of LVLM, each with distinct advantages and limitations. Free-form questions <cit.> offer flexibility and ease of creation but demand labor-intensive human assessment and present challenges in maintaining consistent scoring. Similarity-based assessment are less resource-intensive but can be significantly affected by biases present in the similarity metrics. Yes-or-No questions <cit.> are straightforward and easier to assess, but they may oversimplify the evaluation, failing to capture the full extent of LVLM's comprehension of multi-vision reasoning ability. First of all, to enable quantitative performance metrics for multi-vision perception, the instruction design aims to elicit “yes" or “no" responses from the model. This binary response format simplifies the evaluation process, allowing for clear, objective performance measurement. As a result, each instruction comprises two parts: a brief, targeted question and an explanation corresponding to either “yes" or “no." This structure ensures that the LVLM's comprehension can be precisely assessed. For every test image, two instructions are manually crafted, each posing a different question to the model. These questions are designed to test different aspects of the image's content and context. The rationale behind this approach is to ensure that the model's answers are not based on chance. When the LVLMs correctly answer both questions, it demonstrates an understanding of the image and its related information, rather than merely guessing. In addition, we also introduce a multi-vision sensor understanding evaluation design based on multi-choice questions. This format presents questions with a set of predetermined choices, allowing respondents to select the correct options. The multi-choice question format is advantageous for several reasons. First, it enables efficient grading and analysis of responses, as answers can be objectively evaluated against a fixed set of possible responses. Also, the multi-choice question format allows for precise control over the difficulty level of the questions. 
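Concretely, answers in both formats can be scored by simple exact matching. The sketch below is a minimal illustration of such a scoring routine, not the benchmark's actual code; the field names (image_id, gold, prediction) and the answer-normalization rule are our own assumptions. It reports per-question accuracy and, for the paired Yes-or-No design described above, the stricter image-level check in which both crafted questions must be answered correctly.

import re

def normalize(response):
    """Map a free-form model response to an option label ('a'-'d'); Yes/No is
    treated as the two-option special case (A) Yes / (B) No."""
    text = response.strip().lower()
    if text.startswith("yes"):
        return "a"
    if text.startswith("no"):
        return "b"
    match = re.match(r"\(?([abcd])[\)\.\s:]", text + " ")
    return match.group(1) if match else ""      # unparseable answers count as wrong

def score(samples):
    """samples: iterable of dicts with hypothetical keys 'image_id',
    'gold' (option letter) and 'prediction' (raw model text)."""
    results = [(s["image_id"], normalize(s["prediction"]) == s["gold"].lower())
               for s in samples]
    accuracy = sum(ok for _, ok in results) / len(results)
    per_image = {}
    for image_id, ok in results:
        per_image.setdefault(image_id, []).append(ok)
    # Stricter view: an image is credited only if both paired questions are correct.
    pair_rate = sum(all(v) for v in per_image.values()) / len(per_image)
    return accuracy, pair_rate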
By varying the validity of each option, we can create questions that test different levels of understanding and comprehension. For example, including more plausible but incorrect options can increase the difficulty, ensuring that only models with a deeper understanding can consistently choose the correct answer. This flexibility in question design makes multi-choice questions a powerful tool for assessing the nuanced capabilities of multi-vision sensor systems. Furthermore, the Yes-or-No format can be seen as a specific case of multi-choice question, where the options are limited to “(A) Yes" and “(B) No." This simplification retains the benefits of the multi-choice question format while providing a straightforward way to measure binary decisions. Using accuracy as the evaluation metric for both multi-choice questions and Yes-or-No questions ensures consistency in how we assess the model's performance. Accuracy, defined as the proportion of correctly answered questions, provides a clear and intuitive measure of how well the model understands the given inputs. The adoption of the multi-choice question based evaluation design supports the development of a more comprehensive evaluation framework. The incorporation of both simple Yes-or-No questions and more complex multi-choice questions ensures that the evaluation covers both basic and advanced aspects of LVLM's understanding. § EVALUATION ON MULTI-VISION SENSOR TASKS Our instruction dataset was collected according to two multi-vision tasks: multi-vision perception and multi-vision reasoning. As illustrated in Figure 2, first of all, multi-vision perception focuses on the LVLM's ability to accurately interpret and identify objects, scenes, and relationships from various multi-vision inputs. This involves tasks such as object detection, image classification, scene recognition, and relationship detection, where the model must process and understand the content of images from multiple vision sensors. The goal is to ensure that the model can consistently recognize and categorize visual elements across different contexts from different vision sensors. On the other hand, multi-vision reasoning requires the model to not only perceive but also make inferences based on the multi-vision sensory data. This involves higher-order cognitive tasks such as understanding relationships between objects, prediction of intent of sensor use, and understanding sensor knowledge. For instance, the model might need to infer the cause of an event depicted in an image sequence or predict the purpose of a captured image. Multi-vision reasoning tests the LVLM's capability to integrate multi-vision information with contextual sensory knowledge, making logical deductions that go beyond mere perception. §.§ Multi-vision Perception Multi-vision perception is the foundational process by which Large Vision-Language Models (LVLMs) analyze images captured by various multi-vision sensors, including RGB, thermal, depth, and X-ray images. This process involves recognizing and interpreting the fundamental elements within each visual input based on cognitive science <cit.>. * Existence: LVLMs can identify and list common objects present in the image, such as people, vehicles, animals, furniture, and so on. * Count: LVLMs can count the number of identified objects or entities, providing a quantitative understanding of the scene. * Position: LVLMs can determine the spatial arrangement of objects within the image, noting their positions relative to one another. 
* General Description: LVLMs are also equipped to generate nuanced descriptions of the overall scene depicted in an image. They can articulate what is happening, identify objects, and provide factual information that enhances the understanding of the image itself. At the perception stage, LVLMs focus on extracting essential information directly from raw image data captured by multi-vision sensors. This foundational perception is critical for all subsequent reasoning tasks, serving as the foundation upon which more complex interpretations are built. §.§ Multi-vision Reasoning Multi-vision reasoning is where LVLMs truly showcase their advanced capabilities. Beyond simply perceiving images, LVLMs can engage in logical reasoning to derive deeper insights and make informed decisions. This distinguishes the recent LVLMs from traditional computer vision models, which primarily focus on understanding and interacting with the real world. * Contextual reasoning: LVLMs can utilize fundamental knowledge and contextual clues to make judgments about a given scenario. This type of reasoning allows LVLMs to refer to the underlying basis of physical sensor knowledge and ensure that the reasoning process remains consistent with the context provided by the image and the associated information. * Sensory reasoning: A more complex reasoning ability requires LVLMs to map 2D image data to the physical meanings associated with different multi-vision sensors. This process not only involves processing the raw data from images but also integrates it with contextual information about the underlying physical sensor knowledge in the real world. By combining fundamental sensor information, LVLMs can derive conclusions that are both accurate and contextually relevant. Sensory reasoning requires a deep understanding of the knowledge underlying the physical meaning of multi-vision sensor data. This goes beyond surface-level image recognition, demanding that LVLMs make sense of the sensor data in a way that reflects real-world physics and usage scenarios. Next, we integrate both visual and textual inputs into GPT-4, guided by meticulously crafted prompts. These prompts are specifically designed to align with each evaluation task, ensuring that the generated questions are both relevant and focused. To further enhance the quality of the benchmark, we introduce an additional filtering step. In the final stages of development, human annotators play a crucial role, selecting the correct answers and categorizing the questions according to their respective evaluation tasks. § EXPERIMENT §.§ Implementation Details §.§.§ Dataset Collection We collect six subsets for each multi-sensor vision task type, together with 4k images and 6k unique questions and answers. These instructions are built from five public datasets: MS-COCO <cit.>, M^3FD <cit.>, Dog&People <cit.>, RGB-D scene dataset <cit.>, and UNIFESP X-ray Body Part Classifier Competition dataset <cit.>. The MS-COCO dataset is a commonly used object detection dataset that contains RGB images with fine-grained object bounding boxes, categories, and attribute annotations. We sampled 1.2k images from validation dataset. Furthermore, for thermal sensor datasets, we sampled 1.2k images from two different thermal datasets (M^3FD and Dog&People) in order to collect a thermal dataset covering the widest possible range of diverse situations and objects. 
Additionally, we sampled 1.2k images from RGB-D scene dataset <cit.> for depth sensor because it covers a variety of indoor and outdoor scenes. Finally, we sampled 0.4k images from the public X-ray body part dataset for the XR sensor dataset because of the diversity of multiple human body parts. We described the overall distribution of data sources of the SPARK benchmark in Figure 3. §.§.§ Large Vision Language Models In our evaluation, we selected 10 state-of-the-art (SOTA) Large Vision-Language Models (LVLMs) that represent the leading edge of current research. These models were chosen to provide a comprehensive assessment of the capabilities and performance of both open-source and closed-source LVLMs across a variety of multi-vision sensor tasks on the SPARK benchmark. * Open source: CogVLM-Chat <cit.>, LLAVA-v1.5-7B <cit.>, InternVL2-8B <cit.>, TroL-7B <cit.>, Meteor-7B <cit.>, IXC2.5-VL-7B <cit.>, Qwen-VL-Chat <cit.> * Closed source: GPT-4o <cit.>, Claude 3.5 Sonnet <cit.>, Gemini-Pro1.5 <cit.> §.§ Experiment Result In this section, we conduct a comprehensive evaluation using the proposed SPARK benchmark, a rigorous framework designed to assess the capabilities of Large Vision-Language Models (LVLMs) in two target tasks: Multi-vision Perception and Multi-vision Reasoning. Multi-vision Perception presents the averaged performance on four dimensions for evaluating visual perception. Meanwhile, Multi-vision Reasoning demonstrates the averaged performance on two dimensions for evaluating the LVLMs' ability to understand and reason about multi-vision sensory data. As shown in Table 1, the evaluation revealed that performance varies significantly depending on the type of multi-vision sensor used to capture the input images. LVLMs generally perform well in simple Multi-vision perception tasks such as generating general descriptions, but more complex reasoning tasks like Multi-vision Reasoning reveal significant differences in model capabilities. Since they mainly trained with general RGB images, the performance of multi-vision perception and reasoning in RGB sensor is consistently maintained at high levels. However, the performance of LVLMs drops noticeably when dealing with images captured using thermal, depth, and X-ray(XR) sensors. This decline is particularly evident in the Multi-vision Reasoning task, especially in Sensory Reasoning. Sensory Reasoning requires LVLMs to not only recognize and describe images but also to understand the physical principles underlying the sensor data. For example, interpreting thermal data involves understanding heat signatures, while depth data requires an understanding of the need for spatial geometry beyond simple 2D interpretation. The experiment demonstrates LVLMs' limited proficiency in interpreting and mapping sensor data to its physical meaning. Table 2 provides a clear comparison of the performance of various LVLMs across different multi-vision sensors and tasks. It highlights the strengths and weaknesses of each model, particularly the advantage that closed-source models have in maintaining high performance across more complex reasoning tasks with diverse vision sensor types. Considering the overall accuracy score (ALL), GPT-4o excels in the proposed SPARK benchmark. §.§ Ablation study In the previous section, we observed that LVLMs frequently struggle to accurately infer the purpose or context of an image when the data is sourced from multi-vision sensors other than RGB. 
However, as demonstrated in Figure 1, even when the input image lacks explicit information about the sensor type, LVLMs can still identify the sensor correctly. This suggests that while LVLMs have already acquired sensor-related knowledge through textual data, they face challenges in mapping fundamental knowledge to real-world scenarios. Thus, in Table 3, we conducted an ablation experiment on data-centric enhancement by adding sensor information as a text prompt at the beginning of the question (“This is a {Thermal, Depth, X-Ray} image.") and measured the sensory reasoning performance change. The experiment demonstrated that sensor information can enhance the reasoning capabilities of LVLMs, particularly for thermal and depth images, while XR data showed the least impact. This implies that LVLM models, including GPT-4o, are not fully utilizing the knowledge they already possess to understand multi-vision sensory data. § CONCLUSION In this study, we focus on evaluating the ability of Large Vision-Language Models (LVLMs) to understand and process multi-vision sensory inputs. As LVLMs are increasingly deployed in real-world applications, their ability to accurately interpret and reason about data from diverse vision sensors has become crucial. To address this, we propose an evaluation benchmark called SPARK, which generates instruction tuning samples aimed at specific physical sensor understanding in various question-and-answer formats. Through extensive experiments, we assess the performance of understanding sensory knowledge in the latest state-of-the-art LVLMs handling multi-vision input. We believe this approach, integrating a sensory knowledge annotated evaluation benchmark paves the way for promising future applications of LVLMs.
http://arxiv.org/abs/2408.11053v1
20240820175856
Revisiting VerilogEval: Newer LLMs, In-Context Learning, and Specification-to-RTL Tasks
[ "Nathaniel Pinckney", "Christopher Batten", "Mingjie Liu", "Haoxing Ren", "Brucek Khailany" ]
cs.SE
[ "cs.SE", "cs.AI" ]
Revisiting VerilogEval: Newer LLMs, In-Context Learning, and Specification-to-RTL Tasks Nathaniel Pinckney, Christopher Batten, Mingjie Liu, Haoxing Ren and Brucek Khailany NVIDIA Corporation, Santa Clara, CA; Cornell University, Ithaca, NY (Email: npinckney@nvidia.com) August 26, 2024 § ABSTRACT The application of large-language models (LLMs) to digital hardware code generation is an emerging field. Most LLMs are primarily trained on natural language and software code. Hardware code, such as Verilog, represents only a small portion of the training data and few hardware benchmarks exist. To address this gap, the open-source VerilogEval benchmark was released in 2023, providing a consistent evaluation framework for LLMs on code completion tasks. It was tested on state-of-the-art models at the time including GPT-3.5, GPT-4, and codegen-16b-verilog-sft. However, VerilogEval and other Verilog generation benchmarks lack failure analysis and, in present form, are not conducive to exploring prompting techniques. Also, since VerilogEval's release, both commercial and open-source models have seen continued development. In this work, we evaluate new commercial and open-source models of varying sizes (GPT-4 Turbo, Llama 3.1 8B/70B/405B, Llama 3 70B, Mistral Large, DeepSeek Coder 33B and 6.7B, CodeGemma 7B, and RTL-Coder) against an improved VerilogEval benchmark suite. We enhance VerilogEval's infrastructure and dataset by automatically classifying failures, introduce new prompts for supporting in-context learning (ICL) examples, and extend the supported tasks to specification-to-RTL translation. We find a measurable improvement in commercial state-of-the-art models, with GPT-4 Turbo achieving a 59% pass rate on specification-to-RTL tasks. We also study the performance of open-source and domain-specific models that have emerged since the original release of VerilogEval, and demonstrate that models can benefit substantially from ICL. We find that the recently-released Llama 3.1 405B achieves a pass rate of 58%, effectively matching that of GPT-4 Turbo, and that the much smaller domain-specific RTL-Coder 6.7B models achieve an impressive 37% pass rate. However, prompt engineering is key to achieving good pass rates, and varies widely with model and task. A benchmark infrastructure that allows for prompt engineering and failure analysis is key to continued model development and deployment. Index terms: Large Language Models, Hardware Description Languages, Verilog RTL Generation, Benchmark § INTRODUCTION Applications of large-language models (LLMs) to software coding have reached wide deployment, with examples such as GitHub Copilot <cit.>. Yet, applications of LLMs to hardware design are still in their infancy <cit.>. Only a small handful of Verilog code design benchmarks exist in the literature, including RTLLM <cit.>, VerilogEval <cit.>, VeriGen <cit.>, and most recently RTL-Repo <cit.>. While RTLLM benchmarked conversational specification-to-RTL generation performance, VerilogEval, VeriGen, and RTL-Repo are code completion benchmarks. Additionally, none of the benchmarks explore a model's generation performance using in-context learning <cit.> examples, nor do they provide a detailed way to inspect the reasons for a model's failure.
This work aims to address these limitations by extending VerilogEval <cit.> to support specification-to-RTL tasks in addition to the original code completion task. We also incorporate a variable number of in-context learning prompts, and provide a robust failure classification mechanism, to provide a more comprehensive evaluation framework for Verilog code generation tasks. The significance of this work lies in its potential to push LLM development forward for hardware design, through offering insights into model performance and the efficacy of prompt tuning, and to point out differences in generation quality across tasks. Even with similar problem statements and in-context learning examples, we find divergent responses by large-language models. This variability highlights the importance of understanding how different models respond to various prompts and contexts through the use of the benchmarks providing granular failure feedback. Moreover, we evaluate newer large-language models than those tested in the original VerilogEval paper, including GPT-4 Turbo <cit.>, open-source models like Llama 3.1 <cit.>, and domain-specific models such as RTL-Coder <cit.>. In short, we assess the latest state-of-the-art language models to determine the current frontier of LLM-based Verilog code generation while also evaluating the impact of prompt tuning. We find that recent open-source models are becoming competitive with last year's closed models, and that prompt tuning varies considerably across models. The following new features are part of the proposed benchmark infrastructure: * Specification-to-RTL task support: VerilogEval only supported code completion tasks, such as used in CoPilot<cit.>, while many models are tuned and deployed as instruction-tuned models<cit.>, with question and answer prompting. * In-context learning examples: No in-context learning (ICL) <cit.> examples were supported as part of the prompt in VerilogEval. * Failure classification: VerilogEval only reported pass/fail results of a benchmark problem, and did not give fine-grained feedback on failures. * Makefile-based evaluation environment: The original VerilogEval benchmark <cit.> used a monolithic dataset, whereas the proposed infrastructure uses a textfile approach. This allows for easier scaling while sweeping evaluation settings across more models than the original benchmark, and easier human inspection of the dataset. The improved VerilogEval benchmark is available publicly at <https://github.com/NVlabs/verilog-eval>. § BENCHMARK IMPROVEMENTS §.§ Specification-to-RTL Task Support The proposed benchmark supports both code completion and specification-to-RTL tasks to better match the instruction-tuning <cit.> of recent models. The full 156-problem dataset from VerilogEval is converted into specification-to-RTL prompting in this work. Code completion has the problem description in Verilog-compatible comments and always appends the module interface declaration to the end of the prompt. On the other hand, specification-to-RTL's prompt style is as a chat bot, with well-defined "Question" and "Answer" sections. The specification-to-RTL prompting is implemented in a manner similar to the Mostly Basic Python Problems (MBPP) benchmark<cit.> with and tags surrounding code blocks. Examples of these two styles can be found in listings <ref> and <ref> with only the highlighted code indicating the prompt styles. 
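To make the two prompt styles concrete, the following sketch assembles a code-completion prompt (problem description as Verilog comments followed by the module interface to be completed) and a specification-to-RTL prompt (chat-style Question/Answer turns with [BEGIN]/[DONE] tags), optionally prepending in-context learning examples. The wording of the templates is paraphrased from the listings in the next subsection; the helper functions and their signatures are ours, not the benchmark's actual API.

def to_comments(text):
    return "\n".join("// " + line for line in text.splitlines())

def code_completion_prompt(description, interface, shots=()):
    """shots: sequence of (example_description, example_interface, example_body)."""
    parts = []
    for ex_desc, ex_interface, ex_body in shots:          # ICL examples, if any
        parts.append(to_comments(ex_desc) + "\n" + ex_interface + "\n" + ex_body)
    parts.append(to_comments(description) + "\n" + interface)
    return "\n\n".join(parts)

def spec_to_rtl_prompt(description, shots=()):
    instructions = ("Enclose your code with [BEGIN] and [DONE]. Only output the code "
                    "snippet and do NOT output anything else.")
    parts = []
    for ex_desc, _ex_interface, ex_body in shots:         # ICL examples, if any
        parts.append("Question:\n" + ex_desc + "\n\n" + instructions
                     + "\n\nAnswer:\n[BEGIN]\n" + ex_body + "\n[DONE]")
    parts.append("Question:\n" + description + "\n\n" + instructions + "\n\nAnswer:")
    return "\n\n".join(parts)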
§.§ Support for In-Context Learning Examples In-context learning (ICL) was proposed by <cit.> to add examples of task questions and desired responses into the prompt context, so that an LLM can better respond to a given task. ICL is implemented through simple Verilog code examples, tailored for both code completion (Listing <ref>) and specification-to-RTL tasks (Listing <ref>). The listings contain the 1-shot examples used for both tasks, except that line width and whitespace were adjusted for printing. The examples were selected to be short and simple, while including a full module (from declaration to endmodule). Two additional examples for each task are added: a sequential incrementer similar to the first 1-shot example, and a basic finite-state machine for the third example. The number of shots is parameterized and can easily be swept to determine the sensitivity of a model's pass rate as ICL examples are added to the prompt. 1-shot includes only the combinational incrementer, 2-shot adds the sequential incrementer, and 3-shot includes all three examples in the context prompt.

Listing: The 1-shot in-context learning example for code completion tasks. The highlighted code is the prompt style.

// Implement the Verilog module based on the
// following description. Assume that signals
// are positive clock/clk triggered unless
// otherwise stated.
//
// The module should implement an incrementer
// which increments the input by one and
// writes the result to the output. Assume
// all values are encoded as two's complement
// binary numbers.

module TopModule
(
  input  logic [7:0] in_,
  output logic [7:0] out
);

  // Combinational logic

  assign out = in_ + 1;

endmodule

Listing: The 1-shot ICL example for specification-to-RTL tasks. The highlighted code is the prompt style.

Question:
Implement a hardware module named TopModule with the following interface. All input and output ports are one bit unless otherwise specified.

 - input  in_ (8 bits)
 - output out (8 bits)

The module should implement an incrementer which increments the input by one and writes the result to the output. Assume all values are encoded as two's complement binary numbers.

Enclose your code with [BEGIN] and [DONE]. Only output the code snippet and do NOT output anything else.

Answer:
[BEGIN]
module TopModule
(
  input  logic [7:0] in_,
  output logic [7:0] out
);

  // Combinational logic

  assign out = in_ + 1;

endmodule
[DONE]

§.§ Support for Failure Classification Failures of LLM-generated responses are automatically classified by broad reasons for failure, both Verilog compile-time errors and simulation runtime errors, such as incorrectly using a wire as a register, incorrect bit widths, and missing module interface definitions. This classification feature provides insight into the most common reasons for failures and how to mitigate poor code generation through prompt tuning. The classification is dependent on specific warnings and errors given by Icarus Verilog or the test harness. The failures are classified in Table <ref>. Classifications were developed by human inspection of common failure modes across the code completion benchmark. For example, LLMs were observed frequently mixing up the use of registers and wires.
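A minimal sketch of this kind of log-driven classifier is shown below: it scans the compile log and the testbench output for characteristic messages and assigns a single coarse category. The category names and message patterns are illustrative stand-ins; the benchmark's actual classification table and the exact messages emitted by Icarus Verilog and the test harness differ.

import re

# Illustrative (category, pattern) rules; first match wins.
COMPILE_RULES = [
    ("compile: wire used as reg",  r"not a valid l-?value"),
    ("compile: unknown module",    r"unknown module type"),
    ("compile: port/width issue",  r"(port|width).*(mismatch|does not match)"),
    ("compile: syntax error",      r"syntax error"),
]
RUNTIME_RULES = [
    ("runtime: output mismatch",   r"mismatch"),
    ("runtime: timeout",           r"time ?out"),
]

def classify_failure(compile_log, sim_log):
    """Return a coarse failure category; compile errors mask runtime errors."""
    for category, pattern in COMPILE_RULES:
        if re.search(pattern, compile_log, re.IGNORECASE):
            return category
    for category, pattern in RUNTIME_RULES:
        if re.search(pattern, sim_log, re.IGNORECASE):
            return category
    return "unclassified"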
Solutions in prompt tuning could vary: from adding prompt rules to use only wires on ports, to suggesting the use of SystemVerilog port types (obviating the immediate type confusion), to allowing the LLM to generate the interface entirely on its own (as in the case of specification-to-RTL generation, rather than code completion). By classifying failures, the impact of prompt changes on code generation performance can be directly observed and guided. §.§ Other Infrastructural Improvements The original VerilogEval benchmark contained all problems in a monolithic format. This was efficient to run, but inefficient to inspect manually using a text editor. In the improved benchmark, each problem was split into its own set of files, with problem prompts, module interfaces, and test benches. Autoconf <cit.> and GNU Make <cit.> were employed to configure a model evaluation build directory for a specific evaluation target, including the LLM to run, number of shots, number of samples, task to complete, and other parameters. For each problem, a resulting problem evaluation directory is created containing a log of the LLM prompt/responses, the generated Verilog file, and the Icarus Verilog output log. This infrastructure allows for scalable sweeps through the use of Make's parallel run feature, helps continue an evaluation run if it is interrupted, and allows for easy human inspection of the resulting collateral. § BENCHMARK EVALUATION We evaluate the following publicly available large-language models on the proposed benchmark: * OpenAI GPT-4 Turbo <cit.> (gpt-4-1106-preview) * OpenAI GPT-4 <cit.> (gpt-4-0613) * Mistral AI Mistral Large <cit.> * Meta Llama 3.1 405B <cit.> * Meta Llama 3.1 70B <cit.> * Meta Llama 3 70B <cit.> * Meta CodeLlama 70B <cit.> * Google CodeGemma 7B <cit.> * DeepSeek Coder 33B and 6.7B <cit.> * Meta Llama 3.1 8B <cit.> * RTL-Coder DeepSeek v1.1 6.7B <cit.>. The models span closed- and open-source licensing, a range of parameter sizes, and general-purpose to domain-specialized training. Model results were captured as both a 20-sample (n=20) high temperature (T=0.85, top_p=0.95) set and a 1-sample (n=1) low temperature (T=0.0, top_p=0.01) set. The 20-sample set is similar to the model parameters from VerilogEval <cit.>, which had a 20-sample set with temperature T=0.80. The graph in Figure <ref> illustrates the performance of various large-language models (LLMs) on code completion and specification-to-RTL translation tasks, as measured by the benchmark pass rate (pass@1 in <cit.>). Models are arranged along the x-axis by model size, with undisclosed model sizes on the right. The evaluation compares models with and without 1-shot in-context learning (ICL) examples, represented by arrows indicating the change in performance as 1-shot examples are added. For code completion tasks, GPT-4 Turbo achieves the highest pass rate at approximately 48%, surpassing the previously established state-of-the-art frontier of 43% for 0-shot by GPT-4 <cit.>. Further adding an ICL example in the 1-shot result leads to the highest performance yet at 58%. This highlights GPT-4 Turbo's robust improvement over GPT-4 for RTL generation tasks despite being a general-purpose model. Llama 3.1 405B demonstrates that open models have matched closed models by scoring 57% in the 0-shot code completion task, exceeding both GPT-4 and GPT-4 Turbo, while Llama 3.1 70B nearly matches GPT-4 despite being much smaller in size.
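For reference, the pass rate reported here is pass@1. With the n=20 high-temperature sample sets it is presumably estimated with the standard unbiased pass@k estimator used by the original VerilogEval and HumanEval-style evaluations; a sketch (our implementation, not code from the benchmark) is shown below.

from math import comb

def pass_at_k(n, c, k):
    """Unbiased pass@k estimate from n samples of which c passed."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

def mean_pass_at_1(correct_counts, n=20):
    """Average pass@1 over problems; correct_counts[i] is the number of the n
    samples for problem i that compile and pass the testbench. With n = 1 this
    reduces to the plain pass rate of the single low-temperature sample."""
    return sum(pass_at_k(n, c, 1) for c in correct_counts) / len(correct_counts)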
While Llama 3.1 generally improves with in-context learning examples, Llama 3 70B declines in pass rate when the 1-shot ICL example is added to the prompt, which will be discussed in detail in the next section. Among the smaller specialized models, RTL-Coder showed significant improvements with 1-shot ICL examples, reaching pass rates of around 35%, while being much smaller than general-purpose models. When originally sampled, RTL-Coder did not properly insert whitespace after statements and would often repeat code blocks. We modified our post-processing script that extracts the Verilog code from the response to match the post-processing in RTL-Coder's evaluation scripts <cit.>, and Figure <ref>'s RTL-Coder results are shown using their corrected extraction. Specification-to-RTL task results showed generally similar pass rate performance compared to code completion. GPT-4 Turbo showed noticeable pass rate improvement in spec-to-RTL 0-shot tasks, but similar pass rates for 1-shot. Mistral Large showed the opposite trend, with measurable improvement in 1-shot results, while Llama 3 and Llama 3.1 70B saw improvement in both, as did Llama 3.1 8B. For Llama 3.1 405B, adding an ICL example made little difference in pass rate across both tasks. Interestingly, RTL-Coder initially fails at the specification-to-RTL task with 0-shot but recovers with 1-shot examples. This variability underscores the importance of tailored prompt tuning and the potential of ICL to enhance code generation performance in certain models. The full results are shown in Table <ref> and include both the n=20 results (20 samples, temperature=0.85, top_p=0.95) from Figure <ref> and the deterministic n=1 results (1 sample, temperature=0.0, top_p=0.01). RTL-Coder results are shown for both the corrected and original extraction methods, with the original method also applied to the other models. As mentioned above, RTL-Coder at high temperatures (temperature=0.85, n=20) has a near-zero (1.6%) pass rate in 0-shot specification-to-RTL, but does have a respectable pass rate (37%) when temperature=0 (n=1). Inspection of the RTL-Coder responses at high temperature shows that it tries to do code completion instead of specification-to-RTL in 0-shot and often omits the module declaration. Adding an ICL example in 1-shot corrects this behavior. Overall, larger models generally achieve higher pass rates, though resource costs and model-specific responses to ICL examples vary significantly. Within the context of VerilogEval, GPT-4 Turbo and Llama 3.1 405B have become clear leaders for the highest achieved pass rates, demonstrating that open models (Llama 3.1 405B) have reached parity with closed models. Additionally, smaller (70B) open models have become competitive with last year's larger closed models. Domain-specific models (RTL-Coder) are also competitive in some scenarios at a much smaller size. § IMPACT OF ICL ON PASS RATES AND FAILURES §.§ Higher-Shot ICL Results As demonstrated in the previous section, in-context learning examples improve model generation accuracy in some conditions but degrade accuracy in others. ICL impact bears further investigation. Higher-shot ICL runs were conducted for four models across parameter size classes: GPT-4 Turbo, Llama 3 70B, Llama 3.1 70B, and RTL-Coder. The second ICL example was similar to the first example but requested a sequential (flopped) incrementer instead of a combinational incrementer. The third example involved designing a finite-state machine.
Pass rates across three models for the two tasks across 0-shot to 3-shots is shown in Figure <ref>. Notably, GPT-4 Turbo exhibits stable and high performance across all ICL example counts of at least 1-shot, maintaining a pass rate of 55% to 60%. In contrast, Llama 3 70B demonstrates divergent trends; its spec-to-RTL performance improves from 40% to nearly 50% with more ICL examples, whereas its code completion performance declines from 35% to just above 20%. Llama 3.1 70B is similar to Llama 3 for spec-to-RTL, and doesn't demonstrate degradation in code completion. RTL-Coder shows significant variability, with its spec-to-RTL performance improving dramatically from around 5% at 0-shot to almost 35% at 3-shot. As mentioned in the previous section, RTL-Coder drops to a very lower pass rate at high temperature with 0-shot spec-to-RTL because it omits the module declaration, but recovers to a nominal pass rate once ICL examples are added. This graph highlights the varying impact of ICL examples on different models and tasks, emphasizing the potential benefits of task-specific tuning and the necessity of providing contextual examples to enhance model outputs. §.§ Failure Analysis Figure <ref> employs the new failure classification feature of the improved benchmark infrastructure to illustrate the number and types of failures encountered by different models across various numbers of in-context learning (ICL) examples. The y-axis represents the number of failures, with lower values indicating better pass rates. Each bar is segmented to show different categories of errors, with orange shades representing compiler errors and blue shades representing runtime errors. The figure is divided into three sections for the three models from Figure <ref>, highlighting the numbers and types of failures across 0-shot to 3-shot ICL examples. As compiler errors will be flagged and mask runtime errors, the bars on the graph are best read from bottom to top. A reduction in runtime errors for the same total bar height indicates that compiler errors have displaced runtime errors. This layering effect should be considered when interpreting the improvements or degradations in model performance as additional ICL examples are introduced. For RTL-Coder, the model shows notable improvement in both tasks up to 2-shot ICL examples, after which the performance stabilizes. The primary source of failure in 0-shot examples are compile-time errors, but with the addition of ICL examples, these errors decrease significantly. As mentioned previously, in the specification-to-RTL task, the model attempts code completion when the temperature is high, leading to a high number of “module missing” errors, which are reduced with the introduction of ICL examples. Llama 3 exhibits a different pattern, where code completion performance degrades with the addition of ICL examples due to frequent errors. In contrast, for the specification-to-RTL task, adding ICL examples mitigates errors related to wires being declared as registers but introduces other compiler errors. GPT-4 Turbo shows a mixed response to ICL examples. For code completion, the model benefits from ICL examples, as indicated by a reduction in compiler errors across the board. However, in the specification-to-RTL task, the performance slightly degrades with more ICL examples, resulting in an increase in compiler errors. The results emphasize the need for careful tuning of ICL examples to optimize results. 
While ICL can help correct certain types of mistakes, it can also introduce new issues leading to similar or even worse performance. In addition to the failure classification feature capturing high-level counts of types of failures across different models and prompting settings, it also allows for detailed inspection on a problem-by-problem basis within a run. This granular analysis helps identify whether specific problems or categories of problems have systematic types of failures. Such insights can guide more careful tuning of prompts across the benchmark, leading to more effective and targeted improvements in model performance. A careful analysis of the problem categories within VerilogEval and comparative failure counts could help find the best ICL examples to use for a given model. § CONCLUSIONS The enhanced VerilogEval benchmark provides a more robust framework for evaluating the performance of large-language models (LLMs) on digital hardware code generation tasks. Our findings demonstrate that both Llama 3.1 405B and GPT-4 Turbo push the frontier of performance with a 60% and 48% 0-shot pass rate on code completion tasks, respectively, surpassing the previously established 43% pass rate by GPT-4 (non-Turbo) <cit.>. Open-source general-purpose models, namely Llama 3 70B, and domain-specific models, RTL-Coder, show favorable pass rates compared to last year's closed models, 37% and 32%, respectively. The addition of specification-to-RTL task support in the improved VerilogEval benchmark reveals even better model capabilities. GPT-4 Turbo achieves an impressive 59% pass rate in specification-to-RTL tasks, exceeding Llama 3.1 405B at 56%, while Llama 3 70B and RTL-Coder 6.7B also demonstrate strong competitiveness with pass rates of 42% and 37%, respectively. Adding in-context learning examples led to notable improvements (GPT-4 Turbo nearly achieving the same pass rate for code completion as spec-to-RTL), although the impact varies widely across different models and tasks. This variability underscores the importance of task-specific tuning to optimize performance. The improved benchmark infrastructure, including the new failure classification feature, provides deeper insights into the types of errors encountered by different models. For example, Llama 3 70B frequently encounters missing errors during code completion, which careful prompt tuning or model alignment may be able to fix. The ability to classify and inspect failures on a problem-by-problem basis is critical for understanding and mitigating poor code generation, leading to more effective and targeted improvements in LLM performance for digital hardware code generation. In the future, the research community would benefit from digital hardware benchmarks further expanded to include more tasks beyond RTL code generation representative of the digital hardware design flow. Some examples include verification-related tasks<cit.>, testbench stimulus generation<cit.>, along with many more<cit.>. The enhanced VerilogEval benchmark in this work is meant to be a step towards facilitating additional task support on top of a common set of design problems that allows for a more comprehensive assessment of model performance for hardware design. § ACKNOWLEDGMENT This paper would not have been possible without the generous help of NVIDIA Applied Deep Learning Research (ADLR), especially Teodor-Dumitru Ene, and the NVIDIA Inference Microservices (NIM) teams. IEEEtran
http://arxiv.org/abs/2408.11924v1
20240821181537
On reduced basis methods for eigenvalue problems, with an application to eigenvector continuation
[ "Louis Garrigue", "Benjamin Stamm" ]
math-ph
[ "math-ph", "math.MP", "math.SP" ]
On reduced basis methods for eigenvalue problems, with an application to eigenvector continuation Louis Garrigue and Benjamin Stamm August 26, 2024 § ABSTRACT We provide inequalities enabling one to bound the error between the exact solution and an approximated solution of an eigenvalue problem, obtained by subspace projection, as in the reduced basis method. We treat self-adjoint operators and degenerate cases. We apply the bounds to the eigenvector continuation method, which consists in creating the reduced space by using basis vectors extracted from perturbation theory. § INTRODUCTION A classical issue in eigenvalue problems is to reduce the number of degrees of freedom of the studied systems by extracting only the relevant ones, the full considered Hilbert space being too large to be addressed in its exact form. Reduced basis method approximations aim at approximating the full space by a well-chosen low-dimensional subspace, created via an orthogonal projector. Our interest here will be eigenvalue problems. Denoting the exact self-adjoint operator by H, the approximated operator is the projected operator restricted to the reduced space, and we want to study its eigenmodes. Among other works, reduced basis problems have been investigated in <cit.>, and the case of several eigenvalues was examined in <cit.>. In Theorem <ref> and Propositions <ref> and <ref>, we provide bounds enabling to estimate the error between the eigenmodes of the exact operator H and the ones of the approximated operator. We treat the degenerate and almost-degenerate case by using the formalism of density matrices, and we treat the non-degenerate cases with a vector formalism. We sought to derive general bounds which could be applied to diverse settings. We then apply our bounds to a reduced basis method which uses the derivatives of the eigenvectors to build the reduced space. Such a method was introduced in the context of computational engineering science in <cit.>, and was named eigenvector continuation in <cit.>. Recently, many works have shown the very interesting performance of this method applied to quantum physics, see for instance <cit.>, providing perspectives to improve several areas of quantum physics. This method gives a systematic way of forming effective systems. The situation is illustrated in Figure <ref>, on which we represent the spectra of H(λ) and of the reduced operator, where H(λ) is the exact self-adjoint operator, depending on one parameter λ ∈ ℝ. Denoting one eigenvector of H(λ) by ϕ(λ), if the derivatives d^n ϕ(λ)/dλ^n at λ = 0 belong to the reduced space for all n ∈ {0,…,ℓ}, it was practically remarked that the corresponding eigenmode of the reduced operator is very close to the exact one, much closer than the perturbation approximation.
To explain this phenomenon, quantitative bounds are provided in Corollaries <ref> and <ref> and in Theorem <ref>. § DEFINITIONS We choose a standard but general mathematical setting which can address common operators involved in quantum mechanics, including Dirac operators, many-body Schrödinger operators and Bloch transforms of periodic operators. §.§ First definitions Let be a separable Hilbert space, endowed with a scalar product ·,· and a corresponding norm ·. We will denote by B := ∈\{0}B the canonical operator norm. Let us consider a self-adjoint operator H of , we want to approximate some of its eigenmodes by using a reduced basis method. Let us take a self-adjoint operator A of , possibly unbounded, which will implement the energy norm, and we consider that it has a dense domain and a dense form domain. On vectors ∈, the energy norm is e := A , it is the natural norm for eigenvectors. For instance when = L^2(^3), in the case of a Schrödinger operator H = -Δ + v, it is natural to choose A = √(-Δ) and ·e is equivalent to the Sobolev norm H^1(^3). We define ·e,0 := · and ·e,1 := ·e, so for any ∈ and δ∈{0,1}, e,δ = A^δ. We will always assume that c_A < +∞ and c_H < +∞ where c_A := A^-1, c_H := A^-1 H A^-1. §.§ Density matrices For any ∈, we denote by P_ the orthogonal projector onto . For any orthogonal projection P, we will use the notation P^⊥ := 1 - P. The analogous objects as eigenvectors, but for degenerate systems, are density matrices of a set of eigenvectors. For any := (_μ)_μ=1^ν∈^ν, we define the corresponding density matrix := ∑_α=1^ν_α= ∑_α=1^ν P__α, being an orthogonal projection on , that is ^2 = ^* =. We denote by _ν := {U ∈^ν×ν | U^* U = 1} the group of unitary matrices of dimension ν and for any U ∈_ν we define its action U := ((U )_α)_α=1^ν on ^ν where (U )_α := ∑_β=1^ν U_αβ_β. We have U = uniformly in U ∈_ν and ∈^ν. For any operators B, D on , the Hilbert-Schmidt scalar product is denoted by B,D := B^* D its norm B2 := B^* B, and the corresponding normed space is the space of Hilbert-Schmidt operators, denoted by := { B : →, B2 < +∞}. For δ∈{0,1} and any B ∈, we use the notation B2,δ := A^δ B 2. The norm ·2,1, called the energy norm, is the natural one on the set of density matrices, as ·e is the natural norm on vectors. §.§ Consider a reduced space Let us take an orthogonal projection on , we assume that is neither the identity nor the null projection to avoid the trivial cases, and we set ^⊥ := 1 -. The reduced space is , it can be infinite-dimensional, and we will need to assume that c_ < +∞ where c_ := A A^-1. Our central object will be → : →, which is the restriction of H to , hence it is an operator of , while H is an operator of . If d := is finite, we can see this operator as a d × d matrix. We take → as an approximation of H, in the sense that its eigenmodes will well approximate the ones of H. Remark that since ≠ 1, σ H = σ→∪{0} because ^⊥⊂ H. Moreover, in our approach we avoid to use a variational point of view, so that we can reach eigenvalues having continuous spectrum below for instance. §.§ Choose sets of eigenmodes Take ν∈, we choose a set of eigenvalues {E_μ}_1 ≤μ≤ν∈σ(H) in the spectrum of H, they are counted with multiplicity and their normalized eigenvectors are denoted by ϕ_μ and grouped into := ϕ_μ_μ=1^ν. We define the associated density matrix Γ := ∑_μ=1^νϕ_μ = . The purpose of taking ν≥ 2 is to be able to treat the almost-degenerate and degenerate cases, i.e. when eigenvalues are close or even equal. 
If the eigenvalues are not close, one can take the non-degenerate case ν = 1 since no singular quantity will appear. Note that the eigenvalues E_μ are not necessarily sorted in increasing order. For any operator B, we denote by σ_(B) the discrete spectrum of B. Then we assume that H → has at least ν eigenvalues in its discrete spectrum, we take ν of them, we denote them by {_μ}_1 ≤μ≤ν⊂σ_ H →, we denote by ψ_μ the corresponding normalized eigenvectors, grouped into := ψ_μ_μ=1^ν. We define the associated density matrix Λ := ∑_μ=1^νψ_μ = . We will study the closeness between ϕ_μ and ψ_μ for any μ∈{1,…,ν}, So to each level μ of H corresponds to a level μ of H →. But they are not sorted in increasing order, so for instance if we follow a variational approach, the label μ can denote the 3^rd level of H and the 5^th level of H →, and ϕ_μ - ψ_μ can be small. For example Figure <ref> illustrates this principle. §.§ Definition of partial inverses For any self-adjoint operator B, if {e_μ}_μ=1^α⊂σ_(B), then there exists κ_B > 0 such that σ(B) \{e_μ}_μ=1^α∩∪_μ=1^α ]e_μ - κ_B , e_μ + κ_B[ = ∅. In addition to (<ref>) we will also assume that ∩⊕_μ = 1^ν H - _μ = ν, to ensure that all the eigenvectors associated to {_μ}_μ=1^ν are taken into account. For any z ∈{_μ}_μ=1^ν∪\σ H → we define z - H _⊥^-1 := {[ z - H _ 1mu height 2ex2mu Λ^⊥→Λ^⊥^-1 Λ^⊥,; 0 Λ⊕^⊥, ]. extended by linearity on . We also define R_μ := _μ - H _⊥^-1. For any μ∈{1,…,ν}, by (<ref>) and (<ref>) there exists κ_ H > 0 such that R_μ≤κ_ H ^-1. § MAIN RESULT In this section we present our main result, which is a comparision between exact and approximated eigenmodes. It is a basic estimate that does not yet consider the parametrized setting, which is left for Section <ref>. We take the same notations as in Section <ref>. §.§ Clusters of eigenmodes Take a Hilbert space , and a self-adjoint operator A which is built to form a norm. Take a self-adjoint operator H which eigenmodes will be approximated. Consider an orthogonal projector , assume that H and H have at least ν eigenvalues (counted with multiplicity). We consider ν eigenmodes of respectively H and H, denoted by respectively (E_μ,ϕ_μ) and (_μ,ψ_μ), where μ∈{1,…,ν}, ϕ_μ = ψ_μ = 1 and we assume (<ref>) and (<ref>). We define := (ϕ_μ)_μ=1^ν, := (ψ_μ)_μ=1^ν, Γ :=, Λ :=. We assume that c_, c_A < +∞, where those quantities are defined in Section <ref>, and that all the quantities involved in the following are finite. For δ∈{0,1}, we have Γ - Λ = ∑_μ=1^ν(1 + H R_μ )^⊥Γ P_ψ_μ+ + Ω, where Ω2,δ≤ c_A^δ^⊥Γ2,δ^2 +1 + c_A c_^2^δ1 + c_A (1+c_A)A Λ^2δΓ - Λ2,δ^2 + 2 c_^δ + ν c_A^2δ^⊥ H Λ1 ≤μ≤νA^δ R_μc_A 1 + A Λ^δ^⊥Γ2,δΓ - Λ2,δ. A proof is given in Section <ref>. The term “s.a” denotes the self adjoint operator. The next result provides another bound for Γ - Λ, using another method. For any z ∈{E_μ}_μ=1^ν∪\σ( H ) we define z - H_⊥^-1 := {[ z - H _ 1mu height 2ex2mu Γ^⊥→Γ^⊥^-1 Γ^⊥,; 0 Γ ]. extended by linearity on . Let us make the same assumption as in Theorem <ref>, and moreover assume that {_μ}_μ=1^ν , σ_H_ 1mu height 2ex2mu Γ^⊥→Γ^⊥ > 0, {E_μ}_μ=1^ν , σ_ H _ 1mu height 2ex2mu Λ^⊥→Λ^⊥ > 0. Then Γ - Λ2,δ≤ c_A^δ^⊥Γ2,δ^2 + ν c_^δ1 ≤μ≤ν A^δ_μ - H_⊥^-1^⊥ H Λ + c_A c_A Γ^δ2 + ν1 ≤μ≤νE_μ - H _⊥^-1 H ^⊥^⊥Γ2,δ. The proof of this result is provided in Section <ref>. §.§ One eigenmode In the case where we treat only one eigenmode, one can obtain more precision about the errors, this is the object of the following result. 
We drop the subscripts 1 labeling the different eigenvectors, because we consider only one of them and write ϕ := ϕ_1, ψ := ψ_1, E := E_1, := _1, and R := R_1. Make the same assumptions as in Theorem <ref>, take ν = 1 and remove the subscripts 1. Thus (E,ϕ) is an eigenmode of H and (,ψ) is an eigenmode of H →. In a gauge where ψ,ϕ∈, ϕ - ψ = 1 + R H^⊥ϕ - 12 ϕ - ψ^2 ψ + - E R ϕ - ψ, E - = ^⊥ϕ, - H1 + R H^⊥ϕ + (E-) ϕ - ψ^2 - ϕ - ψ^2 ^⊥ϕ, (H-E) (ϕ - ψ)+E-^2 ϕ - ψ, R ϕ - ψ. The proof is presented in Section <ref>. §.§ Remarks Let us now proceed with some remarks. Since ^⊥Λ = 0, we have ^⊥Γ2,δ = A^δ^⊥ A^-δ A^δΓ - Λ2,δ≤ (1 + c_)^δΓ - Λ2,δ, hence we see in (<ref>) that Ω is quadratic in Γ - Λ2,δ, and thus negligible in (<ref>) when Γ - Λ2,δ is small. The leading term for Γ - Λ is thus ∑_μ=1^ν(1 + H R_μ )^⊥Γ P_ψ_μ+, as emphazised in (<ref>). All the quantities involved in (<ref>) and (<ref>) and (<ref>) are invariant under the transformations → U and → V, where U,V ∈_ν. From Theorem <ref>, for Γ - Λ2,δ small enough, and δ∈{0,1}, Γ - Λ2,δ≤ 4 νc_AA Λ^δ^⊥Γ2,δ1 ≤μ≤νA^δ (1+ R_μ H) ^⊥ A^-δ, see Section <ref> to have more details on how to obtain this inequality. Moreover, by Lemma <ref>, there exists a rotation U ∈_ν such that ∑_μ=1^νA_μ - U _μ≤ c A ^⊥Γ2 for some constant c, and again by Lemma <ref>, and yet for another constant c, the error in the sums of eigenvalues is quadratic, that is ∑_μ=1^νE_μ - _μ≤ c A ^⊥Γ2^2. Hence those errors are controled by the key quantity A ^⊥Γ. If ϕ - ψe is small enough, then Proposition <ref> yields ϕ - ψe ≤ 2 1 + c_H ARAA ^⊥ϕ, E - ≤ 4c_H + c_A^2 E1 + c_H ARA^2 A ^⊥ϕ^2, see Section <ref> to have more details on the derivation of those inequalities. Thus those errors are controlled by the key quantity A ^⊥ϕ. The term ^⊥ϕ = ^⊥ϕ - ψ is controled by ϕ - ψe in norm. When ϕ - ψe is small, in (<ref>) the leading term is 1 + R H^⊥ϕ, then the second term is of order 2 and - E R ϕ - ψ is of order 3. In (<ref>), the leading term is ^⊥ϕ, - H1 + R H^⊥ϕ (which is of order 2), the second term is of order 4, the third term is of order 4 and the last one of order 6. Consider the vector case corresponding to Proposition <ref>. We numerically see that making larger by adding more vectors to decreases the error, in general. This can be expected from the form of the leading term 1 + R H^⊥ϕ, in which, for any vector ∈, ^⊥ decreases. However, as will be seen in Section <ref>, there are some exceptional cases where making larger increases the error. The quantity ^⊥ H Λ can be interpreted as an one. When Γ is close to Λ it is small because, since [H,Γ]=0 and ^⊥Λ = 0, then ^⊥ H Λ = ^⊥ [H, Λ - Γ]. The bound (<ref>) involves Γ - Λ2,δ^2 while (<ref>) does not. If one rather needs quantification,  (<ref>) might be better. § APPLICATION TO EIGENVECTOR CONTINUATION We now put the results of the previous section in the context of the eigenvector continuation. We refer to Figure <ref> to illustrate our reasoning. §.§ Definitions and assumptions We start by introducing some definitions and making some assumptions, which will enable to apply Rellich's theorem and Theorems <ref> and <ref>. §.§.§ Analytic family of operators We present here assumptions which will be sufficient to use the Rellich theorem on the existence of analytic eigenmodes. Let us take a self-ajoint operator H^0 such that σ(H^0) ≠, so there exists r ∈ and > 0 such that σ(H^0) ∩ ]r-,r+[ = ∅. Let us take a self-adjoint energy norm operator A, for instance one can take A = H^0 - r^ 12. 
We will choose a simple case for the family of operators, that is we consider M ∈, a series of self-adjoint operators H^n for n ∈{0,…,M} such that D(H^0) ⊂ D(H^n) where D(·) denotes the domain of an operator, and such that n ∈H^n H^0 - r^-1 < +∞, n ∈A^-1 H^n A^-1 < +∞. For instance, one can take H^n as bounded operators for any n ∈{1,…,M}. We also define H^n := 0 for any n ≥ M +1. Finally, we define H(λ) := ∑_n=0^+∞λ^n H^n. §.§.§ Choose a set of eigenvalues of H(λ) Let us assume that H^0 has at least ν eigenvalues {E_μ^0}_μ =1^ν⊂σ_d(H^0) in the discrete spectrum, counted with multiplicity but not necessarily sorted in increasing order. By (<ref>), there exists κ_H^0 > 0 such that σ(H^0) \{ E_μ^0 }_μ=1^ν∩∪_μ=1^ν ]E_μ^0 - κ_H^0 , E_μ^0 + κ_H^0[ = ∅, and assume that ⊕_μ = 1^νH^0 - E_μ^0 = ν. Rellich's theorem states that the eigenmodes of H(λ) are also analytic in λ, see <cit.>, <cit.>, <cit.> and <cit.> for instance. The extension to infinite-dimensional space also holds under some technical assumptions, see <cit.>,  <cit.>,  <cit.> and <cit.>. We denote by E_μ(λ),ϕ_μ(λ) the eigenmodes, analytic in λ, respecting E_μ(λ) = E^0_μ and ϕ_μ(λ) , ϕ_α(λ) = δ_μα for any μ,α∈{1,…,ν}. The phasis of the vectors is not fixed by those conditions, meaning that taking smooth maps θ_μ : →, the eigenvectors e^iθ_μ(λ)ϕ_μ(λ) also respect the previous conditions. For any λ∈ ]-λ_0,λ_0[, we define Γ(λ) := (λ) and the partial inverse K_μ(0) := {[ E_μ(0) - H(0) _ 1mu height 2ex2mu Γ(0)^⊥→Γ(0)^⊥^-1 (Γ(0))^⊥,; 0 Γ(0) , ]. extended by linearity on . By (<ref>) we have K_μ(0)≤κ_H^0^-1, and we assume that 1 ≤μ≤νA K_μ(0) A < +∞. §.§.§ Starting point for H(λ) → The starting point of the analysis of the reduced operator will be λ = 0, on which the eigenmodes under study of the exact and reduced operators are equal. So the first step consists in exploiting this fact. Let us consider an orthogonal projection , where can be infinite-dimensional. The hypothesis of eigenvector continuation, which we will see later, imply that the exact eigenvector ϕ_μ(0) belongs to , hence H(0) ϕ_μ(0) = E_μ(0) ϕ_μ(0), so ϕ_μ(0) is also an eigenvector of ( H(0) )_→ with eigenvalue E_μ(0). We need to assume that {E_μ(0)}_μ =1^ν⊂σ_d H(0) → and ∩⊕_μ=1^ν ( H(0) - E_μ(0)) = ν. Those last assumptions mean that the reduction from to does not produce spectral pollution close to the E_μ(0)'s for H(0). §.§.§ Analytic branches for H(λ) → To be able to apply Rellich's theorem for H(λ) →, we make several assumptions. Let us assume that σ H(λ) →≠, so there exists r_∈ and _ > 0 such that σ H(λ) → ∩ ]r_-_,r_+_[ = ∅, assume that n ∈ H^n H^0 - r__ 1mu height 2ex2mu →^-1 < +∞. and that for any n∈, D( H^0 ) ⊂ D( H^n ). Rellich's theorem ensures the existence of ν eigenmodes _μ(λ), ψ_μ(λ)_μ =1^ν of H(λ) →, analytic in λ∈ ]-λ_0,λ_0[ where λ_0 >0, such that _μ(0) = E_μ(0), ψ_μ(0) = ϕ_μ(0) and ψ_μ(λ) , ψ_α(λ) = δ_μα for any μ,α∈{1,…,ν}. We take λ_0 small enough so that for some κ_H > 0 (which does not depend on λ) and any λ∈ ]-λ_0 , λ_0[, σ H(λ) →\{_μ(λ)}_μ=1^ν ∩∪_μ=1^ν ]_μ(λ) - κ_H , _μ(λ) + κ_H [ = ∅, meaning that the rest of the spectrum remains far from {_μ(λ)}_μ=1^ν, uniformly in λ. Together with (<ref>), this implies that for any λ∈ ]-λ_0 , λ_0[, ∩⊕_μ=1^ν ( H(λ) - E_μ(λ)) = ν. For any λ∈ ]-λ_0 , λ_0[ we can hence define R_μ(λ) := {[ _μ(λ) - H(λ) _ 1mu height 2ex2mu Λ(λ)^⊥→Λ(λ)^⊥^-1 Λ(λ)^⊥,; 0 Λ(λ) ⊕^⊥(λ) , ]. extended by linearity on . From (<ref>) we have R_μ(λ)≤κ^-1_H(λ). §.§ Statement of the results For any λ∈ ]-λ_0,λ_0[, we recall that Γ(λ) := (λ). 
For any n ∈ and any μ∈{1,…,ν}, ϕ_μ^n := ^̣n/λ̣^nϕ_μ(λ)_ 1mu height 2ex2mu λ = 0, Γ^n := ^̣n/λ̣^nΓ(λ)_ 1mu height 2ex2mu λ = 0. See Proposition <ref> to see how to obtain the Γ^n's. Let us also define ξ_,ℓ,δ^ := ∑_μ=1^ν1 + R_μ(0)H(0)^⊥Γ^ℓ +1P_ϕ_μ(0) + s.a2,δ. The main theorem of this section is about the closeness of the density matrix Γ(λ) associated to the exact operator H(λ) with the one of the approximate operator H(λ) →, when contains the first ℓ + 1 derivatives of Γ(λ). As in Section <ref>, consider a Hamiltonian family H(λ) := ∑_n=0^Mλ^n H^n and consider ν analytic families of eigenmodes E_μ(λ), ϕ_μ(λ)_μ=1^ν with ϕ_μ(λ),ϕ_α(λ)=δ_μα. Make the presented assumptions (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>) and (<ref>). Consider an orthogonal projector , satisfying (<ref>),  (<ref>). Then, there are ν eigenmodes _μ(λ), ψ_μ(λ)_μ=1^ν of H(λ) →, analytic in λ such that _μ(0) = E_μ(0), ψ_μ(0) = ϕ_μ(0) and ψ_μ(λ), ψ_α(λ)= δ_μα for any μ,α∈{1,…,ν}. We define (λ) := ϕ(λ)_μ=1^ν, (λ) := ψ(λ)_μ=1^ν, Γ(λ) := (λ) and Λ(λ) := (λ). Given ℓ∈, if ∀ n ∈{0,…,ℓ}, Γ^n ⊂, then there exists λ_0 > 0 such that for any λ∈ ]-λ_0,λ_0[ and δ∈{0,1}, Γ(λ) - Λ(λ)2,δ - λ^ℓ +1ξ^_,ℓ,δ ≤ c λ b^ℓ +2, where b and c are independent of λ and ℓ. We give a proof in Section <ref>. Proposition <ref> recalls the results of<cit.> showing how to obtain Γ^n. The next result provides a practical way of building the reduced space used in (<ref>), via an explicit and simple basis. Consider the context of Corollary <ref>. Take (_μ)_μ=1^ν∈^ν to be a basis of the unperturbed space ⊕_μ=1^νH(0) - E_μ(0). Then ⊕_n=0^ℓΓ^n = Γ^n _μ | 0≤ n ≤ℓ, 1 ≤μ≤ν. A proof is provided in Section <ref>. We now discuss the vector case and as in Proposition <ref> we drop the subscripts 1, so R(λ) := R_1(λ), ϕ(λ) := ϕ_1(λ), ψ(λ) := ψ_1(λ), E(λ) := E_1(λ), (λ) := _1(λ), ϕ^n := ϕ^n_1. We define ξ_,ℓ,δ^,V := 1 + R(0) H(0)^⊥ϕ^ℓ +1e,δ ξ_,ℓ^,E := ^⊥ϕ^ℓ +1, (H(0) - E(0)) 1 + R(0) H(0)^⊥ϕ^ℓ +1. We now state the corresponding result but in the non-degenerate case and for vectors. We make the same assumptions as in Corollary <ref>, we take ν =1 and remove the subscripts 1, and we take ℓ∈. We choose the phasis of ϕ(λ) and ψ(λ) such that ϕ^0 , ϕ(λ)∈_+ and ϕ(λ) , ψ(λ)∈. If ∀ n ∈{0,…,ℓ}, ^̣nλ̣^nϕ(λ)_ 1mu height 2ex2mu λ = 0∈, then there exists λ_0 > 0 such that for any λ∈ ]-λ_0,λ_0[ and δ∈{0,1}, ϕ(λ) - ψ(λ)e,δ - λ^ℓ +1ξ^_,ℓ,δ ≤ c λ b^ℓ +2, E(λ) - (λ) - λ^2(ℓ +1)ξ^,E_,ℓ ≤ c λ b^2ℓ +3, where b and c are independent of λ and ℓ. We provide a proof in Section <ref>. §.§ Remarks Now, several remarks seem in order. Inequality (<ref>) could be written as ϕ(λ) - ψ(λ)e,δ = λ^ℓ +1ξ^_,ℓ,δ + Oλ b^ℓ +2, where O(·) would be a function bounded in λ and ℓ. A consequence of (<ref>) and (<ref>) is that for all n ∈{0,…,ℓ}, k ∈{0,…,2ℓ +1}, ^̣n λ̣^nψ(λ)_ 1mu height 2ex2mu λ = 0 = ^̣n λ̣^nϕ(λ)_ 1mu height 2ex2mu λ = 0, ^̣k λ̣^k(λ)_ 1mu height 2ex2mu λ = 0 = ^̣k λ̣^k E(λ)_ 1mu height 2ex2mu λ = 0, meaning that the first perturbation terms of eigenvector continuation are the same as the ones of the exact problem. Intermediate normalization is reviewed in Appendix <ref>. Instead of building the reduced space from the ϕ^n's, one can form it by using the eigenvectors in intermediate normalization, denoted by Φ^n. Using this last normalization is more convenient because it involves less computations. From (<ref>) we have Φ^k, 0 ≤ k ≤ℓ = ϕ^k, 0 ≤ k ≤ℓ. 
Hence one can form the reduced space of eigenvector continuation by using either intermediate or unit normalization perturbation vectors, this is equivalent. We provide a comparision of eigenvector continuation with perturbation theory in Section <ref>. Corollary <ref> is stated for a one-dimensional parameter space, parametrized by λ, but one can straightforwardly extend it to general parameter spaces. §.§ Vectors in the degenerate case The bounds of Corollary <ref> do not enable to obtain bounds on individual eigenvectors and individual eigenvalues in the degenerate case. Nevertheless, following a different strategy of proof can lead to such bounds and this is the purpose of this section. §.§.§ Assumptions on derivatives Let us define := Γ(0) H^1 Γ(0), we denote by its restriction as an operator of Γ(0). Let us make the hypothesis on the E_μ(λ)'s, but we could make them on the _μ(λ)'s, this is equivalent since Γ(0) H^1 Γ(0) = Λ(0) H^1 Λ(0). We assume that ∀α, β∈{1,…,ν}, E_α(0) = E_β(0), i.e. the system is exactly degenerate. For any α∈{1,…,ν}, we define E_α'(0) := λ̣ E_α(λ)_ 1mu height 2ex2mu λ = 0, and it is well-known that from first-order perturbation theory (see <cit.> for instance) the E_α'(0)'s are the eigenvalues of . We take μ∈{1,…,ν} and we make the assumption that the eigenvalue E_μ'(0) is non-degenerate for , implying the the other branches have a different derivative at zero. Thus there exists κ_ > 0 such that σ\{E_μ'(0)}∩ ]E_μ'(0) -κ_,E_μ'(0) + κ_[ = ∅, and we can define _μ(0) := {[ E_μ'(0) - _ 1mu height 2ex2mu P_ϕ_μ(0)^⊥Γ(0) ^-1 P_ϕ_μ(0)^⊥Γ(0) ,; 0 Γ(0)^⊥⊕ϕ_μ(0), ]. extended by linearity on all of . More explicitely, we have _μ (0)= ∑_1 ≤α≤ν α≠μE_μ'(0) - E_α'(0)^-1 P_ϕ_α(0). We then define, for δ∈{0,1}, ξ^_,μ,ℓ,δ := 1 + _μ(0) H^1 1 + R_μ(0) H^0 ^⊥ϕ_μ^ℓ +1(0)e,δ. §.§.§ Statement of the result We are now ready to state our last result on eigenvector continuation. We make the same assumptions as in Corollary <ref> except (<ref>), so we consider a cluster of ν eigenmodes E_μ(λ) , ϕ_μ(λ)_μ=1^ν. Moreover, let us assume (<ref>), take some μ∈{1,…,ν} and assume (<ref>). We choose the phasis of ϕ_μ(λ) and ψ_μ(λ) such that ϕ_μ^0 , ϕ_μ(λ)∈_+ and ϕ_μ^0 , ψ_μ(λ)∈_+. Take ℓ∈ and δ∈{0,1}. If ∀ n ∈{0,…,ℓ}, ^̣nλ̣^nϕ_μ(λ)_ 1mu height 2ex2mu λ = 0 ∈, ∀α∈{1,…,ν}, ϕ_α(0) ∈, then there exists λ_0 > 0 such that for any λ∈ ]-λ_0,λ_0[, ϕ_μ(λ) - ψ_μ(λ)e,δ - λ^ℓ +1ξ^_,μ,ℓ,δ ≤ c λ b^ℓ +2, where b and c are independent of λ and ℓ. As in (<ref>), Corollary <ref> only provides a convergence of the density matrices and of the sum of eigenvalues in a cluster, not a convergence of the individual eigenvectors and eigenvalues. Hence Theorem <ref> provides more information. An error in individual eigenvalues can be deduced from an error in individual eigenvectors by Lemma <ref>. The proof of Theorem <ref>, provided in Section <ref>, is very different from the ones of the previous results, and uses a purely perturbative approach. § COMPARISION BETWEEN EIGENVECTOR CONTINUATION AND PERTURBATION THEORY In this section, we present a numerical experiment investigating eigenvector continuation in the perturbative regime. We consider non-degenerate levels, and the vector case, as treated in Corollary <ref>. §.§ Operators H^n We will work with periodic one-dimensional Schrödinger operators. Take = L_per^2() to be the space of L^2 functions with period L >0, take V_j : → for j ∈{1,2,3} three smooth functions, H^0 = -Δ + V_1, H^1 = V_2, H^2 = V_3 and H^n = 0 for any n ≥ 3. 
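The following sketch shows one possible discretization of this setting, with a periodic second-order finite-difference Laplacian on a uniform grid. The potentials, the period and the grid size below are generic stand-ins; the potentials actually used in the experiments are the ones displayed in the figure.

```python
import numpy as np

# Discretization of periodic 1D Schroedinger operators H0 = -Lap + V1, H1 = V2, H2 = V3
# on a uniform grid; potentials, period and grid size are illustrative assumptions.
L, N = 2 * np.pi, 200
x = np.arange(N) * L / N
dx = L / N

# Periodic second-order finite-difference Laplacian.
lap = np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
lap[0, -1] = lap[-1, 0] = 1.0
lap /= dx ** 2

V1, V2, V3 = np.cos(x), np.sin(2 * x), 0.5 * np.cos(3 * x)   # assumed smooth periodic potentials
H0 = -lap + np.diag(V1)
H1, H2 = np.diag(V2), np.diag(V3)
H = lambda lam: H0 + lam * H1 + lam ** 2 * H2

# Ground state of H(lam) for a given coupling, e.g. lam = 0.5.
E, U = np.linalg.eigh(H(0.5))
phi = U[:, 0] / np.sqrt(dx)               # grid normalization so that sum |phi|^2 dx = 1
print("ground-state energy at lam = 0.5:", E[0])
```

In the experiments below, the reduced space of eigenvector continuation is then spanned by the λ-derivatives of this ground state at λ = 0, which can be estimated, for instance, by finite differences in λ.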
We represent the V_j's on Figure <ref> together with their ground states denoted by u_j. §.§ Eigenvector continuation versus perturbation theory We define the approximation of ϕ_μ(λ) given by perturbation theory and the corresponding eigenvalue approximation _μ(λ) := ∑_n=0^ℓλ^n ϕ^n_μ∑_n=0^ℓλ^n ϕ^n_μ, e_μ(λ) := _μ(λ), H(λ) _μ(λ). It is well-known that those quantities coming from perturbation theory respect the following bounds. Let us make the definitions and assumptions of Sections <ref> and <ref>. By defining ξ^_,ℓ,δ := ϕ^ℓ+1_μe,δ and ξ^,E_,ℓ := E^2(ℓ+1)_μ, for δ∈{0,1} we have ϕ_μ(λ) - _μ(λ)e,δ - λ^ℓ +1ξ^_,ℓ,δ ≤ c λ b^ℓ +2, E_μ(λ) - e_μ(λ) - λ^2(ℓ +1)ξ^,E_,ℓ ≤ c λ b^2ℓ +3, where b and c are independent of λ and ℓ. A proof is provided in Section <ref>. The errors given by eigenvector continuation and perturbation theory have the same order in λ but have different constants. The relevent quantity enabling to compare eigenvector continuation and perturbation theory in the asymptotic regime is ξ_0 := 1 and for ℓ≥ 1, ξ_ℓ := λ→ 0ϕ_μ(λ) - _μ(λ)ϕ_μ(λ) - ψ_μ(λ) = ξ^_,ℓ,0ξ_,ℓ,0^ = ϕ^ℓ+1_μ1 + R_μ(0) H(0)^⊥ϕ^ℓ+1_μ, but one could also use 1 + R_μ(0) H(0)^⊥ϕ^ℓ+1_μe^-1ϕ^ℓ+1_μe, which is very close. This quantifies the acceleration that eigenvector continuation provides with respect to perturbation theory. The larger ξ_ℓ is, the most efficient is. We numerically found situations such that ξ_ℓ < 1 so eigenvector continuation is not necessarily better than perturbation theory, but in general we observe ξ_ℓ > 1. In our simulations, we will display the errors made by eigenvector continuation (plain lines) and the ones made by perturbation theory (dashed lines), at the level of eigenvectors and eigenvalues. It is as if the perturbative regime was attained sooner than with perturbation theory §.§ Varying ℓ In this section, we aim at making ℓ vary. We choose ^ℓ to be the orthogonal projection onto ^̣nλ̣^nϕ(λ)_ 1mu height 2ex2mu λ = 0, 0 ≤ n ≤ℓ where ϕ(λ) is the eigenvector corresponding to the lowest eigenvalue E(λ) of H(λ), and we denote by ψ^ℓ(λ) the eigenvector of lowest eigenvalue ^ℓ(λ) of ^ℓ H(λ) ^ℓ. We define the perturbative approximations ^ℓ(λ) := ∑_n=0^ℓλ^n ϕ^n∑_n=0^ℓλ^n ϕ^n and e^ℓ(λ) := ^ℓ(λ), H(λ) ^ℓ(λ). On Figure <ref>, we plot the errors against λ and near λ = 0. The asymptotic slopes correspond to (<ref>) and (<ref>). We see that the perturbation regime (the value of λ for which the asymptotic slopes of λ→ 0 are followed) for perturbation theory is precisely attained around λ≃ 1.5 for all values of ℓ. On the contrary, in the case of eigenvector continuation, it is not clear where the asymptotic regime starts. On Table <ref> and Figure <ref>, we display the acceleration constant ξ_ℓ with respect to ℓ. We also define ξ_ℓ^simple := ^⊥ϕ^ℓ+1_μ^-1ϕ^ℓ+1_μ to show in Figure <ref> that this simpler quantity is close to ξ_ℓ. We see on Figure <ref> that the asymptotic behaviors when ℓ→ +∞ are ξ^_,ℓ≃ c_pert s_pert^ℓ and ξ^_,ℓ≃ c_ec s_ec^ℓ with s_ec < s_pert. Hence we can conjecture that ϕ(λ) - ^ℓ(λ)≃ r_pertλ q_pert^ℓ, ϕ(λ) - ψ^ℓ(λ)≃ r_ecλ q_ec^ℓ with q_ec < q_pert, as if eigenvector continuation had the same error behavior as perturbation theory but where the perturbative regime is attained sooner than for perturbative theory. §.§ Competition between ℓ and ν We now investigate whether it is better to build a reduced space using the excited states at λ or using the perturbation vectors. 
We denote by ϕ_(μ)(λ) the μ^th eigenvector of H(λ), counted with multiplicity, so in particular ϕ_(0)(λ) is the eigenvector of lowest eigenvalue. Then we define the two reduced spaces _ν := ϕ_(μ)(λ) , 0 ≤μ≤ν, ^ℓ := ^̣nλ̣^nϕ_(0)(λ)_ 1mu height 2ex2mu λ = 0, 0 ≤ n ≤ℓ, ENLEVER CA which are dedicated to provide good models for ϕ(λ) := ϕ_(0)(λ). The RBM methods with _j and ^j are comparable because _j = ^j = j +1. We denote by _ν(λ) , ψ_ν(λ) the eigenmode of lowest eigenvalue of _ν H(λ) _ν and by ^ℓ(λ) , ψ^ℓ(λ) the eigenmode of lowest eigenvalue of ^ℓ H(λ) ^ℓ. The errors in eigenvectors are given in Figure <ref>, where we see that eigenvector continuation provides better approximation that building reduced spaces using excited states. § PROOF OF THEOREM <REF> We recall that for any self-adjoint operators B, C of , BC2≤BC2. Moreover, for any u,v ∈, we have | u > < v | 2 = | u > < v | = uv. §.§ Decomposition of the error We decompose the error Γ - Λ into several terms, which will be possible to handle individually. First, we have ϕ_α∈, so Λ = ∑_μ=1^ν| ϕ_α> < ϕ_α| = ∑_μ=1^ν| ϕ_α> < ϕ_α| = Λ, hence Λ = Λ = Λ, ^⊥Λ = Λ^⊥ = 0. Then we can decompose the error Γ - Λ in the following way Γ - Λ 1 = + ^⊥=^⊥Γ - Λ^⊥ + Γ - Λ^⊥ +^⊥Γ - Λ +Γ - Λ (<ref>)= ^⊥Γ^⊥ + Γ^⊥ +^⊥Γ +Γ - Λ. We will follow those steps : * in Section <ref> we show how to treat the first terms ^⊥Γ^⊥, Γ^⊥, and ^⊥Γ, * in Section <ref> we present a first way of treating the term Γ - Λ, which will be developed in Sections <ref>, <ref>, <ref> and <ref>, * in Section <ref> we present a second way of treating Γ - Λ, leading to a different kind of inequalities. §.§ Treating ^⊥Γ^⊥, Γ^⊥ and ^⊥Γ We start by treating the first terms of (<ref>). We have ^⊥Γ^⊥2,δΓ = Γ^2= A^δ^⊥Γ^2 ^⊥2(<ref>)≤ ^⊥Γ2,δ^⊥Γ = ^⊥Γ2,δA^-δ A^δ^⊥Γ≤ c_A^δ^⊥Γ2,δ^2. Then, A^δ^⊥Γ2 + A^δΓ^⊥2≤ c_A^δA^δ^⊥Γ A^δ2 + A^δΓ^⊥ A^δ2 = 2 c_A^δA^δ^⊥Γ A^δ2 = 2 c_A^δA^δ^⊥Γ^2 A^δ A^-δ A^δ2 ≤ 2 c_A^δA^δ^⊥Γ2Γ A^δA^-δ A^δ = 2 c_A c_A Γ^δ^⊥Γ2,δ. §.§ A first treatment of Γ - Λ We present a first treatment of Γ - Λ, based on the decomposition = (Λ + Λ^⊥) = Λ^⊥ + Λ. More precisely, Γ - Λ = Λ^⊥Γ - ΛΛ + ΛΓ - ΛΛ^⊥ + ΛΓ - ΛΛ + Λ^⊥Γ - ΛΛ^⊥ (<ref>)= Λ^⊥ΓΛ + ΛΓΛ^⊥ + ΛΓ - ΛΛ + Λ^⊥ΓΛ^⊥. In the following sections, we provide inequalities for each of those terms. §.§ Treating ΛΓ - ΛΛ On the first hand, ΛΓ - ΛΛ^2 = ΛΓΛ - Λ^2 = ΛΓΛΓΛ - 2ΛΓΛ+ Λ hence ΛΓ - ΛΛ2^2 =ΛΓ - ΛΛ^2 = ΓΛΓΛ - 2 ΓΛ + ν. On the other hand, Γ - Λ^4 = Γ + Λ - ΛΓ - ΓΛ^2 =Γ + Λ - ΛΓΛ - ΓΛΓ - ΓΛ - ΛΓ + ΓΛΓΛ + ΛΓΛΓ, so Γ - Λ^22^2 = 2 ΓΛΓΛ - 2 ΓΛ + ν. Then, we see that ΛΓ - ΛΛ2 = 1√(2)Γ - Λ^22≤1√(2)Γ - Λ2^2. Finally, ΛΓ - ΛΛ2,δ = A^δΛΓ - ΛΛ2 = A^δΛ^2 Γ - ΛΛ2 ≤A^δΛΛΓ - ΛΛ2 = A Λ^δΛΓ - ΛΛ2 (<ref>)≤ 2^- 12A Λ^δΓ - Λ2^2 ≤ 2^- 12c_A^2 A Λ^δΓ - Λ2,δ^2 ≤1 + c_A (1+c_A)A Λ^2δΓ - Λ2^2. §.§ Treating Λ^⊥ΓΛ^⊥ We have A Λ^⊥ A^-1 = A A^-11 - A Λ A^-1 ≤ c_1 + A Λ A^-1≤ c_1 + c_A A Λ. Now, we develop Λ^⊥ΓΛ^⊥2,δ≤ c_A^δA^δΛ^⊥Γ^2 Λ^⊥ A^δ2≤ c_A^δA^δΛ^⊥Γ2^2 = c_A^δA^δΛ^⊥Γ - Λ2^2 = c_A^δA^δΛ^⊥ A^-δ A^δΓ - Λ2^2 ≤ c_A^δA^δΛ^⊥ A^-δ^2 Γ - Λ2,δ^2 ≤ c_A^δAΛ^⊥ A^-1^2δΓ - Λ2,δ^2 (<ref>)≤ c_A^δ c_1 + c_A A Λ^2δΓ - Λ2,δ^2 ≤c_A c_^2^δ1 + c_A (1+c_A) A Λ^2δΓ - Λ2,δ^2. §.§ Definition and properties of partial inverses To prepare the next section, we need Liouvillian operators, which are standard tools to partially invert Hamiltonians acting on density matrices, see for instance <cit.> and <cit.>. We show several basic equations that will be used. 
We define R_μ := _μ - H ^-1_⊥, the super-operators and ^+ acting on by B ↦ B := [ H , B], B ↦^+ B := - ∑_μ=1^ν R_μ B P_ψ_μ and the subspaces _1 := { B ∈ , B = Λ^⊥ B ^⊥}, _2 := { B ∈ , B = ^⊥ B Λ^⊥}. By definition of R_μ we have _μ - H R_μ = Λ^⊥ = R_μ_μ - H . We compute, for any B ∈, ^+ B = - ∑_μ=1^ν R_μ [ H , B] P_ψ_μ H P_ψ_μ = _μ P_ψ_μ= ∑_μ=1^ν_μ R_μ B P_ψ_μ - R_μ H B P_ψ_μ =∑_μ=1^ν R_μ_μ - H B P_ψ_μ(<ref>)= Λ^⊥ B ∑_μ=1^ν P_ψ_μ = Λ^⊥ B Λ. We can show that ^+ is the orthogonal projection onto _1. We provide the details here as well for the sake of completeness. For any B ∈, we have ^+ B = H , - ∑_μ=1^ν R_μ B P_ψ_μ = ∑_μ=1^ν_μ R_μ B P_ψ_μ - H R_μ B P_ψ_μ = ∑_μ=1^ν_μ - H R_μ B P_ψ_μ = Λ^⊥ B Λ, hence ^+ = ^+. Moreover, for any F, B ∈, F, ^+ B = F^* Λ^⊥ B Λ = Λ F^* Λ^⊥ B = Λ^⊥ F Λ^* B = ^+ F, B thus (^+ )^* = ^+. Finally, ^+ ^2 B = ^+ Λ^⊥ B Λ = Λ^⊥Λ^⊥ B ΛΛ = Λ^⊥ B Λ = ^+ B, hence (^+ )^2 = ^+, and we can conclude that ^+ is the orthogonal projection onto _1. From (<ref>), we have that Λ and Λ^⊥ commute with and H, hence for Q, G ∈{Λ,Λ^⊥} and for any operator B ∈, Q B G = Q B G. §.§ Treating Λ^⊥ΓΛ and ΛΓΛ^⊥ We now use the Liouvillian operator to treat Λ^⊥ΓΛ. The Euler-Lagrange equation for Γ is [H,Γ] = 0 and can be verified by developping Γ into projectors. There holds Γ = [ H , Γ] = [H,Γ] - H [^⊥, Γ] + [^⊥, Γ] H [H,Γ] = 0= - H [^⊥, Γ] - [^⊥, Γ] H . Next, Λ^⊥ΓΛ = ^+ Γ(<ref>)= ∑_μ=1^ν R_μ H [^⊥, Γ] + [^⊥, Γ] H P_ψ_μ ^⊥ P_ψ_μ = 0 R_μ^⊥ = 0= ∑_μ=1^ν R_μ H ^⊥Γ - Γ^⊥ H P_ψ_μ. This part is to be associated with Λ^⊥ ^⊥Γ = ^⊥ΓΛ + ^⊥ΓΛ^⊥ = ^⊥Γ∑_μ=1^ν P_ψ_μ + ^⊥Γ - Λ^2 Λ^⊥, where we see that the last term is quadratic in Γ - Λ and hence will be negligible. Thus Λ^⊥ΓΛ + ^⊥Γ = ∑_μ=1^ν1 +R_μ H H ^⊥Γ P_ψ_μ - ∑_μ=1^ν R_μΓ^⊥ H P_ψ_μ + ^⊥Γ - Λ^2 Λ^⊥. Taking the adjoint operator of (<ref>) yields ΛΓΛ^⊥ = ∑_μ=1^ν P_ψ_μΓ^⊥ H - H ^⊥Γ R_μ. As for the bounds, we have (1+ R_μ H) ^⊥Γ P_ψ_μ2,δ + P_ψ_μΓ^⊥ (1+ H R_μ) 2,δ = A^δ (1+ R_μ H) ^⊥Γ P_ψ_μΛ A^δ A^-δ2 + A^δΛ P_ψ_μΓ^⊥ (1+ H R_μ) A^δ A^-δ2 ≤ 2 c_A^δA^δ (1+ R_μ H) ^⊥Γ P_ψ_μΛ A^δ2 = 2 c_A^δA^δ (1+ R_μ H) ^⊥ A^-δ A^δ^⊥Γ P_ψ_μΛ A^δ2 ≤ 2 c_AA Λ^δA^δ (1+ R_μ H) ^⊥ A^-δ^⊥Γ2,δ. Similarly, R_μΓ^⊥ H P_ψ_μe,2 + P_ψ_μ H ^⊥Γ R_μe,2≤ 2 c_A^δA^δ R_μΓ^⊥ H P_ψ_μ A^δ2 R_μΛ = 0= 2 c_A^δA^δ R_μΓ - ΛΓ^⊥^2 H Λ P_ψ_μΛ A^δ2 ≤ 2 c_A^δA^δ R_μΓ-ΛΓ^⊥2^⊥ H ΛΛ A^δ ≤ 2 c_A^2 AΛ^δA^δ R_μΓ-Λ^⊥ H Λ^⊥Γ2,δ. Finally, ^⊥ΓΛ^⊥2,δ + Λ^⊥Γ^⊥2,δ≤ 2 c_A^δA^δ^⊥ΓΓ - ΛΛ^⊥ A^δ2 = 2 c_A^δA^δ^⊥ΓΓ - Λ A^δ A^-δΛ^⊥ A^δ A^-δ A^δ2 ≤ 2 c_A c_1 + A Λ^δ^⊥Γ2,δΓ - Λ2,δ. §.§ First form Remark that in this form, gathering all the terms, we have Γ - Λ =^⊥Γ^⊥ + Γ^⊥ +^⊥Γ + ΛΓ - ΛΛ + Λ^⊥ΓΛ^⊥ +∑_μ=1^νP_ψ_μΓ^⊥ H - H ^⊥Γ R_μ - R_μΓ^⊥ H - H ^⊥Γ P_ψ_μ. Now using (<ref>) to associate ^⊥Γ with ΛΓΛ, we obtain (<ref>) where Ω = ^⊥Γ -Λ^2 ^⊥ + ΛΓ - ΛΛ + ^⊥Γ - Λ^2 Λ^⊥ + s.a + Λ^⊥Γ - Λ^2 Λ^⊥ - ∑_μ=1^νR_μΓ -Λ^2 ^⊥ H P_ψ_μ + . From (<ref>) we know that ΛΓ - ΛΛ is quadratic in Γ-Λ. Hence, we immediately see with this form (<ref>) that when Γ - Λ is small, the leading term is ∑_μ=1^ν(1 + H R_μ )^⊥Γ P_ψ_μ+ s.a, and Ω is quadratic in Γ - Λ, and thus much smaller. We obtain (<ref>) from the developed inequalities. §.§ Second form In this section we present another way of treating Γ - Λ. For any z ∈\σ( H )_ 1mu height 2ex2mu →, we define the partial inverse z - H ^-1_ := {[ z - H _ 1mu height 2ex2mu →^-1 ,; 0 ^⊥, ]. extended by linearity on . By the definition (<ref>), z - H z - H ^-1_ = {[ ∈,; 0 ∈^⊥, ]. hence z - H z - H ^-1_ = . 
Then (z-H)^-1 - (z- H )^-1_ = (z-H)^-1 - (z-H) (z- H )^-1_ = (z-H)^-1 - (z- H + H - H) (z- H )^-1_ (<ref>)= (z-H)^-1 (H - H ) (z- H )^-1_ = (z-H)^-1^⊥ H (z- H )^-1_, where we used that (z- H )^-1_ = (z- H )^-1_ in the last step. We now use that Γ (z-H)^-1 = ∑_μ=1^ν P_ϕ_μ (z-H)^-1 = ∑_μ=1^ν P_ϕ_μ (z-H)^-1 = ∑_μ=1^ν P_ϕ_μ (z-E_μ)^-1 to deduce Γ (z-H)^-1Γ^⊥ = Γ^⊥ (z-H)^-1Γ = 0 and Γ (z-H)^-1 = Γ (z-H)^-1Γ, so we can write (z-H)^-1 = Γ (z-H)^-1Γ + Γ^⊥ (z-H)^-1Γ + Γ (z-H)^-1Γ^⊥ + Γ^⊥ (z-H)^-1Γ^⊥ = (z-H)^-1_⊥ + ∑_μ=1^ν P_ϕ_μ (z-E_μ)^-1. Similarly, (z- H )^-1_ = Λ (z- H )^-1_ + Λ^⊥ (z- H )^-1_Λ^⊥ = (z- H )^-1_⊥ + ∑_μ=1^ν P_ψ_μ (z-_μ)^-1. The operators (z-H)^-1_⊥ and (z- H )^-1_⊥ are holomorphic in the interior of so they will “participate passively” to the Cauchy integral. Moreover, 12π i∮_ (z-E_μ)^-1 (z-_α)^-1ẓ = δ_E_μ≠_α(_α-E_μ)^-1 + (E_μ-_α)^-1 = 0. We are ready to compute Γ - Λ = 12π i∮_ (z-H)^-1 - (z- H )^-1_ẓ = 12π i∮_ (z-H)^-1^⊥ H (z- H )^-1_ẓ = ∑_μ=1^ν P_ϕ_μ^⊥ H E_μ - H _⊥^-1 + _μ - H_⊥^-1^⊥ H P_ψ_μ. As for inequalities, we have A^δ P_ϕ_μ^⊥ H E_μ - H _⊥^-12 =A^δ A^-δ A^δΓ P_ϕ_μΓ^⊥^⊥ H E_μ - H _⊥^-12 ≤A^δ A^-δ A^δΓP_ϕ_μ2Γ^⊥^⊥ H E_μ - H _⊥^-1 ≤c_A c_A Γ^δE_μ - H _⊥^-1 H ^⊥^⊥Γ2,δ, and A^δ_μ - H_⊥^-1^⊥ H P_ψ_μ2 = A^δ A^-δ A^δ_μ - H_⊥^-1^⊥ H Λ P_ψ_μ2 ≤A^δ A^-δ A^δ_μ - H_⊥^-1^⊥ H ΛP_ψ_μ2 ≤ c_^δ A^δ_μ - H_⊥^-1^⊥ H Λ, and also using the inequalities of Section <ref>, we can deduce (<ref>) of Proposition <ref>. § PROOF OF PROPOSITION <REF> We now treat the vector case and aim at showing (<ref>) and (<ref>). §.§ Equality on eigenvectors Let us keep ν∈ general first, we will assume ν = 1 later. Given the setting of Proposition <ref>, for any μ∈{1,…,ν}, assuming that ϕ_μ,ψ_μ∈, there holds Λ^⊥ + P_ψ_μϕ_μ - ψ_μ = 1 + R_μ H^⊥ϕ_μ - 12 ϕ_μ - ψ_μ^2 ψ_μ + _μ - E_μ R_μϕ_μ - ψ_μ. The remaining component of ϕ_μ - ψ_μ which is not taken into account in this lemma is Λ P_ψ_μ^⊥ (ϕ_μ - ψ_μ). We have H - E_μϕ_μ = 0 and H - _μψ_μ = 0, thus H - _μϕ_μ - ψ_μ = E_μ - _μϕ_μ. We first use [ H , Λ] = 0, hence H Λ^⊥ = Λ^⊥ H and applying Λ^⊥ on the left we obtain Λ^⊥ H Λ^⊥ = Λ^⊥ H, so Λ^⊥_μ - HΛ^⊥ϕ_μ - ψ_μ = Λ^⊥_μ - Hϕ_μ - ψ_μ = Λ^⊥_μ - Hϕ_μ - ψ_μ- Λ^⊥_μ - H^⊥ϕ_μ - ψ_μ (<ref>)= _μ - E_μΛ^⊥ϕ_μ + Λ^⊥ H ^⊥ϕ_μ. We apply R_μ and use (<ref>), R_μΛ^⊥ = R_μ and R_μψ_μ = 0 to obtain Λ^⊥ϕ_μ - ψ_μ = R_μ H ^⊥ϕ_μ + _μ - E_μ R_μϕ_μ - ψ_μ. Moreover, in a gauge where ψ_μ, ϕ_μ∈, ψ_μ, ϕ_μ = 1 - 12 ϕ_μ - ψ_μ^2 hence P_ψ_μϕ_μ - ψ_μ = ψ_μ, ϕ_μ - 1ψ_μ = - 12 ϕ_μ - ψ_μ^2 ψ_μ. Finally, Λ^⊥ + P_ψ_μ = ^⊥ + Λ^⊥ + P_ψ_μ and we obtain (<ref>) by summing (<ref>) and (<ref>) with ^⊥ϕ_μ - ψ_μ = ^⊥ϕ_μ. We obtain (<ref>) by applying this lemma to ν = 1, in which case Λ^⊥ + P_ψ_μ = 1. For ν≥ 2, this methods with vectors does not enable to obtain a bound on the remaining component Λ P_ψ_μ^⊥, that is why the previous density matrix approach is useful. §.§ Equality on eigenvalues Let us first present a well-known and basic estimate showing that the errors between eigenvalues can be expressed as the square of the error between eigenvectors. We give a proof for the sake of completeness. Take two self-adjoint operators A and H, assume that A^-1 <+∞ and c_H := A^-1 H A^-1 <+∞. Take in the form domain of H and ϕ in the domain of H, such that H ϕ = E ϕ, ψ = ϕ = 1, and define := ψ, H ψ. Then E - = ϕ -ψ , (E - H) ϕ- ψ, E - ≤A^-1 (H-E) A^-1θ∈ [0,2π[A(ϕ - e^iθψ)^2. Usually the bound (<ref>) is used as E - ≤A^-1 ^2 E + A^-1 H A^-1θ∈ [0,2π[A(ϕ - e^iθψ)^2. 
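Before giving the proof, here is a quick numerical check of the identity stated in the lemma, on a random Hermitian matrix; the matrix and the trial vector ψ are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10

# Random real symmetric H with an exact eigenpair (E, phi), and a normalized trial vector psi.
S = rng.standard_normal((n, n))
H = (S + S.T) / 2
E_all, U = np.linalg.eigh(H)
E, phi = E_all[0], U[:, 0]

psi = phi + 0.05 * rng.standard_normal(n)      # small perturbation of the exact eigenvector
psi /= np.linalg.norm(psi)
eps = psi @ H @ psi                             # Rayleigh quotient <psi, H psi>

# Identity of the lemma: E - eps = <phi - psi, (E - H)(phi - psi)>,
# so the eigenvalue error is quadratic in the eigenvector error.
lhs = E - eps
rhs = (phi - psi) @ ((E * np.eye(n) - H) @ (phi - psi))
print(lhs, rhs, np.isclose(lhs, rhs))
```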
By using (E-H) ϕ = 0 and ψ=1, we have ϕ -ψ , (E-H) ϕ- ψ = -ϕ -ψ , (E-H) ψ = -(E-H) ϕ -ψ , ψ = (E-H) ψ , ψ = E -ψ , H ψ = E - . Then E - = A(ϕ -ψ) , A^-1 (E - H) A^-1 A ϕ- ψ ≤A^-1 (E-H) A^-1A(ϕ - ψ)^2. To conclude, we change → e^iθ. We now have ν = 1 and remove the subscript 1 everywhere. Let us now show (<ref>). First, R H-1 + R H^⊥ = R (H-) ^⊥ + R H- R H ^⊥ R ^⊥ = 0= R H ^⊥ + R H- R H ^⊥ R ( - H) = P_ψ^⊥= R H ^⊥ - P_ψ^⊥ R H ^⊥ P_ψ^⊥ R = R= 0. Moreover, using (<ref>) we have - Hϕ - ψ( H-) ψ = 0= - H1 + R H^⊥ϕ + - E R ϕ + 12 ϕ - ψ^2 ^⊥ H ψ = - H1 + R H^⊥ϕ + - E - H R ϕ + - E^⊥ - H R ϕ + 12 ϕ - ψ^2 ^⊥ H ψ = - H1 + R H^⊥ϕ + -E P_ψ^⊥ϕ + E - ^⊥ H R ϕ + 12 ϕ - ψ^2 ^⊥ H ψ. Similarly as in (<ref>), using (E-H) ϕ = 0 and ψ=1, we have E - = ϕ - ψ, (E - H) ϕ - ψ = ϕ - ψ, ( - H) ϕ - ψ + (E-) ϕ - ψ^2 (<ref>)= 1 + R H^⊥ϕ, - Hϕ - ψ - 12 ϕ - ψ^2 ψ , - Hϕ - ψ + - E R ϕ , - Hϕ - ψ+ (E-) ϕ - ψ^2. We now compute each of those terms. First, by (<ref>) we have 1 + R H^⊥ϕ , ( - H) (ϕ - ψ) =1 + R H^⊥ϕ , - H1 + R H^⊥ϕ + ( - E) 1 + R H^⊥ϕ , P_ψ^⊥ϕ + (E - ) 1 + R H^⊥ϕ, ^⊥ H R ϕ + 12 ϕ - ψ^2 1 + R H^⊥ϕ , ^⊥ H ψ. Then using R ^⊥ = 0 and (<ref>), we get 1 + R H^⊥ϕ , ( - H) (ϕ - ψ) =^⊥ϕ , - H1 + R H^⊥ϕ + ( - E) R H ^⊥ϕ , P_ψ^⊥ϕ + (E - ) ^⊥ϕ, ^⊥ H R ϕ + 12 ϕ - ψ^2 ^⊥ϕ , ^⊥ H ψ P_ψ^⊥ R = R= ^⊥ϕ , - H1 + R H^⊥ϕ + 12 ϕ - ψ^2 ^⊥ϕ , H ψ, giving the first term of (<ref>). Using (<ref>), the second term of (<ref>) comes from ψ, ( - H) (ϕ - ψ) = ψ, - H1 +RH^⊥ϕ = ψ , ( - H) ^⊥ϕ + ψ, ( - H) R H ^⊥ϕ ( - H) R = P_ψ^⊥= -H ψ, ^⊥ϕ. The third term of (<ref>) comes from R ϕ , - Hϕ - ψ(<ref>),(<ref>) R ^⊥ = 0= R ϕ , - E P_ψ^⊥ϕ R P_ψ^⊥ = R= - Eϕ, R ϕR ψ = 0= - Eϕ -ψ, R ϕ - ψ. Summing all the terms of (<ref>) yields E - = ^⊥ϕ, - H1 + R H^⊥ϕ + E-^2 ϕ - ψ, R ϕ - ψ + (E-) ϕ - ψ^2 + ϕ - ψ^2 ^⊥ϕ, H ψ. Moreover, ( - H)(1+ RH)^* = (1+HR)(-H) = ( - H)(1+ RH) is self-adjoint so ^⊥ϕ, - H1 + R H^⊥ϕ∈, ϕ - ψ, R ϕ - ψ∈. To conclude, we use that ^⊥ϕ, H ψ = -^⊥ϕ, (H-E) (ϕ - ψ). §.§ Inequalities (<ref>) and (<ref>) From (<ref>) we have ϕ - ψe≤A 1 + R H^⊥ϕ + 12 ϕ - ψ^2 ψe + - EAR ϕ - ψ ≤1 + A R A A^-1 H A^-1 A ^⊥ϕ + 12 c_Aϕ - ψϕ - ψeψe + A R - Eϕ - ψ ≤1 + ARA c_H^⊥ϕe + c_A 12 ϕ - ψψe + A R - Eϕ - ψe, and we obtain (<ref>) when c_A 12 ϕ - ψψe + A R - E≤ 12. Proving (<ref>) uses (<ref>). We can obtain (<ref>) with the same method, which also needs to use (<ref>). § BOUNDS ON THE RAYLEIGH-SCHRÖDINGER SERIES IN PERTURBATION THEORY In the proofs of the theorems of Section <ref>, we will need some general results about perturbation theory, which we show here. The main results are Lemma <ref> and Lemma <ref> on the boundedness of the Rayleigh-Schrödinger series E^n_μ and ϕ^n_μ. We take the context of an analytic and self-adjoint operator family H(λ), presented in Section <ref>. In particular, we consider a series of operators H(λ) = ∑_n=0^+∞λ^n H^n, and a cluster of eigenmodes E_μ(λ), ϕ_μ(λ)_μ=1^ν of H(λ), where all those maps are analytic in λ∈ ]-λ_0,λ_0[. We define respectively an energy norm and a parameter norm, for any operator B of , by respectively B2,ee := A B A2, Bp := A^-1 B A^-1. We have B2,1≤A^-1B2,ee, so the energy norm ·2,ee controls the energy norm ·2,1, defined in (<ref>). We use intermediate normalization, which is reviewed in Section <ref>. We set Φ_μ(λ) := ϕ_μ(λ)ϕ_μ^0,ϕ_μ(λ), ϕ_μ^n := 1n!^̣n λ̣^nϕ_μ(λ) _ 1mu height 2ex2mu λ = 0, Φ_μ^n := 1n!^̣n λ̣^nΦ_μ(λ) _ 1mu height 2ex2mu λ = 0, E_μ^n := 1n!^̣n λ̣^n E_μ(λ) _ 1mu height 2ex2mu λ = 0. 
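The following sketch illustrates the intermediate normalization on a toy matrix family: the Taylor coefficients Φ^n are estimated by a polynomial fit in λ, and the orthogonality Φ^n ⟂ Φ^0 for n ≥ 1, recalled in the appendix on intermediate normalization, is checked numerically. The family, the sampling points and the fitting degree are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

# Toy analytic family H(lam) = H0 + lam*H1 (illustrative assumption) with a simple ground state.
S0, S1 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
H0, H1 = (S0 + S0.T) / 2, (S1 + S1.T) / 2

def ground_state(lam, ref):
    w, U = np.linalg.eigh(H0 + lam * H1)
    v = U[:, 0]
    return v if v @ ref >= 0 else -v            # phase gauge: <phi0, phi(lam)> > 0

phi0 = ground_state(0.0, np.ones(n))

def intermediate(lam):
    # Intermediate normalization: Phi(lam) = phi(lam) / <phi0, phi(lam)>.
    v = ground_state(lam, phi0)
    return v / (phi0 @ v)

lams = np.linspace(-0.05, 0.05, 9)
Phis = np.array([intermediate(l) for l in lams])

# Taylor coefficients Phi^n estimated componentwise by a degree-3 polynomial fit in lam.
coeffs = np.polyfit(lams, Phis, deg=3)          # coeffs[-1-k] estimates Phi^k
for k in range(1, 4):
    print(f"<Phi^{k}, Phi^0> ~", coeffs[-1 - k] @ phi0)   # should vanish: Phi^k is orthogonal to Phi^0
```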
§.§ A preliminary bound on “Cauchy squares” First we will need the following result, which is a bound on a series that we can call the “Cauchy square” series. Take α, β >0 and let us define x_1 := α and for any n ∈, n≥ 2, x_n := β∑_s=1^n-1 x_n-s x_s. Then for any n ∈, x_n ≤α2 ζ32αβ^n-1n^ 32. where ζ is Riemann's zeta function so 2 ζ32≃ 5.2248... We made a numerical study giving evidence that x_nαπ^- 12 (4 αβ)^n-1 n^- 32n → +∞⟶ 1. First, we show that proving the result with β = 1 enables to show it for any β >0. Take a general β > 0. By using y_n := β x_n, we have y_1 = αβ and y_n = ∑_s=1^n-1 y_n-s y_s. We use the result for β = 1 on y_n, which yields the claimed result for x_n. Hence without loss of generality we can take β = 1. Let us prove (<ref>) by induction. We define ξ := 2 ζ32, for any x ∈ ]0,n[ we define g(x) := x(n-x)^- 32, we extend it on \{0,n}, and we define S_n := ∑_s=1^n-1 g(s) for any n ≥ 2. For n=1 the right hand side of (<ref>) is α so the initial step is valid. Take n ∈, n ≥ 2, such that for any s ∈{1,…,n-1}, x_s ≤α (ξα)^s-1 s^- 32. Then x_n ≤α^2 ξα^n-2∑_s=1^n-11s(n-s)^ 32 = α^2 ξα^n-2 S_n. Defining G(x) := n-2x√(x(n-x)) we have G'(x) = -n^22 g(x) so ∫_1^n-1 g = 2n^2G(1) - G(n-1) = 4 (n-2)n^2√(n-1). Moreover, g(z) = g(z) and by the Abel-Plana formula, S_n = ∫_1^n-1g(s) ṣ+12g(1)+12g(n-1) - 2 ∫_0^∞g(1+i y)-g(n-1+i y)e^2 π y-1ỵ =5n^2-12n+8(n-1)^ 32 n^2-4 ∫_0^∞g(1+i y)e^2 π y-1ỵ hence n → +∞ n^ 32 S_n = 5 -4 ∫_0^∞(1+iy)^ 32e^2 π y-1ỵ = 2 ζ32 = ξ, where we used the Abel-Plana formula again. Moreover, (n^3/2 S_n)_n ≥ 2 is an increasing sequence, thus S_n ≤ξ/n^3/2. Then (<ref>) enables to conclude. §.§ Bound on the Rayleigh-Schrödinger series : the non-degenerate case We are now ready to obtain a bound on E^n_μ and Φ^n_μ. In particular, this provides a bound on the convergence radius of the perturbation series. We take the non-degenerate case, that is ν = 1, so μ = 1. For any m ∈ and any n ∈, we define h^m_μ := H^m-E^m_μ, Q^n_μ := h^n_μ + ∑_s=1^n-1 h^n-s_μ K_μ(0) Q^s_μ. Then, by a classical result which can be found in <cit.> for instance, we have Φ_μ^n = K_μ(0) Q^n_μΦ_μ^0, ∀ n ≥ 1, where the partial inverse K_μ was defined in(<ref>). Moreover, we can compute E^n_μ = Φ_μ^0, Q^n_μ + E^n_μΦ_μ^0. Let us consider the Hamiltonian family H(λ) = ∑_n=0^+∞λ^n H^n under the assumptions of Sections <ref> and <ref>, with ν = 1. The non-degenerate eigenmode is denote by (E_μ(λ),ϕ_μ(λ)), we fix the phasis of ϕ_μ(λ) such that ϕ_μ^0,ϕ_μ(λ)∈_+, the intermediate normalization eigenvector is Φ_μ(λ) := ϕ_μ(λ)ϕ_μ^0,ϕ_μ(λ) and the Taylor series are written E_μ^n := 1n!^̣n λ̣^n E_μ(λ) _ 1mu height 2ex2mu λ = 0, ϕ_μ^n := 1n!^̣n λ̣^nϕ_μ(λ) _ 1mu height 2ex2mu λ = 0, Φ_μ^n := 1n!^̣n λ̣^nΦ_μ(λ) _ 1mu height 2ex2mu λ = 0. Then for any n ∈, E^n_μ + ϕ_μ^ne + Φ_μ^ne ≤ a b^n, where a,b ∈_+ are independent of n. For clarity, we drop the subscripts 1, so E := E_1, ϕ := ϕ_1, Φ := Φ_1, K := K_1, Q := Q_μ, h := h_μ. We define c_H,∞ :=n ∈A^-1 H^n A^-1, c_K := A K A, c_A := A^-1 . For any n ∈ let us define q^n := H^n + ∑_s=1^n-1 h^n-sK Q^s. From (<ref>) we have Q^n = q^n - E^n, from (<ref>) we have E^n = Φ^0, q^n Φ^0, and we recall that h^n = H^n - E^n hence E^n = A Φ^0, A^-1 q^n A^-1 A Φ^0≤q^npΦ^0e^2, Q^np ≤q^np + c_A^2 E^n≤q^np1 + c_A^2 Φ^0e^2, h^np ≤ c_H,∞ + c_A^2 q^npΦ^0e^2. Thus from (<ref>) we have q^np ≤ c_H,∞ + c_K ∑_s=1^n-1h^n-spQ^sp ≤ c_H,∞ + c_K 1 + c_A^2 Φ^0e^2∑_s=1^n-1c_H,∞ + q^n-sp c_A^2 Φ^0e^2q^sp ≤ c_H,∞ + β∑_s=1^n-11 + q^n-spq^sp where β := c_K 1 + c_A^2 Φ^0e^2maxc_H,∞ ,c_A^2 Φ^0e^2. 
Defining y_n := q^np +1, we have y_n ≤ 1 + c_H,∞ + β∑_s=1^n-1 y_n-s (y_s -1) ≤ (1+c_H,∞ + β) ∑_s=1^n-1 y_n-s y_s. We now show the bound for ϕ^ne. For any n ∈ we define Y^n and X^n as in (<ref>), and u_n := maxY^n , X^n-1 , y_n . We have Y^n≤∑_s=1^n-1 u_n-s u_s and X^n-1= -∑_s=0^n-3 X^s Y^n-1-s = - ∑_s=1^n-2 X^s-1 Y^n-s so X^n-1≤∑_s=1^n-2 u_n-s u_s. We deduce that u_n ≤ (1+c_H,∞ + β) ∑_s=1^n-1 u_n-s u_s. Using Lemma <ref>, we deduce that there are a,b > 0 such that u_n ≤ a b^n for any n ∈. We can propagate this result for q^np, E^n, Q^np using (<ref>), for Φ^ne by using (<ref>), giving Φ^ne≤ c_K c_A^2 q^npΦ^0e1 + c_A^2 Φ^0e^2, and for ϕ^ne by using (<ref>). Defining the perturbation approximation in the intermediate normalization (λ) := ∑_n=0^ℓλ^n ϕ_μ^n, (<ref>) yields (λ)e≤ a ∑_n=0^ℓλb^n = a 1 - λb^ℓ +11 - λb≤a1 - λb, and the radius of convergence of the right-hand side is b^-1. Moreover, ϕ_μ(λ) - (λ) e≤a1 - λbλ b^ℓ +1. §.§ Bound on the Rayleigh-Schrödinger series : the degenerate case, when degeneracy is lifted at first order To show Theorem <ref>, we will need a similar lemma as Lemma <ref> but for the degenerate case. We consider a degenerate case, so for all α,β∈{1,…,ν}, E^0_α = E^0_β. As before, Γ^0 is the orthogonal projector onto ⊕_μ=1^νH^0 - E^0_μ. It is well-known that the ν eigenvalues of Γ^0 H^0 Γ^0, as an operator of Γ^0, are the E_μ^1 = λ̣ E_μ(λ) _ 1mu height 2ex2mu λ = 0 = ϕ^0_μ , H^1 ϕ^0_μ . Here we assume that degeneracy is lifted at first order for some μ∈{1,…,ν}, meaning that for any α∈{1,…,ν}\{μ}, E_α^1 ≠ E_μ^1. §.§.§ Pseudo-inverses In the degenerate case, we need to introduce two kinds of partial inverse operators. The “zeroth order” partial inverse K_μ(0) was defined in (<ref>), and we set K^0_μ := K_μ(0). We take some μ∈{1,…,ν} and assume that degeneracy is lifted at first order We also set K^1_μ := _μ (0), where _μ (0) was defined in (<ref>). §.§.§ Series We present degenerate perturbation theory as in the work of Hirschfelder <cit.>, in the case where all degeneracies are lifted at first order. We define h^n_μ := H^n-E^n_μ, the operators q^n_0,μ := H^n + ∑_s=1^n-1 h^s_μ K^0_μ Q^n-s_0,μ, q^n_1,μ := q^n_0,μ + ∑_s=2^n-1 Q^s_0,μ K^1_μ Q^n-s+1_1,μ and Q^n_i,μ = q^n_i,μ - E^n_μ for i ∈{0,1}. Then for any m ∈ and any n ∈, we have E^m_μ = ϕ^0_μ, q^m_1,μϕ^0_μ, Φ^n_μ = K^0_μ Q^n_1,μ + K^1_μ Q^n+1_1,μϕ^0_μ. Let us consider the Hamiltonian family H(λ) = ∑_n=0^+∞λ^n H^n under the assumptions of Sections <ref> and <ref>. We take μ∈{1,…,ν} and an eigenmode denoted by (E_μ(λ),ϕ_μ(λ)). We consider the degenerate case, where the degeneracy is lifted at first order, as described in Section <ref>, i.e. E^0_μ = E^0_α for all α∈{1,…,ν} and E^1_μ≠ E^1_α for all α∈{1,…,ν}\{μ}. We fix the phasis of ϕ_μ(λ) such that ϕ_μ^0,ϕ_μ(λ)∈_+, the intermediate normalization eigenvector is Φ_μ(λ) := ϕ_μ(λ)ϕ_μ^0,ϕ_μ(λ) and the Taylor series are written E_μ^n := 1n!^̣n λ̣^n E_μ(λ) _ 1mu height 2ex2mu λ = 0, ϕ_μ^n := 1n!^̣n λ̣^nϕ_μ(λ) _ 1mu height 2ex2mu λ = 0, Φ_μ^n := 1n!^̣n λ̣^nΦ_μ(λ) _ 1mu height 2ex2mu λ = 0. Then for any n ∈, E^n_μ + ϕ^n_μe + Φ^n_μe≤ a b^n, where a,b >0 are independent of n and μ, and depend polynomially on A K^0_μ A. We recall that c_H,∞ was defined in (<ref>). From (<ref>) we have E^n_μ ≤q^n_1,μpΦ^0_μe^2, h^n_μp ≤ c_H,∞ + c_A^2 E^n_μ≤ c_H,∞ + c_A^2 q^n_1,μpΦ^0_μe^2, and for i ∈{0,1}, Q^n_i,μp≤q^n_i,μp + q^n_1,μpΦ^0_μe^2. 
Next, q^n_0,μp ≤ c_H,∞ + K^0_μ∑_s=1^n-1h^s_μpQ^n-s_0,μp ≤ c_H,∞ + K^0_μ∑_s=1^n-1c_H,∞ + c_A^2 q^s_1,μpΦ^0_μe^2 ×q^n-s_0,μp + q^n-s_1,μpΦ^0_μe^2, We define C := maxc_H,∞,K^0_μ,K^1_μ, c_A^2 Φ^0_μe^2,1+Φ^0_μe^2, and we have q^n_0,μp≤ C + C^3 ∑_s=1^n-11 + q^s_1,μpq^n-s_0,μp + q^n-s_1,μp. Moreover, q^n_1,μp≤q^n_0,μp + K^1_μ∑_s=2^n-1Q^s_0,μpQ^n-s+1_1,μp ≤q^n_0,μp +1 + Φ^0_μe^2K^1_μ∑_s=2^n-1q^s_0,μp + q^s_1,μpΦ^0_μe^2q^n-s+1_1,μp ≤q^n_0,μp + C^3 ∑_s=2^n-1q^s_0,μp + q^s_1,μpq^n-s+1_1,μp. We define x_n := q^n_0,μp + q^n_1,μp + 1 and estimate x_n ≤ 2C + 2C^3 ∑_s=1^n-11 + q^s_1,μpq^n-s_0,μp + q^n-s_1,μp + C^3 ∑_s=2^n-1q^s_0,μp + q^s_1,μpq^n-s+1_1,μp ≤ 2C1 + C^2∑_s=1^n-1 x_s x_n-s + ∑_s=2^n-1 x_s x_n-s+1 = 2C(1+C^2) x_1 x_n-1 + ∑_s=1^n-2x_s + x_s+1 x_n-s ≤ 4C(1+C^2) ∑_s=1^n-2x_s + x_s+1 x_n-s. Then with y_n := x_n + x_n+1, we have y_n ≤ x_n + 4C(1+C^2) ∑_s=1^n-1x_s + x_s+1 x_n+1-s ≤ y_n-1 + 4C(1+C^2) ∑_s=1^n-1 y_s y_n-s≤ (1 + 4C(1+C^2)) ∑_s=1^n-1 y_s y_n-s, where we used that 1/y_1 ≤ 1 in the last inequality. Using Lemma <ref>, we deduce that there are a,b > 0 such that for any n ∈, y_n ≤ a b^n, and then q^n_i,μp≤ ab^n for i ∈{0,1}. We propagate this property for E^n_μ, Q^n_i,μp using (<ref>) and (<ref>) and for Φ^n_μe by using Φ^ne≤K^0_μQ^n_1,μp + K^1_μQ^n+1_1,μpΦ^0_μe. As we see in C, we can bound a and b using polynomials in c_H,∞,K^0_μ, K^1_μ, Φ^0_μe, c_A. The bound on ϕ^n_μe can be deduced with the same method as in Lemma <ref>. § COEFFICIENTS IN DENSITY MATRIX PERTURBATION THEORY In this section we present how to compute the coefficients of the perturbative series in density matrix perturbation theory. We use the Liouvillian operator and its partial inverse, a classical too in perturbation theory, see <cit.>, used in <cit.>, with a detailed exposition in <cit.>. See also for instance <cit.>. §.§ Definitions We choose the same context and notations as in Section <ref>, in particular, we consider a series of operators H(λ) = ∑_n=0^+∞λ^n H^n. We consider a cluster of eigenmodes E_μ(λ), ϕ_μ(λ)_μ=1^ν of H(λ), where all those maps are analytic in λ∈ ]-λ_0,λ_0[. Let us take λ_0 small enough so that there is κ_H >0, independent of λ, such that for any λ∈ ]-λ_0,λ_0[, σ(H) \{E_μ(λ)}_μ=1^ν∩∪_μ=1^ν ]E_μ(λ) - κ_H , E_μ(λ) + κ_H [ = ∅. Take (_μ(λ))_μ=1^ν∈^ν such that _μ(λ) , _α(λ) = δ_μα, _μ(λ) ∈H(λ) - E_μ(λ), the density matrix corresponding to those eigenmodes is Γ(λ) := ∑_μ=1^ν_μ(λ) = ∑_n=0^+∞λ^n Γ^n, where Γ^n := 1n!^̣n λ̣^nΓ(λ) _ 1mu height 2ex2mu λ = 0, and is independent of the choice of the frame _μ(λ), as long as it respects (<ref>). §.§ Statement Let us take (_μ)_μ=1^ν∈^ν such that _μ∈H(0) - E_μ(0) and _μ,_α = δ_μα for any μ,α∈{1,…,ν}, so Γ^0 = ∑_μ=1^ν_μ. Let us define A_0 := Γ^0, B_0 := 0, C_0 := 0 and for any n ∈, n ≥ 1, A_n := - ∑_k=1^n-1A_n-k A_k + B_n-k^* B_k, C_n := ∑_k=1^n-1C_n-k C_k + B_n-k B_k^* b_n := Γ^0^⊥∑_k=0^n-1H^n-kA_k + B_k - B_k^* + C_k H^n-kΓ^0 B_n := ∑_μ =1^ν K_μ(0) b_n P__μ, where ∑_m=a^b := 0 if b < a, and K_μ(0) is defined in (<ref>). We see that A_n and C_n are self-adjoint. The following result is classical and comes from <cit.>. See also <cit.> for other methods of computing Γ^n. Let us consider a Hilbert space , a self-adjoint energy operator A, an analytic family of self-adjoint operators H(λ) = ∑_n=0^+∞λ^n H^n, we make the assumptions of Sections <ref> and <ref>. Take ν∈, consider a set of eigenmodes E_μ(λ), ϕ_μ(λ)_μ =1^ν, analytic in λ, and the corresponding density matrix Γ(λ) := ∑_μ=1^νϕ_μ(λ) = ∑_n=0^+∞λ^n Γ^n. 
Then for any n ∈, Γ^n = A_n + B_n + B_n^* + C_n, where the involved operators are define in (<ref>). For the sake of completeness, we give a more mathematical proof in Section <ref>. Remark that Γ^n is invariant under the gauge change (_μ)_μ=1^ν =: → U, for any unitary U ∈_ν. Hence A_n, B_n and C_n are also invariant under this transformation. So one does not need to compute the exact ϕ_μ(0), which are notoriously hard to obtain. Moreover, Γ^0 being the active space, we have A_n = Γ^0 Γ^n Γ^0, B_n = (Γ^0)^⊥Γ^n Γ^0, B_n^* = Γ^0 Γ^n (Γ^0)^⊥, C_n = (Γ^0)^⊥Γ^n (Γ^0)^⊥, and we can rewrite A_n = - Γ^0 ∑_k=1^n-1Γ^n-kΓ^k Γ^0, C_n = Γ_0^⊥∑_k=1^n-1Γ^n-kΓ^k Γ_0^⊥ b_n = Γ^0^⊥∑_k=0^n-1 [H^n-k,Γ^k] Γ^0, B_n = ∑_μ =1^ν K_μ(0) b_n P__μ. Then to prove (<ref>) we will need to following bound on the Γ^n series. There exist a,b > 0, independent of n ∈, such that for any n ∈, A Γ^n A2≤ a b^n. §.§ Proof of Proposition <ref> §.§.§ First relations The Euler-Lagrange equation [H(λ),Γ(λ)] = 0 gives that for any n ∈, ∑_k=0^nH^n-k, Γ^k = 0. Moreover, Γ(λ)^* = Γ(λ) and Γ(λ)^2 = Γ(λ) so for any n ∈, Γ^n^* = Γ^n and ∑_k=0^nΓ^n-kΓ^k = Γ^n. §.§.§ Decomposition of the projection We define P := Γ^0, P^⊥ := 1 - Γ^0 and _n := P Γ^n P, _n := P^⊥Γ^n P, _n := P^⊥Γ^n P^⊥ so _n^* = _n and _n^* = _n and Γ^n = _n + _n + _n^* + _n so we want to compute the series _n, _n, _n. §.§.§ Formulas for _n and _n We define the Liouvillian := H^0, ·, and denotes the space of Hilbert-Schmidt operators, defined in (<ref>). For any B,F ∈, we compute ( B, F)_2 = B^* F = [H^0,B]^* F = B^* H^0 F - H^0 B^* F = B^* H^0F - B^* F H^0 = B^* [H^0,F] = B^* F = (B, F)_2, hence is self-adjoint, or in other words, ^* =. We define X_n := -∑_k=0^n-1H^n-k, Γ^k, Y_n := ∑_k=1^n-1Γ^n-kΓ^k. Then (<ref>) transforms to Γ^n = X_n and (<ref>) transforms to Γ^n P + P Γ^n - Γ^n = -Y_n. From (<ref>) we can compute Γ^n P + P Γ^n - Γ^n = _n - _n so (<ref>) implies _n - _n = - Y_n and _n = - P Y_n P _n = P^⊥ Y_n P^⊥. so we can develop Y_n (<ref>) (<ref>)= ∑_k=1^n-1_n-k + _n-k + _n-k^* + _n-k_k + _k + _k^* + _k = ∑_k=1^n-1_n-k_k + _k^* + _n-k_k + _k^* + _n-k^* _k + _k + _n-k_k + _k. Applying P on the left and on the right, and applying P^⊥ on the left and on the right, together with (<ref>) we obtain _n = -∑_k=1^n-1_n-k_k + ^*_n-k_k, _n = ∑_k=1^n-1_n-k_k + _n-k_k^*. §.§.§ _n We have H^0 P = P H^0 = PH^0 P and H^0 P^⊥ = P^⊥ H^0 = P^⊥ H^0 P^⊥ hence for Q, G ∈{P,P^⊥} and for any operator F ∈, Q F G = Q F G. By taking F = X_n, Q = P^⊥, G = P, we have _n = P^⊥ X_n P (<ref>) (<ref>)= ∑_k=0^n-1_k^* + _k H^n-k P - P^⊥ H^n-k_k + _k. §.§.§ Partial inverse of the Liouvillian We define := {L ∈ | L = P^⊥ L P}. Let us take (_μ)_μ=1^ν such that _μ∈H(0) - E_μ(0) and _μ,_α = δ_μα for any μ,α∈{1,…,ν} and the operator of ^+ : [ ⟶ ; F ⟼ - ∑_μ=1^ν K_μ(0) F P__μ. ] For any F ∈, we compute ^+ F = -∑_μ =1^ν K_μ(0) [H^0,F] P__μ= ∑_μ =1^ν K_μ(0) F H^0 P__μ- K_μ(0) H^0 F P__μ = ∑_μ =1^ν E_μ(0) K_μ(0) F P__μ + P^⊥ F P__μ - E_μ(0) K_μ(0) F P__μ = P^⊥ F P, where we used that K_μ(0) H^0 - E_μ(0) = - P^⊥. So if F ∈, then ^+ F = F. By a similar computation, we have ^+ = ^+. Moreover, (^+ )^2 = ^+ and (^+ )^* = ^+, where the dual operator is taken with respect to the scalar product ·,·. Hence ^+ is the orthogonal projection onto , and ^+ is a partial inverse. §.§.§ Formula for _n Since _n ∈, and since ^+ is the orthogonal projection onto , we have _n = ^+ _n (<ref>)= ∑_k=0^n-1^+ _k^* + _k H^n-k P - ^+ P^⊥ H^n-k_k + _k (<ref>)= ∑_μ =1^ν∑_k=0^n-1 K_μ(0) H^n-k_k + _k - _k^* + _k H^n-k P__μ. 
§.§.§ Conclusion The recursive relations (<ref>) and (<ref>) respected by _n, _n and _n are the same as the ones respected by A_n, B_n and C_n. Thus from _0 = A_0, _0 = B_0, _0 = C_0, we conclude that _n = A_n, _n = B_n and _n = C_n for any n ∈. §.§ Proof of Proposition <ref> We recall that B2,ee := A B A2, c_K := 1 ≤μ≤νA KA, c_H,∞ := n ∈A^-1 H^n A^-1. For any n ∈, we define v_n := maxA_n2,ee, B_n2,ee, C_n2,ee. Let us take n ∈ and. We have A B_n A = ∑_μ =1^ν∑_k=0^n-1 A K_μ(0) A ×A^-1H^n-k A^-1 A A_k + B_k A^-1- A^-1B_k^* + C_k A A^-1 H^n-k A^-1 × A P__μ A. Moreover, for any k ∈ and any L ∈{A_k,B_k,C_k}, A L A^-1 = A L A A^-2≤ c_A^2 ALA≤ c_A^2 L2,ee≤ c_A^2 v_k, and A P__μ A2 = A Γ^0 P__μΓ^0 A2≤A Γ^0P__μ2Γ^0A = A Γ^0^2, hence B_n2,ee≤ 4 ν c_K c_H,∞ c_A^2 A Γ^0^2 ∑_k=0^n-1 v_k. For any k ∈ we define u_k := v_k +1, we have v_k ≤ u_k ≤ u_k u_n-k so for any n ≥ 1, B_n2,ee + 1 ≤1+4 ν c_K c_H,∞ c_A^2 A Γ^0^2 ∑_k=0^n-1 u_k u_n-k. Similarly, we have A_n2,ee≤ 2 c_A^2 ∑_k=1^n-1 v_k v_n-k, C_n2,ee≤ 2 c_A^2 ∑_k=1^n-1 v_k v_n-k, so for any n ≥ 2, A_n2,ee + 1 ≤1 + 2 c_A^2∑_k=1^n-1 u_k u_n-k, C_n2,ee + 1 ≤1 + 2 c_A^2∑_k=1^n-1 u_k u_n-k and we can conclude that u_n ≤1+ 2 c_A^2 max1, 2 ν c_K c_H,∞ c_A^2 A Γ^0^2∑_k=1^n-1 u_k u_n-k. We obtain (<ref>) by applying Lemma <ref> to u_n. § PROOF OF COROLLARIES <REF> AND <REF> We only give a proof of Corollary <ref> in detail, because the proof of Corollary <ref> uses the exact same method. To apply Rellich's theorem, we remark that we automatically have n ∈A^-1 H^n A^-1 < +∞ because A^-1 H^n A^-1≤ c_^2 A^-1 H^n A^-1, which was already assumed to be bounded. §.§ Proof of Corollary <ref> The proof of Corollary <ref> uses Proposition <ref>, and Lemma <ref>. Defining c_H,∞ := n ∈A^-1 H^n A^-1 < +∞, for any λ < 1 we have A^-1 H(λ) A^-1≤ c_H,∞1 - λ^-1 so for any λ < 1/2, A^-1 H(λ) A^-1≤ 2c_H,∞. We have - 12 ϕ(λ) - ψ(λ)^2 ψ(λ) + (λ) - E(λ) R(λ) ϕ(λ) - ψ(λ)e ≤ 12 c_A^2 ϕ(λ) - ψ(λ)e^2 ψ(λ)e^2 + c_A^2 c_R E(λ) - (λ) ϕ(λ) - ψ(λ)e (<ref>) (<ref>)≤ c_A^2 ϕ(λ) - ψ(λ)e^2 12 ψ(λ)e^2 + c_R c_A^2 E(λ) +2c_H,∞ϕ(λ) - ψ(λ)e. Since ϕ(0) = ψ(0), and by continuity of the maps λ↦ϕ(λ) and λ↦ψ(λ), we have ϕ(λ) - ψ(λ)e→ 0 as λ→ 0, and then we can take λ_0 small enough such that for any λ∈ ]-λ_0, λ_0[, ϕ(λ) - ψ(λ)e ≤ 12 c_A^-2 12 ψ(λ)e^2 + c_R c_A^2 E(λ) + 2 c_H,∞ϕ(λ) - ψ(λ)e^-1. We use ϕ(λ) , ψ(λ)∈, to apply Proposition <ref> at each λ. Thus from (<ref>) and (<ref>) (see also (<ref>)) we obtain that for any λ∈ ]-λ_0,λ_0[, ϕ(λ) - ψ(λ)e≤ 2 1 + c_A ARA^⊥ϕ(λ)e. We recall from Appendix <ref> that Φ(λ) := ϕ(λ)ϕ^0, ϕ(λ), Φ^n := 1n!^̣nλ̣^nΦ(λ)_ 1mu height 2ex2mu λ = 0, ϕ(λ) = Φ(λ)Φ(λ). so ^⊥ϕ(λ) = Φ(λ)^-1^⊥Φ(λ)  (<ref>)= Φ(λ)^-1∑_n=ℓ +1^+∞λ^n Φ^n. We use the phasis gauge ϕ^0 , ϕ(λ)∈_+ to obtain the bounds on the derivatives (<ref>), and thus there is c,b > 0 independent of ℓ and λ such that ϕ(λ) - ψ(λ)e≤ c λ b^ℓ +1. From this, we deduce that ϕ^n = ψ^n for any n ∈{0,…,ℓ}. From this last inequality and (<ref>) we also obtain that for some c,b > 0 independent of ℓ and λ, ϕ(λ) - ψ(λ)^2 ≤ c λ b^2(ℓ +1), (λ) - E(λ)≤ c λ b^2(ℓ +1). Using (<ref>) once more, we have ϕ(λ) - ψ(λ) - λ^ℓ +1 1 + R(0)H(0)^⊥ϕ^ℓ +1 = 1 + R(λ)H(λ)^⊥ϕ(λ) - λ^ℓ +1 ϕ^ℓ +1 + λ^ℓ +1 R(λ) H(λ) - R(0)H(0)^⊥ϕ^ℓ +1 - 12 ϕ(λ) - ψ(λ)^2 ψ(λ) + (λ) - E(λ) R(λ) ϕ(λ). We now seek to bound each of those terms. First, ^⊥ϕ(λ) - λ^ℓ +1 ϕ^ℓ +1e = ∑_n=ℓ +2^+∞λ^n ϕ^ne(<ref>)≤ c λ b^ℓ + 2 for some c,b > 0 independent of ℓ and λ. Then, by analyticity of λ↦ R(λ) H(λ), at λ = 0, we have A R(λ) H(λ) - R(0)H(0) A^-1≤ c λ, where c does not depend on λ. 
We can reproduce the same reasoning for the norm ·. Finally, also using (<ref>),  (<ref>) yields, for δ∈{0,1}, ϕ(λ) - ψ(λ)e,δ - λ^ℓ +1ξ^_,ℓ,δ ≤ϕ(λ) - ψ(λ) - λ^ℓ +1 1 + R(0)H(0)^⊥ϕ^ℓ +1e,δ ≤ c λ b^ℓ + 2. The proof of the eigenvalue bound (<ref>) is similar. §.§ Proof of Corollary <ref> We remark directly from (<ref>) that the leading order of Γ(λ) - Λ(λ) is ∑_μ=1^ν1 + R_μ(λ)H(λ)^⊥Γ(λ) P_ψ_μ(λ) + s.a = λ^ℓ +1∑_μ=1^ν1 + R_μ(0)H(0)^⊥Γ^ℓ +1 P_ϕ_μ(0) + s.a + O(λ^ℓ +2). We used that Γ(λ) ^⊥ = ∑_n=ℓ+1^+∞λ^n Γ^n ^⊥, because ^⊥Γ^k = 0 for all k ∈{0,…,ℓ} by the assumption (<ref>) stating that Γ^k ⊂. The bounds (<ref>) and (<ref>) are obtained by using similar arguments, and follow the same steps. We need the bound (<ref>) on the derivatives Γ^n for showing (<ref>). §.§ Proof of Lemma <ref> We have ϕ_μ(λ) = ∑_n=0^+∞λ^n ϕ_μ^n and Γ(λ) = ∑_μ=1^νϕ_μ(λ) = ∑_0 ≤ k,p < +∞ 1 ≤μ≤νλ^k+p|ϕ_μ^k⟩⟨ϕ_μ^p| hence identifying the coefficients of λ^n gives Γ^n = ∑_k=0^n∑_μ=1^ν|ϕ_μ^n-k⟩⟨ϕ_μ^k|. From this we see that ⊕_n=0^ℓΓ^n = ϕ^n_μ | 0≤ n ≤ℓ, 1 ≤μ≤ν. Moreover, take μ∈{1,…,ν}, then Γ^n _μ = ∑_α=1^νϕ^n_αϕ_α^0 , _μ + ∑_1 ≤α≤ν 0 ≤ k ≤ n-1ϕ_α^n-kϕ^k_α, _μ. Since (_μ)_μ=1^ν is a basis of (ϕ^0_μ)_μ=1^ν, this relation enables to show recursively the following proposition for any n ∈, (n) : Γ^k _α | 0≤ k ≤ n, 1 ≤α≤ν = ϕ_α^k | 0≤ k ≤ n, 1 ≤α≤ν. § PROOF OF THEOREM <REF> We consider Appendix <ref> for intermediate normalization. Let us recall that Φ_μ(λ) := ϕ_μ(λ)ϕ_μ^0,ϕ_μ(λ), Ψ_μ(λ) := ψ_μ(λ)ϕ_μ^0,ψ_μ(λ), ϕ_μ^n := 1n!^̣n λ̣^nϕ_μ(λ) _ 1mu height 2ex2mu λ = 0, Φ_μ^n := 1n!^̣n λ̣^nΦ_μ(λ) _ 1mu height 2ex2mu λ = 0, Ψ_μ^n := 1n!^̣n λ̣^nΨ_μ(λ) _ 1mu height 2ex2mu λ = 0, _μ^n := 1n!^̣n λ̣^n_μ(λ) _ 1mu height 2ex2mu λ = 0. The proof of this result is different from the proof of Corollaries <ref> and <ref>. In particular it does not use the results of Section <ref>. §.§ Core lemma Before starting the proof, we show the following lemma, giving the error at order n+1 when the previous orders are equal. Take n ∈. If for all k ∈{0,…,n}, Φ^k_μ = Ψ_μ^k and E^k_μ = _μ^k, then E^n+1_μ = ^n+1_μ and Φ^n+1_μ - Ψ^n +1_μ = 1+_μ(0) H^11 + R_μ(0) H^0^⊥Φ^n+1_μ. We have ϕ^0_α∈ for any α∈{1,…,ν} so Γ^0 = Γ^0 = , hence Γ^0^⊥ = Γ^0^⊥, we have 1 = ^⊥ + Γ^0^⊥ + Γ^0 P_ϕ^0_μ^⊥ + P_ϕ^0_μ and we will split = ^⊥⊕Γ^0^⊥⊕Γ^0 P_ϕ^0_μ^⊥⊕ P_ϕ^0_μ. We will compute Φ^n+1_μ - Ψ^n +1_μ on each of those subspaces. We define ξ^q_μ := Φ^q_μ - Ψ^q_μ for any q ∈. For any q ∈, Φ^q_μ⊥Φ^0_μ and Ψ^q_μ⊥Φ^0_μ, hence P_ϕ^0_μξ^q_μ = 0. We define w^k_μ := H^k - _μ^k, h^k_μ := H^k - E_μ^k and w_μ(λ) := H(λ) - _μ(λ) = ∑_k=0^+∞λ^k w^k_μ, h_μ(λ) := H(λ) - E_μ(λ) = ∑_k=0^+∞λ^k h^k_μ. Since Ψ_μ(λ) is en eigenvector of H(λ) → with eigenvalue _μ(λ), w_μ(λ) Ψ_μ(λ) = 0, so identifying the different factors of λ^q of the last equation, for any q ∈ we have that ∑_k=0^q w^q-k_μΨ^k_μ = 0. Since E_μ(λ),Φ_μ(λ) is an eigenmode of H(λ), h_μ(λ) Φ_μ(λ) = 0, and this yields that for any q ∈, ∑_k=0^q h^q-k_μΦ^k_μ = 0. Applying to (<ref>) and substracting (<ref>) yields 0 = ∑_k=0^qh^q-k_μΦ^k_μ - w^q-k_μΨ^k_μ = ∑_k=0^qh^q-k_μξ^k_μ + ^q-k_μ - E^q-k_μΨ^k_μ =∑_k=0^qh^q-k_μξ^k_μ + ^k_μ - E^k_μΨ^q-k_μ. We know that ^k_μ - E^k_μ = 0 and ξ_μ^k = 0 for all k∈{0,…,n}. So using (<ref>) with q=n+1 gives 0 = h^0_μξ_μ^n+1 + ^n+1_μ - E^n+1_μΦ^0_μ. Taking the scalar product with Φ^0_μ gives ^n+1_μ = E^n+1_μ and applying P_ϕ^0_μ^⊥ gives h^0_μξ^n+1_μ = 0, so h^0_μξ^n+1_μ = - h^0_μ^⊥ξ^n+1_μ = - H^0 ^⊥Φ^n+1_μ and applying R_μ(0) yields Γ^0^⊥ξ^n+1_μ = R_μ(0) H^0 ^⊥Φ^n+1_μ. 
Next, applying (<ref>) with q=n+2 gives 0 = h^1_μξ^n+1_μ + h^0_μξ^n+2_μ + ^n+2_μ - E^n+2_μΦ^0_μ. Applying Γ^0 P_ϕ^0_μ^⊥ and using (<ref>) gives 0 = Γ^0 P_ϕ^0_μ^⊥ h^1_μξ^n+1_μ = Γ^0 P_ϕ^0_μ^⊥ h^1_μ^⊥ + Γ^0^⊥ + Γ^0 P_ϕ^0_μ^⊥ + P_ϕ^0_μξ^n+1_μ P_ϕ^0_μξ^q_μ = 0 (<ref>)= Γ^0 P_ϕ^0_μ^⊥ h^1_μΓ^0 P_ϕ^0_μ^⊥ξ^n+1_μ + Γ^0 P_ϕ^0_μ^⊥ h^1_μ1 + R_μ(0) H^0^⊥Φ^n+1_μ. We now apply _μ(0), being such that _μ(0) Γ^0 P_ϕ^0_μ^⊥ h^1_μΓ^0 P_ϕ^0_μ^⊥ = -Γ^0 P_ϕ^0_μ^⊥, which gives Γ^0 P_ϕ^0_μ^⊥ξ^n+1_μ = _μ(0) h^1_μ1 + R_μ(0) H^0^⊥Φ^n+1_μ. Finally, using it, together with (<ref>) and P_ϕ^0_μξ^q_μ = 0 yields ξ^n+1_μ = ^⊥ + Γ^0^⊥ + Γ^0 P_ϕ^0_μ^⊥ + P_ϕ^0_μξ^n+1_μ = 1+_μ(0) h^1_μ1 + R_μ(0) H^0^⊥Φ^n+1_μ = 1+_μ(0) H^1 1 + R_μ(0) H^0^⊥Φ^n+1_μ, where we used that Γ(0) R_μ(0) = 0, and hence _μ(0) R_μ(0)= 0, in the last line. We then transform the last result into a result on the intermediate normalization series. Take n ∈. If for all k ∈{0,…,n}, Φ^k_μ = Ψ_μ^k, then for all k ∈{0,…,n}, ϕ_μ^k = ψ_μ^k, and ϕ_μ^n+1 - ψ_μ^n+1 = Φ_μ^n+1 - Ψ_μ^n+1. As in Lemma <ref>, for Θ∈{Φ, Ψ}, we define Y_Θ^0 := 1, Y_Θ^1 := 0, and for any q ∈, Y_Θ^q := 12 ∑_k=1^q-1Θ_μ^q-k, Θ_μ^k - Y_Θ^q-k Y_Θ^k, and we have ϕ_μ^q = Φ_μ^q - ∑_k=0^q-2 Y^q-k_Φϕ_μ^k, ψ_μ^q = Ψ_μ^q - ∑_k=0^q-2 Y^q-k_Ψψ_μ^k. Since for any k ∈{0,…,n}, Φ_μ^k = Ψ_μ^k, then one can prove by induction that Y^k_Φ= Y^k_Ψ for any k ∈{0,…,n +1}, then ϕ_μ^k = ψ_μ^k for any k ∈{0,…,n} and ϕ_μ^n +1 - ψ_μ^n +1 = Φ_μ^n +1 - Ψ_μ^n +1. §.§ Proof of (<ref>) We are now ready to prove (<ref>). §.§.§ From n=0 to n=ℓ We make a recursive proof on n ∈{0,…,ℓ} of the proposition (n) : ∀ k ∈{0,…,n}, p ∈{0,…,2n }, Φ^k_μ = Ψ_μ^k, ϕ^k_μ = ψ^k_μ and E^p_μ = _μ^p. We have Φ^0_μ = Ψ_μ^0 = ϕ^0_μ = ϕ_μ(0) and E^0_μ = _μ^0 = E_μ(0), proving (0). Let us now take n ∈{0,…,ℓ-1}, assume (n) and we want to show (n+1), that is we want to show that Φ^n+1_μ = Ψ_μ^n+1, ϕ^n+1_μ = ψ_μ^n+1,and that E^p_μ = _μ^p for p∈{2n+1,2n+2}. Since ^⊥Φ^n+1_μ = 0, applying Lemma <ref> yields Φ^n+1_μ = Ψ_μ^n+1 and applying Lemma <ref> yields ϕ^n+1_μ = ψ_μ^n+1. Then we have ϕ_μ(λ) - ψ_μ(λ) = ∑_k=n +2 ^+∞λ^k ϕ_μ^k - ψ_μ^k. We use Lemma <ref>, i.e. that ϕ_μ^ke+ ψ_μ^ke≤ ab^k for any k ∈, some a,b >0. We have ϕ_μ(λ) - ψ_μ(λ)e ≤∑_k=n+2^+∞λ^k ϕ_μ^ke + ψ_μ^ke ≤2a1 - λbλ b^n+2≤ c λ b^n+2, for some constant c > 0 independent of λ and n. Applying it with (<ref>) gives E_μ(λ) - _μ(λ)≤ c λ b^2n+4 where c is independent of λ and n. Letting λ→ 0 gives E^p_μ = _μ^p for p∈{2n+1,2n+2} as expected, and this concludes the induction, showing (n) for all n ∈{0,…,ℓ}. §.§.§ n=ℓ and the conclusion By Lemma <ref>, we have ϕ_μ^ℓ+1 - ψ_μ^ℓ+1 = Φ_μ^ℓ+1 - Ψ_μ^ℓ+1. Applying ^⊥ yields ^⊥ϕ_μ^n +1 = ^⊥ϕ_μ^n +1 and thus with (<ref>), ϕ_μ^ℓ+1 - ψ_μ^ℓ+1 = 1 + _μ(0) H^11 + R_μ(0) H^0^⊥ϕ_μ^ℓ+1. Returning to the series, ϕ_μ(λ) - ψ_μ(λ) = λ^ℓ +1ϕ^ℓ+1_μ - ψ^ℓ+1_μ + ∑_n=ℓ +2^+∞λ^n ϕ^n_μ - ψ^n_μ. We obtain (<ref>) by using the same reasoning as in Section <ref>, and we need Lemma <ref>. § PROOF OF LEMMA <REF> We have ϕ_μ(λ) - _μ(λ) = ϕ_μ(λ) - ∑_n=0^ℓλ^n ϕ_μ^n + ∑_n=0^ℓλ^n ϕ_μ^n - _μ(λ) = ∑_n=ℓ +1^+∞λ^n ϕ_μ^n + 1 - ∑_n=0^ℓλ^n ϕ_μ^n^-1∑_n=0^ℓλ^n ϕ_μ^n. Then we write 1 = ∑_n=0^+∞λ^n ϕ_μ^n^-1 and use that for any u,v >0, u^-1 - v^-1≤u - v u^-1 v^-1 so ϕ_μ(λ) - _μ(λ)e,δ≤∑_n=ℓ +1^+∞λ^n ϕ_μ^ne,δ + ∑_n=0^+∞λ^n ϕ_μ^n - ∑_n=0^ℓλ^n ϕ_μ^n∑_n=0^ℓλ^n ϕ_μ^n^-1∑_n=0^ℓλ^n ϕ_μ^ne,δ ≤∑_n=ℓ +1^+∞λ^n ϕ_μ^ne,δ1 + A^-1^δ∑_n=0^ℓλ^n ϕ_μ^n^-1∑_n=0^ℓλ^n ϕ_μ^ne,δ. Finally, ∑_n=ℓ +1^+∞λ^n ϕ_μ^ne,δ≤∑_n=ℓ +1^+∞λ^n ϕ_μ^ne,δ and we apply Lemma <ref>. The bound en eigenvalues can be deduced from the previous one. 
§.§ Acknowledgement We warmly thank Long Meng for a useful discussion. § INTERMEDIATE NORMALIZATION In this section, we show several results about intermediate normalization, which is aimed to be applied to Rayleigh-Schrödinger series eigenvectors in another part of this document, for both degenerate and non-degenerate cases. §.§ Unit normalization We consider a Hilbert space with scalar product ·,· and norm ·, and a map ϕ : → depending on one real parameter λ. We consider that ϕ(λ) = 1 for any λ∈, which is called unit normalization. We assume that ϕ is analytic at 0 so we can expand it ϕ^n := 1n!^̣nλ̣^nϕ(λ)_ 1mu height 2ex2mu λ = 0, ϕ(λ) = ∑_n=0^+∞λ^n ϕ^n. §.§ Definition of intermediate normalization Let us define Φ(λ) := ϕ(λ)ϕ^0, ϕ(λ), Φ^n := 1n!^̣nλ̣^nΦ(λ)_ 1mu height 2ex2mu λ = 0. We then define Z(λ) := 1ϕ^0, ϕ(λ). Let us denote by P_ϕ^0 the projector onto ϕ^0, so P_ϕ^0ϕ(λ) = ϕ^0,ϕ(λ)ϕ^0. Then we have Φ(λ) = Z(λ) P_ϕ^0ϕ(λ) + 1-P_ϕ^0ϕ(λ) = ϕ^0 + 1-P_ϕ^0 Z(λ)ϕ(λ), Thus Φ^0 = ϕ^0 = Φ(0) = ϕ(0), and for any n ∈, Φ^n ∈1-P_ϕ^0 = {Φ^0}^⊥. We conclude that Φ^n ⊥Φ^0, ∀ n ≥ 1. The normalization of Φ(λ) is called the intermediate normalization. It is not a unit vector for all λ≠ 0 in general, but has the convenient property (<ref>). For instance in the case of families of eigenvectors, it is computable as recalled in Section <ref>. For this reason, this is usually the one that is computer first in eigenvalue problems depending on one parameter. §.§ From standard normalization to unit normalization Once Φ^n is computed, or once one has proved properties on it, one can need to work with ϕ^n again. One way of going from intermediate normalization to unit normalization is to fix the phasis gauge of ϕ(λ) such that ϕ^0, ϕ(λ)∈_+. Then Φ(λ)Φ(λ) = Φ(λ)ϕ^0, ϕ(λ)^-1ϕ(λ) so Φ(λ)Φ(λ) and ϕ(λ) have the same phasis, and since they both have unit normalization they are equal, ϕ(λ) = Φ(λ)Φ(λ). The next result shows how to obtain the series ϕ^n from the Φ^n's. We define Y^0 := X^0 := 1, Y^1 := X^1 := 0 and, recursively, for any n ≥ 2, Y^n := 12 ∑_k=1^n-1Φ^n-k, Φ^k - Y^n-k Y^k, X^n := - ∑_k=0^n-2 X^k Y^n-k. Then ϕ^0 = Φ^0, ϕ^1 = Φ^1 and for any n ≥ 2, ϕ^n = Φ^n + ∑_k=0^n-2 X^n-kΦ^k. We remark that ϕ^2 = Φ^2 - 12 ϕ^1^2 ϕ^0. We define y(λ) := Φ(λ) and consider its Taylor series y^n := 1n!^̣nλ̣^n y(λ)_ 1mu height 2ex2mu λ = 0, the relation y(λ)^2 = Φ(λ)^2 gives, for any n ∈, ∑_k=0^n y^n-k y^k = ∑_k=0^nΦ^n-k, Φ^k hence, using y^0 = ϕ^0^-2 = 1 and (<ref>), we get a recursive way of obtaining the y^n's, which is y^1 = 0 and for any n ≥ 2, via y^n = 12 ∑_k=1^n-1Φ^n-k, Φ^k - y^n-k y^k, so y^n = Y^n for any n ∈. We then define x(λ) := 1/y(λ), and its Taylor series x^n := 1n!^̣nλ̣^n x(λ)_ 1mu height 2ex2mu λ = 0. The relation x(λ) y(λ) = 1 gives ∑_k=0^n x^k y^n-k = δ_n for any n ∈, yielding x^0 = 1, x^1 = 0 and for any n ≥ 2, x^n = - ∑_k=0^n-2 x^k y^n-k, so x^n = X^n for any n ∈. Finally, from (<ref>) we have ϕ(λ) = Φ(λ) x(λ) hence we deduce (<ref>). § ERROR BOUNDS BETWEEN EIGENVECTORS, DENSITY MATRICES AND EIGENVALUES Eigenvectors are controled by eigen-density matrices, so it is equivalent to obtain bounds using eigenvectors or bounds using density matrices. This is the object of this appendix, and it enables to provide precisions on how to derive the bounds (<ref>) and (<ref>). For any set of eigenvalues := _α_α =1^ν∈^ν, we define the norm ^2 := ∑_μ=1^ν_μ^2. The following Lemma is well-known, see <cit.> and <cit.>. 
Take two self-adjoint operators A and H acting on a Hilbert space , assume that there exists a ∈ such that 0 is in the resolvent set of H + a. Take an orthogonal projection , consider := ϕ_α_α =1^ν∈^ν and := ψ_α_α =1^ν∈^ν such that E_α, ϕ_α_α =1^ν are eigenmodes of H and _α, ψ_α_α =1^ν are eigenmodes of H. Define the density matrices Γ :=, Λ :=, define U^, as one of the optimizer(s) of the problem U ∈_ν - U and define ^ := U^,. Then we have A - ^≤A A^-1 ×1 + 14 H+a^- 12^2Γ - Λ2^2 1 ≤α≤νE_α + a^ 12A Γ - Λ2, and ∑_α=1^νE_α - _α≤A^-1 H A^-1+ A^-1 ^2 1 ≤α≤νE_αA - ^^2. We provide a proof in our context for the sake of completeness. It closely follows <cit.> and <cit.>. First, 2^- 12Γ - Λ2≤ - ^≤Γ - Λ2 is obtained from <cit.> and <cit.>. In <cit.> and <cit.> it is proved for orthogonal matrices, i.e. in the real case, but the proof extends naturally to the complex case. Defining the ν×ν matrix M by M_α,μ := ψ^_α, ϕ_μ for any α, μ∈{1,…,ν}, by <cit.>, M is hermitian (again, we apply the results to the complex case), so ϕ_α , ϕ_μ - ψ^_μ - 12 ϕ_α - ψ^_α , ϕ_μ - ψ^_μ = 12 ϕ_α , ϕ_μ - ψ^_μ + 12 ψ^_α , ϕ_μ - ψ^_μ = 12 ψ^_α , ϕ_μ - ϕ_α , ψ^_μ = 0 and for any α, μ∈{1,…,ν} we have ϕ_α , ϕ_μ - ψ^_μ = 12 ϕ_α - ψ^_α , ϕ_μ - ψ^_μ. Then H + a^ 12Λ2^2 =∑_α=1^νψ_α , H + aψ_α = ∑_α=1^ν_α + a H + a^ 12Γ2^2 = ∑_α=1^νE_α + a, and H + a^ 12Λ , H + a^ 12Γ = ΛH + aΓ= ∑_α=1^νΛH + aϕ_α = ∑_α=1^νE_α + aϕ_α, Λϕ_αΛ = Λ^2= ∑_α=1^νE_α + aΛϕ_α^2 = ∑_α=1^νE_α + a1 - Λ^⊥ϕ_α^2. We can hence compute Γ - Λ2^2 = H + a^ 12Γ2^2+ H + a^ 12Λ2^2 - 2 H + a^ 12Λ , H + a^ 12Γ =∑_α=1^ν_α + a - E_α + a + 2E_α +aΛ^⊥ϕ_α^2. Then, - ^^2 =∑_α=1^νϕ_α - ψ^_α^2 = ∑_α=1^νϕ_α^2 + ψ^_α^2 - 2H + aϕ_α , ψ^_α = ∑_α=1^ν_α + a + E_α + a - 2 E_α +aϕ_α , ψ^_α = ∑_α=1^ν_α + a - E_α + a + E_α +aϕ_α - ψ^_α^2. where we used ϕ_α , ψ^_α = 1 - 12 ϕ_α - ψ^_α^2 in the last equality, which comes from (<ref>). We define λ_ := 1 ≤α≤νE_α + a and following <cit.>, we have - ^^2 - Γ - Λ2^2 = ∑_α=1^νE_α +aϕ_α - ψ^_α^2 - 2 Λ^⊥ϕ_α - ψ^_α^2 ≤∑_α=1^νE_α +aϕ_α - ψ^_α^2 - Λ^⊥ϕ_α - ψ^_α^2 = ∑_α=1^νE_α +aΛϕ_α - ψ^_α^2 = ∑_1 ≤α, μ≤νE_α + aϕ_α - ψ^_α , ϕ_μ^2 (<ref>)≤ 14λ_∑_1 ≤α, μ≤νϕ_α - ψ^_α , ϕ_μ - ψ^_μ^2 ≤ 14 λ_ - ^2^4 (<ref>)≤ 14 λ_Γ - Λ2^4 ≤ 14 λ_H+a^- 12^2 Γ - Λ2^2 Γ - Λ2^2. Then - ^≤1 + 14 λ_H+a^- 12^2 Γ - Λ2^2^ 12 ×Γ - Λ2. Next, A - ^≤A - ^ ≤A 1 + 14 λ_H+a^- 12^2 Γ - Λ2^2^ 12 ×Γ - Λ2, and we deduce (<ref>) by using (<ref>). Let us now show (<ref>). For any U ∈_ν, we have ∑_α=1^νU ψ_α , H U ψ_α = ∑_1 ≤α, μ, β≤νU_αμ U_αβψ_μ, H ψ_β = ∑_1 ≤α, μ, β≤νU_αμ U_αβ_βψ_μ, ψ_β = ∑_1 ≤α, μ, β≤νU_αμ U_αβ_βδ_μ-β = ∑_1 ≤α, μ≤νU_αμ U_αμ_μ = ∑_μ=1^ν_μ∑_α=1^ν U^*_μα U_αμ = ∑_μ=1^ν_μU^* U_μμ = ∑_μ=1^ν_μ. Hence similarly as in (<ref>), ∑_α=1^νϕ_α - ψ^_α , E_α - Hϕ_α - ψ^_α = ∑_α=1^νψ^_α , E_α - Hψ^_α =∑_α=1^ν E_α - U^,ψ_α, H U^,ψ_α =∑_α=1^ν E_α - _α. Thus ∑_α=1^νE_α - _α≤∑_α=1^νA ϕ_α - ψ^_α , A^-1E_α - HA^-1 A ϕ_α - ψ^_α ≤1 ≤α≤νA^-1E_α - HA^-1∑_α=1^νϕ_α - ψ^_αe^2 ≤c_H + c_A^2 1 ≤α≤νE_αA - ^^2. siam
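A minimal numerical sketch of the alignment used in this lemma: the optimal rotation is obtained from the singular value decomposition of the overlap matrix (orthogonal Procrustes), and the resulting frame error is compared with the Hilbert–Schmidt distance between the density matrices, as in the two-sided comparison stated at the beginning of the proof. The frames below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, nu = 12, 3

# Two nearby orthonormal frames (columns) and their density matrices (illustrative assumption).
Phi, _ = np.linalg.qr(rng.standard_normal((n, nu)))
Psi, _ = np.linalg.qr(Phi + 0.1 * rng.standard_normal((n, nu)))
Gamma, Lam = Phi @ Phi.T, Psi @ Psi.T

# Optimal rotation U aligning Psi to Phi (orthogonal Procrustes, via the SVD of the overlap).
M = Psi.T @ Phi
W, _, Vt = np.linalg.svd(M)
U = W @ Vt
err_frames = np.linalg.norm(Phi - Psi @ U)      # min over rotations of ||Phi - Psi U||_F
err_dm = np.linalg.norm(Gamma - Lam)            # ||Gamma - Lambda||_2 (Hilbert-Schmidt norm)

# Two-sided comparison: err_dm / sqrt(2) <= err_frames <= err_dm.
print(err_dm / np.sqrt(2) <= err_frames <= err_dm)
```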
Extrinsic Fluctuations in the p53 Cycle
Manuel Eduardo Hernández-García, Mariana Gómez-Schiavon, Jorge Velázquez-Castro
arXiv:2408.12107 [q-bio.MN]
On the Credibility of Backdoor Attacks Against Object Detectors in the Physical World Damith C. Ranasinghe August 26, 2024 ===================================================================================== § ABSTRACT Fluctuations are inherent to biological systems, arising from the stochastic nature of molecular interactions, and influence various aspects of system behavior, stability, and robustness. These fluctuations can be categorized as intrinsic, stemming from the system's inherent structure and dynamics, and extrinsic, arising from external factors, such as temperature variations. Understanding the interplay between these fluctuations is crucial for obtaining a comprehensive understanding of biological phenomena. However, studying these effects poses significant computational challenges. In this study, we used an underexplored methodology to analyze the effect of extrinsic fluctuations in stochastic systems using ordinary differential equations instead of solving the Master Equation with stochastic parameters. By incorporating temperature fluctuations into reaction rates, we explored the impact of extrinsic factors on system dynamics. We constructed a master equation and calculated the equations for the dynamics of the first two moments, offering computational efficiency compared with directly solving the chemical master equation. We applied this approach to analyze a biological oscillator, focusing on the p53 model and its response to temperature-induced extrinsic fluctuations. Our findings underscore the impact of extrinsic fluctuations on the nature of oscillations in biological systems, with alterations in oscillatory behavior depending on the characteristics of extrinsic fluctuations. We observed an increased oscillation amplitude and frequency of the p53 concentration cycle. This study provides valuable insights into the effects of extrinsic fluctuations on biological oscillations and highlights the importance of considering them in more complex systems to prevent unwanted scenarios related to health issues. Keywords: Biological oscillator, Extrinsic fluctuations, Intrinsic fluctuations, Deterministic approximation, Stochastic processes. § INTRODUCTION Fluctuations are inherent to biological systems, arising from the stochastic nature of molecular interactions, and influence various aspects of system behavior, stability, and robustness. The fluctuations play an even more pivotal role, necessitating thorough consideration in comprehensive analyses <cit.>. These fluctuations can be broadly categorized as intrinsic and extrinsic, with each contributing distinctively to system dynamics. Intrinsic fluctuations stem from the inherent structure and dynamics of the system, reflecting the probabilistic nature of molecular interactions within biological networks <cit.>. These fluctuations manifest as variabilities in molecular concentrations influencing the emergence of spontaneous oscillations, bistability, and other behaviors observed in biological systems <cit.>. In contrast, extrinsic fluctuations arise from external factors, such as variations in temperature, pH, or nutrient availability, which can significantly affect system dynamics <cit.>. Although intrinsic fluctuations have received considerable attention in previous research, the importance of extrinsic fluctuations has garnered increasing recognition. 
Understanding the interplay between intrinsic and extrinsic fluctuations is crucial for obtaining a comprehensive understanding of biological phenomena, because both types of fluctuations can coalesce to shape system behavior in experimental setups and natural environments <cit.>. Despite the significance of the fluctuations, studying their effects poses significant computational challenges. Existing algorithms and analytical methods may struggle to capture the complex interactions between intrinsic and extrinsic fluctuations accurately. Moreover, computationally intensive approaches are often required to discern the effects of extrinsic fluctuations, further complicating analyses <cit.>. Therefore, alternative analytical approaches are required to address these challenges. We adopted an approach based on the calculation of the master equation with discrete quantities, considering both the molecules involved in the reactions and the parameters that influence biochemical reaction rates. This implies that each molecule and parameter is treated as stochastic variables in the proposed model, extending prior methodologies to encompass a broader range of scenarios <cit.>. By approximating the master equation, we derive a set of differential equations that model mesoscopic system dynamics <cit.>; in particular, in scenarios involving small fluctuations and/or first-order reactions, this approximation becomes exact. This approach offers computational efficiency compared with traditional stochastic simulations, such as the Gillespie algorithm <cit.>. The generation of biological oscillators, such as circadian rhythms, is a complex process involving the expression and cyclic regulation of specific genes, as well as the feedback mechanism between proteins <cit.>. These mechanisms govern diverse gene regulatory networks and other biological processes <cit.>. Here, we focused on assessing the robustness of biological oscillators under extrinsic fluctuations with specific attention to the p53 system. As a critical regulator of cellular stress responses and the maintenance of genomic integrity, the p53 system's oscillatory behavior enhances response precision <cit.>. Examining the influence of temperature fluctuations, which are known to significantly affect biochemical reactions <cit.>, we must consider that such fluctuations are present in the p53 system. Therefore, it is essential to assess how the system functions in the presence of both intrinsic and extrinsic fluctuations. In this work, we aim to determine the sensitivity of the system dynamics to fluctuations. With this, we aim to gain a deeper understanding of the effects of fluctuations on the biological oscillations that occur in living organisms. The remainder of this paper is organized as follows. Section <ref> outlines the derivation of the master equation considering extrinsic fluctuations. In Section <ref>, we employ a deterministic approximation to derive differential equations that describe the system dynamics in terms of concentrations and their covariances. Section <ref> focuses on modeling the p53 system and evaluating the effects of temperature fluctuations on its dynamics. Finally, in Section <ref> the results and conclusions are presented. § MASTER CHEMICAL EQUATION WITH EXTRINSIC COMPONENTS The master chemical equation of a biochemical system is responsible for modeling processes involving interactions between various biochemical species and their fluctuations. 
However, in some cases, these interactions are influenced by the surrounding environment (e.g., temperature and pressure) and their fluctuations, which are referred to as extrinsic fluctuations. Initially, it is essential to consider the chemical equation for a system that remains uninfluenced by external elements <cit.>. This equation, referred to as the master chemical equation, describes chemical reactions that occur within a system. To achieve this, we assume the presence of N species, 𝒮_l (l ∈ {1,2,...,N}), and M reactions ℛ_j (j ∈ {1,2,...,M}) in which the species are transformed as follows, ℛ_j : ∑_l=1^Nα_jl𝒮_l []k_j→∑_l=1^Nβ_jl𝒮_l. Here, coefficients α_jl and β_jl are positive integers and k_j are the reaction constants (or parameters). We assume that the reactions follow mass action kinetics. These equations allow for the determination of the stoichiometric matrix of the system Γ_lj= β_jl - α_jl. Through the collisions (or interactions) of the different elements they are transformed, so the propensity rates are given as follows t_j(𝐒) = k_j∏_l^N S_l !/Ω ^α_jl(S_l- α_jl )!, (the index j is the same as that of reaction ℛ_j), which are the transition probabilities between different states of the system. S_l corresponds to the number of molecules of the chemical species 𝒮_l and 𝐒= (S_1,S_2,...,S_N), Ω is related to the size of the system and has units of molecules between moles. Based on this background, the following equation can be derived, d P(𝐒,t)/dt = Ω∑_j^M( t_j(𝐒-Γ_j) P(𝐒-Γ_j,t) - t_j(𝐒) P(𝐒,t) ), this is the chemical master equation, which describes the evolution of the probability distribution of states in a system. We can make the dependence of P(𝐒,t) on the reaction constants 𝐊=(K_1,K_2,...,K_M) explicit by writing (<ref>) as d P(𝐒,t|𝐊)/dt = Ω∑_j^M ( T_j(𝐒-Γ_j,K_j) P(𝐒-Γ_j,t|𝐊) - T_j(𝐒,K_j) P(𝐒,t|𝐊) ), where T_j(𝐒, K_j) = K_j∏_l^N S_l !/Ω ^α_jl(S_l- α_jl )!. Furthermore, reaction constants K_j can be time-dependent, K_j=K_j(t), and follow stochastic dynamics. We considered these fluctuations by following a procedure similar to that in <cit.>. §.§ Fluctuations in Reaction Constants Fluctuations in reaction rates can be driven by extrinsic factors such as temperature or density fluctuations because the system is not isolated, and thus in thermodynamic contact with the environment. Let's consider M time-dependent reaction rates, K_j(t) = k_j(t) + η_k_j(t), (j ∈ (1,2,...,M)). These are expressed as the mean dynamics k_j and η_k_j is a stochastic term representing the fluctuations around this mean (for more details, see Appendix <ref>). Thus, we have k_j(t) = ⟨K_j(t)|,⟩ C^2_l,j =⟨(K_l(t)- k_l(t)) (K_j(t)- k_j(t) )|=⟩⟨η_k_lη_k_j|.⟩ Additionally, to describe the dynamics of the reaction constants, we use the following Fokker-Planck equation ∂ p(𝐊, t)/∂ t = -∑_j^M ∂/∂ K_jg_j(𝐊,t)p(𝐊, t) + 1/2∑_j^M ∂^2/∂ K_j K_l G_lj(𝐊,t)p(𝐊, t), where p(𝐊, t) describes the probability distribution of 𝐊, g_j are the drift coefficients and G_ji are the diffusion coefficients. As 𝐊 is now a stochastic variable, we can now define the joint probability density P(𝐒,𝐊,t)=P(𝐒,t|𝐊)p(𝐊,t), then ∂ P(𝐒,𝐊,t)/∂ t = Ω∑_j^M ( T_j(𝐒-Γ_j, K_j(t)) P(𝐒-Γ_j,𝐊,t) - T_j(𝐒, K_j(t)) P(𝐒,𝐊,t) ) -∑_j^M ∂/∂ K_jg_j(𝐊,t)P(𝐒,𝐊, t) + 1/2∑_j^M ∂^2/∂ K_j K_l G_lj(𝐊,t)P(𝐒,𝐊, t) we obtain a master chemical equation that includes the Fokker-Planck equation, which is equivalent to that derived in <cit.>. 
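As a concrete illustration of the transition rates defined above, the following sketch evaluates T_j(S, K_j) = K_j ∏_l S_l!/(Ω^α_jl (S_l - α_jl)!) for a small reaction chain. The network, the reaction constants and the copy numbers are placeholders chosen for illustration only, not a system from this work; such propensities would feed either a stochastic simulation (e.g. Gillespie) or the master equation above.

import numpy as np

def propensities(S, K, alpha, Omega):
    # T_j(S, K_j) = K_j * prod_l S_l! / (Omega^alpha_jl * (S_l - alpha_jl)!)
    M, N = alpha.shape
    t = np.zeros(M)
    for j in range(M):
        rate = K[j]
        for l in range(N):
            a = alpha[j, l]
            if S[l] < a:            # not enough molecules left: the reaction cannot fire
                rate = 0.0
                break
            for i in range(a):      # falling factorial S_l (S_l - 1) ... (S_l - a + 1), scaled by 1/Omega per factor
                rate *= (S[l] - i) / Omega
        t[j] = rate
    return t

# illustrative two-species chain 0 -> S1, S1 -> S2, S2 -> 0 (placeholder numbers)
alpha = np.array([[0, 0], [1, 0], [0, 1]])   # reactant stoichiometry alpha_jl
K     = np.array([5.0, 0.8, 0.3])            # reaction constants K_j
S     = np.array([10, 4])                    # current copy numbers S_l
print(propensities(S, K, alpha, Omega=1.0))  # -> [5.  8.  1.2]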
Equation (<ref>) models the distribution of a system's states under the influence of external stimuli, allowing for fluctuations (extrinsic fluctuations). This approach enhances the precision of biological system studies by aligning them more closely with the fluctuating nature of the systems. § APPROXIMATION WITH EXTRINSIC FLUCTUATIONS A system with intrinsic and extrinsic fluctuations can be analyzed directly using the master equation (<ref>), for this the Gillespie algorithm is used <cit.> and/or solving the Fokker-Planck equation. However, it is more practical and computationally efficient to use an approximation than to use the master equation directly. For example, the approximation presented in <cit.> can be used. This approximation transforms the master equation into a set of differential equations of moments, specifically the means and covariances, and assumes that higher-order moments are zero because they do not significantly contribute to system dynamics <cit.>. However, we propose another interpretation in which the fluctuations of the variables are sufficiently small; if we have a function of the variables of the system, a second-order expansion around their mean values is sufficient to capture the effects of the fluctuations. In this case, averaging was performed for a function of the variables S and K; therefore, we used 𝐗={𝐒, 𝐊}, and X̂_j=⟨X_j|$⟩ (X_jis thejelement of the vector𝐗). To obtain the differential equations, we will use the following approach, ⟨f(𝐗)|_⟩𝐗≈ ⟨ f(⟨𝐗|)⟩ + ∑_j_1 (⟨X_j_1|-⟩X_j_1)∂ f(𝐗̂)/∂X̂_j_1 + ∑_j_1∑_j_21/2 (⟨X_j_1|-⟩X_j_1)(⟨X_j_2|-⟩X_j_2) ∂^2 f(𝐗̂)/∂X̂_j_1∂X̂_j_2⟩_𝐗 = f(𝐗̂) + ∑_j_1∑_j_2C(X_j_1,X_j_2)/2∂^2 f(𝐗̂)/∂X̂_j_1∂X̂_j_2. Where theX̂_i's are related with deterministic quantities. Note that the average is taken for all system variables, whereas𝐂_j_1,j_2denotes the covariance between the different variables. Note that iff(𝐗)is a second-order polynomial, then the approximation becomes exact. This expansion around the mean value is of great usefulness when applied in conjunction with the master equation. For example, we can use (<ref>) to obtain an expression for the evolution ofs_l = ⟨S_l|⟩/Ωandk_j= ⟨K_j|$⟩ and then we use the expansion (<ref>) to obtain the following equations, d s_l/dt= ∑_j^MΓ_lj( R_j^D(𝐬) + ∑_l_1^N∑_l_2^Nσ^2_l_1,l_2/2∂^2 R_j^D(𝐬)/∂s_l_1∂s_l_2 + ∑_l_1^NC^1_l_1,j∂ R_j^D(𝐬)/∂s_l_1), d k_j/dt= g_j(𝐤,t) + ∑_j_1^M∑_j_2^MC^2_j_1,j_2/2∂^2 g_j(𝐤,t)/∂k_j_1∂k_j_2 , where l= {1,2,..., N}, j= {1,2,...,M}, and R_j^D(𝐬)= k_j ∏_l^N s_l^α_jl is the deterministic reaction rate assuming mass action, and the covariances between variables σ^2_l_1,l_2= ⟨(S_l_1-s_l_1)(S_l_2-s_l_2)|⟩/Ω^2, C^2_j_1,j_2= ⟨(K_j_1-k_j_1)(K_j_2-k_j_2)|$⟩,C^1_l_1,j= ⟨(S_l_1-s_l_1)(K_j-k_j)|⟩/Ω,𝐬=(s_1,s_2,...,s_N),𝐤=(k_1,k_2,...,k_M). In contrast with the case without extrinsic fluctuations or constant reaction ratesk_j<cit.>, the evolution of the mean concentrationss_j(in what follows we will refer only how concentration) is influenced by the correlations between theK_j's and the concentrations themselves as can be seen from the underlined terms of equation (<ref>). With a similar procedure, we can now calculate a set of equations for the evolution of the covariances between variabless_lof the system, d σ^2_l_1,l_2/dt = ∑_j^M( Γ_l_1 jΓ_l_2j/Ω( R_j^D(𝐬) + ∑_l_3^N∑_l_4^Nσ^2_l_3,l_4/2∂^2 R_j^D(𝐬)/∂s_1_3∂s_l_4 + ∑_l_3^NC^1_l_3,j/k_j∂ R_j^D(𝐬)/∂s_l_3) . + ∑_l_3^N( σ^2_l_1,l_3Γ_l_2 j∂ R_j^D(𝐬)/∂s_l_3 +σ^2_l_3,l_2Γ_l_1 j∂ R_j^D(𝐬)/∂s_l_3) + . 
Γ_l_1 j(C^1_l_2, j/k_j R_j^D(𝐬) ) + Γ_l_2 j(C^1_l_1, j/k_j R_j^D(𝐬) )) , wherel_1, l_2={1,2,..., N}. To highlight the effect of extrinsic fluctuations we have underlined the terms that are contributing to the dynamics of the correlation due to the fluctuating environment described by the fluctuation rate constants. This kind of effect in the dynamics of the correlations was already observed in <cit.>. To solve the previous differential equations we need to add equations that describe the evolution ofC^1_l,jandC^2_i,j, these are obtained in a similar way that Equation (<ref>) d C^1_l,j/d t = ∑_i^MΓ_li( C^2_i,j/k_i R_i^D(𝐬) + ∑_l_1^N C^1_l_1,j∂ R_i^D(𝐬)/∂s_l_1) + ∑_j_1^M C^1_l,j_1∂ g_j(𝐤,t)/∂ k_j_1 , d C^2_i,j/dt= C^2_i,j( ∂ g_i(𝐤,t) /∂ k_i + ∂ g_j(𝐤,t) /∂ k_j)+ ( G_ij(𝐤,t) + ∑_j_1^M∑_j_2^MC^2_j_1,j_2/2∂^2 G_ij(𝐤, t) /∂ k_j_1 k_j_2) . Wherei={1,2,...,M}. Solving the set of differential equations (<ref>)–(<ref>) allows us to describe the dynamics of the mean and variance of species concentration in the presence of both intrinsic and extrinsic fluctuations. Using equations (<ref>)–(<ref>) for a particular case, as presented in <cit.>, we can reproduce their results (see Appendix <ref>) while also providing new insights and extending their work by applying it to another type of extrinsic fluctuations. It is worth noting that, in contrast to intrinsic fluctuations, for which the magnitude decreases with system sizeΩ, the magnitude of extrinsic fluctuations depends only on external factors; thus, for large systems, when intrinsic fluctuations become negligible, the stochastic behavior of the species concentrationss_jis solely due to extrinsic fluctuations. In the following section, we use differential equations (<ref>)–(<ref>) to analyze a particular system, representing a model of p53. § MODEL OF P53 The protein p53, which is crucial for cancer prevention, plays a vital role in maintaining genomic integrity and regulating cellular fate under normal conditions. However, its dysfunction, which is common in many cancers, can have serious consequences. When p53 fails to detect and repair DNA damage, it promotes the accumulation of mutations and tumor progression by allowing the survival of damaged cells <cit.>. Oscillations in p53 activity are relevant because they potentially fine-tune cellular processes such as the cell cycle, suggesting a significant function in coordinating the cellular response to stress and DNA damage. This enables a more adaptive and efficient response to changing conditions <cit.>. Characterizing these oscillations is crucial for understanding cancer suppression mechanisms and developing new therapies. Therefore, we analyzed the effect of body temperature fluctuations on the period and amplitude of these oscillations. We adopted a model presented in <cit.> which describes the dynamics of the biochemical species concentrations. This particular model is a simplified representation of p53 dynamics, as it does not consider the presence of other reactions <cit.>. We chose this model because its dynamic is not affected byΩ, a factor that influences some other models (see Appendix <ref>). To induce oscillations in the system, we consider the parameters and initial conditions provided in <cit.>. The model, depicted in Figure <ref>, which contains three types of molecules: p53, Mdm2-precursor, and Mdm2. p53 promotes the synthesis of Mdm2-precursor, which in turn promotes the synthesis of Mdm2. However, Mdm2 promotes the degradation of p53. 
This interaction is modeled through a Hill-type function, Mdm2 binds to p53, leading to its degradation. Additionally, both p53 and Mdm2 are subject to degradation. The system is described by the following reactions, For this system, we have the next stoichiometric matrix Γ_ij = [ 1 -1 -1 0 0 0; 0 0 0 1 -1 0; 0 0 0 0 1 -1 ]. Here,x_1,x_2, andx_3represent the concentrations of p53, Mdm2-precursor, and Mdm2, respectively.k_3^*=k_3 ( 1/A_1 + x_1 ),A_1is the p53 threshold for degradation by Mdm2.k_1is the p53 synthesis rate,k_2is the p53 degradation rate,k_3is the saturated p53 degradation rate,k_4is the p53-dependent Mdm2 production rate,k_5is the Mdm2 maturation rate andk_6is the Mdm2 degradation rate. The reaction rates of the system are R_1^D = k_1 , R_2^D= k_2 x_1, R_3^D = k_3 x_3 x_1 ( 1/A_1 + x_1) , R_4^D= k_4 x_1, R_5^D = k_5 x_2 , R_6^D= k_6 x_3. A Hill-type function appears in the reaction rates (<ref>) associated with the degradation of p53, when such functions appear, the result in <cit.> is used to aid in modeling stochastic systems, because in this work is derived a stochastic Hill function, that considers and incorporates the effects of the fluctuations directly in the stochastic Hill function. §.§ Case I: Stationary extrinsic fluctuations In this case, it is assumed that thek_j's and the magnitude of their fluctuations do not change over time. In other words, we have stationary extrinsic fluctuations. Using the Arrhenius equation <cit.> it is possible to relateC^2_i,jwith the variance of the temperatureσ^2_Tas follows C^2_i,j= (k_i/10)(k_j/10) σ^2_T , fori,j={1,2,3,4,5,6} (see Appendix <ref> for further details). We solved the system numerically, the results are shown in Figure <ref>. The left panel shows the temporal evolution of p53 concentration when only intrinsic fluctuations are considered and when intrinsic and stationary extrinsic fluctuations are taken into account. The right panel shows the temporal evolution of the p53 variance in both cases. In <cit.> a similar model was analyzed, but the oscillations dampened and the variance increased indefinitely; however, in Figure <ref>, we can observe that in both cases (only intrinsic fluctuations and intrinsic and extrinsic fluctuations), the oscillations reach a constant amplitude because a stochastic Hill function is used. Another observation is that the introduction of extrinsic fluctuations causes a change in the period of the concentration of p53 and amplifies oscillations in the variance of p53, as shown in Fig. <ref>. We now quantify the effects of extrinsic fluctuations in the system, focusing on the amplitude and period of the oscillations, see Figure <ref>. This system has only one steady state in which the concentrations are positive; therefore, we used the same initial conditions as those used in Figure <ref> without loss of generality for this analysis. We focus on calculating the average amplitudes and periods for different values ofσ^2_Tat interval of 100-200 h. The increase inσ^2_Tis accompanied by a corresponding increase in the average amplitude of p53 oscillations. In particular, whenσ^2_Tincreases from zero to one, the average amplitude of p53 concentration experiences a modest increase of 0.22%. Moreover, the average amplitude of the variance of p53 exhibited a notable increase withσ^2_T; specifically, we observed an increase of 490% when moving fromσ^2_T=0toσ^2_T=1. Similarly, the average period of p53 concentration tended to increase withσ^2_T. 
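To make the structure of the Case I setup explicit, the sketch below integrates only the deterministic part of the mean equations, ds_l/dt = Σ_j Γ_lj R_j^D(s), for the three-species model, and assembles the stationary extrinsic covariances C²_{i,j} = (k_i/10)(k_j/10)σ²_T. The rate constants, the threshold A_1 and the initial conditions are placeholders rather than the values of Table <ref>, and the covariance-correction terms of Eqs. (<ref>)-(<ref>) are omitted, so this is a minimal sketch of the model structure, not the computation behind the figures.

import numpy as np
from scipy.integrate import solve_ivp

k  = np.array([0.5, 0.1, 2.0, 0.2, 0.5, 0.3])  # k1..k6, placeholder values (see the appendix tables for the ones actually used)
A1 = 0.1                                        # p53 threshold for degradation by Mdm2 (placeholder)
sigma2_T = 1.0                                  # temperature variance

# stationary extrinsic covariances between the rate constants: C2_ij = (k_i/10)(k_j/10) sigma_T^2
C2 = np.outer(k / 10.0, k / 10.0) * sigma2_T

Gamma = np.array([[1, -1, -1, 0,  0,  0],       # x1 = p53
                  [0,  0,  0, 1, -1,  0],       # x2 = Mdm2-precursor
                  [0,  0,  0, 0,  1, -1]])      # x3 = Mdm2

def rates(x):
    # deterministic reaction rates R_j^D, with the Hill-type degradation term in R_3
    x1, x2, x3 = x
    return np.array([k[0],
                     k[1] * x1,
                     k[2] * x3 * x1 / (A1 + x1),
                     k[3] * x1,
                     k[4] * x2,
                     k[5] * x3])

def mean_field(t, x):
    # ds_l/dt = sum_j Gamma_lj R_j^D(s); the fluctuation corrections are left out in this sketch
    return Gamma @ rates(x)

sol = solve_ivp(mean_field, (0.0, 200.0), [1.0, 0.5, 0.5], max_step=0.1)
print(sol.y[:, -1])   # mean concentrations of p53, Mdm2-precursor and Mdm2 at the final time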
With the same increase inσ^2_T, the average period showed an increase of approximately 0.93%. This trend is mirrored in the average period of p53 variance, with an increase of approximately 0.063 whenσ^2_Tincreases from zero to one, reflecting a comparable increase of 0.93 %. Notably, these findings highlight a consistent pattern wherein both the average period of p53 concentration and the average period of p53 variance experienced similar increases. §.§ Case II: Time-dependent extrinsic fluctuations We extended the p53 model to incorporate time-dependent (TD) extrinsic fluctuations. To build this model, we used the Arrhenius equation again, but now we considered temperature oscillations over time, assuming that they mimic those observed in the human body <cit.>, then we get, k_i(t)= k^0_ie^1/40( cos( π/12 t )), wherei,j={1,2,3,4,5,6} andk_i^0denote the reaction constants evaluated atT=309.65K. To model the system we need to add a set of differential equations forC^1_l,j(l={1,2,3}) andC^2_i,j, for this, we used (<ref>) and (<ref>), first we calculateg_i(k_i)for this we taking the time derivative ofk_i(t), g_i(k_i,t)= - k_i(t) π/480sin( π/12 t ), we expect that whend C^2_i,j /dt=0Equation (<ref>) is recovered, then we propose G_ij(k,t) = (k_i(t)/10) (k_j(t)/10) ( σ^2_T/e^1/20( cos( π/12 t )) + σ^2_T/100) π/240 sin( π/12 t ),σ^2_Trepresents the variance in temperature and is considered to be constant, as described in the previous subsection. Then we get the next equations forC^2_i,j(for more details, see Appendix <ref>) d C^2_i,j/dt= ( - C^2_i,j + ( k_i(t) k_j(t) + C^2_i,j) ( σ^2_T/100 e^1/20( cos( π/12 t ))+ σ^2_T) ) π/240 sin( π/12 t ), In Figure <ref>, we numerically solved the system for three cases: intrinsic fluctuations, intrinsic and stationary extrinsic fluctuations, and intrinsic and TD-extrinsic fluctuations. We observed that oscillations were present even when TD-extrinsic fluctuations were introduced. However, p53 concentrations also exhibited a change in their period. Another observation is that the amplitude of the variance of p53 oscillates with respect to other cases becausek_i(t)oscillates. We focus on the interval of 100-200 h, and then calculate the average of the amplitudes and periods for p53 concentration and their variance, similarly to the previous section. Simulations were performed for different values ofσ^2_T, and the parameters used are listed in Table <ref>. The results of the simulations are shown in Figure <ref>, where we observe that asσ^2_Tincreases, the amplitude of the p53 oscillations increases. Whenσ^2_Tincreased from zero to one, the increment was approximately 1.97%. Additionally, the amplitude of the variance followed a similar pattern, expanding asσ^2_Tincreased, using the same increase inσ^2_T, which increases by 525 %. Similarly, the p53 period tended to elongate with increasingσ^2_T; whenσ^2_Tincreased from zero to one, this elongation increased by 1.12 %. Moreover, the variability in the variance of p53 increased whenσ^2_Tincreased from zero to one, reflecting an increase of approximately 1.12%. A significant difference was observed compared with stationary extrinsic fluctuations. With time-dependent (TD) extrinsic fluctuations, the period of p53 concentration, as well as its variance, exhibits a considerable standard deviation. Figure <ref> shows the distributions of these periods, analyzing their distributions at interval of 100-200 h. 
In this figure, we observe that at each value ofσ^2_T, there is a distribution of periods and the center of the distribution increases asσ^2_Tincreases. These distributions arise because the p53 concentration (and variance) exhibits oscillating periods owing to the oscillation of the variablesk_i(t). § RESULTS AND CONCLUSIONS This study presents an approach based on a set of ordinary differential equations using the first two moments to examine extrinsic fluctuations in stochastic systems. This approach offers computational efficiency compared to the direct implementation of the chemical master equation, as the system is transformed into a set of ordinary differential equations. The derived differential equations provide information about the dynamics of the system considering two types of fluctuations: intrinsic fluctuations, which are inversely proportional to the system sizeΩ, and extrinsic fluctuations, which are equivalent to fluctuations in the parameters (rate constants). The accuracy of this approach is limited to systems with first-order reactions and/or small fluctuations ( intrinsic and extrinsic). For systems that do not have these characteristics, incorporating higher-order moments into the approximation can improve accuracy <cit.>. Although this expansion increases the number of differential equations, the use of ordinary differential equations is computationally more efficient. Reproducing the results of previous research <cit.>, we applied this approach to analyze a biological oscillator, focusing on the p53 model and its response to temperature-induced extrinsic fluctuations. Although it is reasonable to expect drastic changes in system dynamics with the addition of extrinsic fluctuations, the oscillatory nature of the system persisted. Our findings underscore the impact of extrinsic fluctuations on the nature of oscillations in biological systems such as biological clocks. Notably, alterations in oscillatory behavior or dynamics depend on the characteristics of extrinsic fluctuations, emphasizing the importance of considering them in biological system modeling In particular, we found that extrinsic fluctuations increased the oscillation amplitude, with the most significant change observed in the amplitude of p53 variance. This indicates that the concentration of p53 becomes more variable. Both the p53 concentration and variance increased when extrinsic fluctuations were introduced. Although the increase in each period was small, it could have a significant long-term impact, and thus, two potential health consequences can arise. First, if the slowing of dynamics becomes significant, the suppression of DNA damage can be delayed, causing its accumulation. Second, the increased variability in p53 concentration could result in insufficient levels of p53. Although the analyzed model is highly simplified, it highlights the factors that could contribute to cancer proliferation. Therefore, it is essential to study extrinsic fluctuations in more complex networks to analyze whether amplification or stabilization of the effects of extrinsic fluctuations occurs as the number of interactions in the system increases. Such studies will enhance the development of alternative mechanisms for preventing unwanted scenarios related to health issues. 
Moreover, the proposed method of expanding moments around the mean holds promise for applications in diverse systems, as it allows the analysis of systems with many components and interactions by transforming the problem of solving the Master Chemical Equation into a problem of solving a system of ordinary differential equations. This is advantageous because computers do not have problems solving high-dimensional systems of ordinary differential equations while solving a high-dimensional master equation is not yet practical due to the memory needs. § ACKNOWLEDGMENTS Manuel E. Hernández-García acknowledges the financial support of CONAHCYT through the program "Becas Nacionales 2023". This work was supported by Programa de Apoyo a Proyectos de Investigación e Innovación Tecnológica (PAPIIT UNAM; IA203524 to M.G.-S), and by ANID—Millennium Science Initiative Program—Millennium Institute for Integrative Biology (iBio; ICN17_022 to M.G.-S). Additionally, the authors want to thank for their support to Luis A. Aguilar from the Laboratorio Nacional de Visualización Científica Avanzada and Jair Santiago García Sotelo, Iliana Martínez, Rebeca Mucino, and Eglee Lomelín from the Laboratorio Internacional de Investigación sobre el Genoma Humano, Universidad Nacional Autónoma de México, Santiago de Querétaro, México. Jorge Velázquez-Castro acknowledges Benemérita Universidad Autónoma de Puebla-VIEP financial support through project 00398-PV/2024. § DECLARATIONS The authors declare that there is no conflict of interest regarding the publication of this article. All data generated or analyzed during this study are included in this published article. § EXOGENOUS COX INGERSOLL ROSS MODEL The derived equations (<ref>)–(<ref>) were applied to one of the systems presented in <cit.> for validation purposes. This is a model of RNA dynamics in which a function describes RNA synthesis in the systemK(t)(which is stochastic and varies with time) because it may be complicated to capture the transcription in detail. RNA subsequently matures and degrades as shown by the following reactions: 0 []K(t)→ N []β→ M []γ→ 0, whereNdenotes nascent RNA,Mdenotes mature RNA,βis the synthesis rate,γis the degradation rate, andK(=K(t)) (we omit to write explicit time dependence) is a function that follows the Cox–Ingersoll–Ross process. This is described by the following Langevin equation d K/dt= k_0 -K + √(k_0 K/a)ϵ(t), wherek_0is the deterministic constant rate,ais a parameter that adjusts the intensity of the fluctuations, andϵ(t)is a white Gaussian noise variable. Its corresponding Fokker-Plank equation is, d P(K,t)/dt= - ∂/∂ K[(k_0- K)P(K,t)] + ∂^2/∂ K^2[( k_0 K/a) P(K,t) ], First, we determine the matrix of the stoichiometric coefficientsα_ijandβ_ij, and the stoichiometric matrixΓ_ijfrom reactions (<ref>): α_jl = [ 0 0; 1 0; 0 1 ] , β_jl = [ 1 0; 0 1; 0 0 ], Γ_lj = [ 1 -1 0; 0 1 -1 ], using this, we calculated the reaction rates of the system as follows: R_1^D = k, R_2^D = β n , R_3^D = γ m , wherenandmrepresent the concentrations ofNandM, respectively, andk(=k(t)) is the mean ofK(t), we omit to write explicit time dependence. From (<ref>), we find thatg(k)=k_0 - kandG(k)=2 k_0 k/a. Using these elements and following the procedure developed in Section <ref>, we derive a set of differential equations that describe the system. We have the following equations forn,mandkd n/dt= k - β n, d m/dt= β n - γ m, d k/dt= k_0 - k. 
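The exogenous drive K(t) itself is straightforward to simulate. The sketch below uses an Euler-Maruyama step with the drift g(K) = k_0 - K and the diffusion G(K) = 2k_0K/a quoted above, for which the stationary mean and variance of the process are k_0 and k_0²/a; k_0, a and the step size are illustrative values, not those used for the figures.

import numpy as np

rng = np.random.default_rng(0)
k0, a = 2.0, 10.0                 # deterministic rate and noise-strength parameter (placeholders)
dt, n_steps = 1e-3, 500_000       # integration step and number of steps

K = np.empty(n_steps)
K[0] = k0
for i in range(1, n_steps):
    drift = k0 - K[i - 1]                          # g(K) = k0 - K
    diff  = 2.0 * k0 * max(K[i - 1], 0.0) / a      # G(K) = 2 k0 K / a
    K[i]  = K[i - 1] + drift * dt + np.sqrt(diff * dt) * rng.standard_normal()

print(K.mean(), K.var())   # approximately k0 = 2.0 and k0**2 / a = 0.4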
We now derive the covariances and correlations between system variables, d σ^2_n,n/dt= β n/Ω - 2 βσ^2_n,n + k/Ω + 2 C^1_n,k, d σ^2_m,m/dt= β n + γ m/Ω + 2 βσ^2_m,n - 2 γσ^2_m,m, d σ^2_n,m/dt= -β n/Ω - (β + γ) σ^2_n,m + βσ^2_n,n + C^1_m,k, d C^1_n,k/dt= -β C^1_n,k + C^2_k,k - C^1_n,k, d C^1_m,k/dt= β C^1_n,k - γ C^1_m,k - C^1_m,k, d C^2_k,k/dt= -2 C^2_k,k + 2 k_0 k/a. These results are similar to those derived by <cit.>. However, in this case, we directly substitute (<ref>),g(k)andG(k)into the set of differential equations derived in section <ref>, thereby allowing for a more straightforward derivation. The reactions in this model are first-order, then we have an exact description of the system. Next, we simulate the differential equations of the proposed system. Figure <ref> shows the effects of the extrinsic fluctuations on the system. The left panel shows the variance of the system variables, indicating a significant increase in the size of fluctuations when both intrinsic and extrinsic fluctuations are considered (purple) compared with when only intrinsic fluctuations are present (green). The panel on the right shows the averages of the involved species and the range of values where fluctuations are observed, with only intrinsic (green) and intrinsic and extrinsic (purple) fluctuations demonstrating a substantial increase in the fluctuation size when extrinsic fluctuations are also present. § SIZE-DEPENDENT MODEL OF P53 We also analyzed an alternative model presented in <cit.>. However, upon conducting a system analysis using our differential equation framework, we observed that if we choose the parameter set outlined in <cit.>, the stability of the system dynamics becomes size-dependent, as we will see next. For example, whenΩ=500(moleculesmol^-1), the concentration of p53 approached zero, as shown in Figure <ref>. In this system, we have the following reactions. We have the next stoichiometric matrix Γ_lj = [ 1 -1 0 0 0; 0 0 1 -1 0; 0 0 0 1 -1 ], the reaction rates of the system, R_1^D = k_1 x_1, R_2^D= k_2 x_1x_3, R_3^D = k_3 x_1 , R_4^D= k_4 x_2, R_5^D = k_5 x_3, wherex_1,x_2, andx_3represent the concentrations of p53, Mdm2 precursor, and Mdm2, respectively.k_1is the p53 synthesis rate,k_2is the p53 degradation rate by Mdm2,k_3is the p53-dependent Mdm2 production rate,k_4is the Mdm2 maturation rate, andk_5is the Mdm2 degradation rate. We conducted a modeling exercise and the results are shown in Figure <ref>. This figure distinctly illustrates the disparities when considering intrinsic fluctuations alone versus when incorporating both intrinsic and stationary extrinsic fluctuations. From this figure, we can see that the concentration of p53 approaches zero whenΩ=500(moleculesmol^-1). In contrast, whenΩ=5000(moleculesmol^-1), the concentrations of the molecules exhibit damped oscillations, eventually reaching a steady state distinct from zero. This behavior persists even when extrinsic fluctuations are included, indicating thatΩis the principal factor that influences the stability of the system. Consequently, this model is unsuitable for analyzing systems with a low number of molecules. It is important to study the phase space to understand the dynamics of this system thoroughly. However, we did not perform this analysis because it was beyond the scope of this study. § FLUCTUATIONS BY TEMPERATURE IN REACTION CONSTANTS §.§ Case I: Stationary To introduce fluctuations in the reaction rate constants of the system, we assumed that reaction rates could be affected by the temperature. 
To do this, we will assume that they follow the Arrhenius equation <cit.>, k(T)=A e^-E_a/RT, wherekis the reaction rate constant,Ais the frequency factor (collision frequency between molecules),E_ais the activation energy,Ris the ideal gas constant, andTis the temperature in Kelvins. Therefore, if we assume that the temperature fluctuates, then Equation (<ref>) becomes k(T + δ T)=A e^-E_a/R(T+δ T), whereδTdenotes the temperature fluctuation. Assuming the fluctuation is small, we can perform a first-order expansion k(T + δ T)≈ A e^-E_a/RT( 1 + E_a/R T^2δ T ) = k(T) ( 1 + E_a/R T^2δ T ) = k(T) + η_k(T), (η_k(T)=k(T) ( E_a/R T^2δT )), fluctuations can be introduced into the system, considering that the temperature fluctuates. From this same equation, we will calculate the covariances ofk, so we have C^2_k,k= ⟨(η_k(T))(η_k(T))|=⟩( E_a/R T^2 k )^2 σ^2_T. (σ^2_T= ⟨(δT)(δT)|$⟩) With this formula, our system depends only on the covariance of temperature. Assuming that the temperature at which these experiments are conducted is around 309.65 K, which is approximately body temperature, as well as R=8.314 J/mol K, and taking E_a=80 kJ/mol as the approximate activation energy for protein synthesis and degradation (E_a is a different quantity for each particular system) <cit.>, now the covariance of the reaction rates will be finally C^2_k,k≈(k/10)^2 σ^2_T. Thus, the covariances of the reaction rate constants depend only on the temperature variance. §.§ Case II: Time-depend Temperature can be time-dependent; for example, in the human body, the temperature fluctuates. To describe this, we rely on <cit.> to provide a formula for the temperature behavior in the human body, and also we considered the results in <cit.> for give the amplitude of the oscillations, then we get T(t)= T_max + 1/4 cos( π/12 t ), where T(t) is the temperature in Kelvin and T_max=309.65 K indicating that the body temperature oscillates around it within a period of 24 h, and we have an amplitude of 0.25 K. Now we substitute (<ref>) into (<ref>), from which we obtain, k(t)=A e^( -E_a/R( T_max + 1/4 cos( π/12 t ) )), we redefine this expression because this quantity oscillates around a value, which is k^0= A e^-E_a/RT_max, with this condition we get, k(t) ≈ k^0 e^(1/40 cos( π/12 t ) ). In this way, it will be easier to employ this function, and now all our quantities depend solely on time. We differentiate this function with respect to time, d k(t)/dt= -k(t) π/480 sin( π/12 t ), with this result and from section <ref> we can find that g(k,t)= -k(t) π/480 sin( π/12 t ). For modeling our system we need the next equation d C^2_k,k/dt= 2 C^2_k,k( ∂ g(k,t) /∂ k)+ ( G(k, t) + C^2_k,k/2d^2 G(k, t) / d k^ 2) , we expect that, when d C^2_k,k/dt=0 Equation (<ref>) is recovered. Therefore, we propose G(k,t)= (k(t)/10)^2 ( σ^2_T/e^(1/20 cos( π/12 t ) )+ σ^2_T/100) π/240 sin( π/12 t ), σ^2_T is a constant that represents the value of the variance of the temperature and is the same that appears in Equation (<ref>), with this, we get d C^2_k,k/dt= ( - C^2_k,k + ( k^2(t) + C^2_k,k) ( σ^2_T/100e^(1/20 cos( π/12 t ) )+ σ^2_T) ) π/240 sin( π/12 t ) . Now, we can model our system considering that the variable k varies and fluctuates over time. § PARAMETERS AND INITIAL CONDITIONS Tables <ref>, <ref>, and <ref> show the parameters and initial conditions used in each case: intrinsic fluctuations, intrinsic fluctuations and stationary extrinsic fluctuations, and intrinsic fluctuations and time-dependent extrinsic fluctuations, respectively. 
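As a quick numerical check of the Arrhenius-based construction in the appendix above, the sketch below evaluates the exact rate k(T(t)) along the 24 h temperature cycle T(t) = T_max + (1/4)cos(πt/12) and compares it with the approximation k⁰ e^{cos(πt/12)/40}; E_a, R and T_max are the values quoted in the appendix, while k⁰ is a placeholder.

import numpy as np

Ea, R, Tmax = 80e3, 8.314, 309.65     # J/mol, J/(mol K), K (values quoted in the appendix)
k0 = 0.5                              # rate constant at T = Tmax (placeholder)

t = np.linspace(0.0, 48.0, 481)       # two 24 h cycles, in hours
T = Tmax + 0.25 * np.cos(np.pi * t / 12.0)

k_exact  = k0 * np.exp(-(Ea / R) * (1.0 / T - 1.0 / Tmax))   # Arrhenius rate along T(t)
k_approx = k0 * np.exp(np.cos(np.pi * t / 12.0) / 40.0)      # approximation used in the text

print(np.max(np.abs(k_exact - k_approx) / k_exact))  # relative error of order 1e-4
print(Ea / (R * Tmax**2))                            # ~0.10 per K: a 1 K fluctuation shifts k by ~10 %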
XX Samo Samoilov, M. S., Price, G., & Arkin, A. P. (2006). From fluctuations to phenotypes: the physiology of noise. Science's STKE, 2006(366), re17-re17. Alon Alon, U. (2019). An introduction to systems biology: design principles of biological circuits. Chapman and Hall/CRC. Vechio Del Vecchio, D., & Murray, R. M. (2015). Biomolecular feedback systems (p. NJ). Princeton, NJ: Princeton University Press. GarGardiner, C. (2009). Stochastic methods (Vol. 4). Berlin: Springer. Jong De Jong, H. (2002). Modeling and simulation of genetic regulatory systems: a literature review. Journal of computational biology, 9(1), 67-103. GomezGomez-Uribe, C. A., & Verghese, G. C. (2007). Mass fluctuation kinetics: Capturing stochastic effects in systems of chemical reactions through coupled mean-variance computations. The Journal of Chemical Physics, 126(2), 024109 Manu Hernández-García, M. E., & Velázquez-Castro, J. (2023). Corrected Hill Function in Stochastic Gene Regulatory Networks. arXiv preprint arXiv:2307.03057. SimpsonSimpson, M. L., Cox, C. D., Allen, M. S., McCollum, J. M., Dar, R. D., Karig, D. K., & Cooke, J. F. (2009). Noise in biological circuits. Wiley Interdisciplinary Reviews: Nanomedicine and Nanobiotechnology, 1(2), 214-225. HilfiHilfinger, A., & Paulsson, J. (2011). Separating intrinsic from extrinsic fluctuations in dynamic biological systems. Proceedings of the National Academy of Sciences, 108(29), 12167-12172. Arriaga Arriaga, E. A. (2009). Determining biological noise via single cell analysis. Analytical and bioanalytical chemistry, 393, 73-80. Vastola Gorin, G., Vastola, J. J., Fang, M., & Pachter, L. (2022). Interpretable and tractable models of transcriptional noise for the rational design of single-molecule quantification experiments. Nature Communications, 13(1), 7620. Gillespie Gillespie, D. T. (1977). Exact stochastic simulation of coupled chemical reactions. The journal of physical chemistry, 81(25), 2340-2361. VolVoliotis, M., Thomas, P., Grima, R., & Bowsher, C. G. (2016). Stochastic simulation of biomolecular networks in dynamic environments. PLoS Computational Biology, 12(6), e1004923. ZechZechner, C., & Koeppl, H. (2014). Uncoupled analysis of stochastic reaction networks in fluctuating environments. PLoS Computational Biology, 10(12), e1003942. HamHam, L., Coomer, M. A., & Stumpf, M. P. (2022). The chemical Langevin equation for biochemical systems in dynamic environments. The Journal of Chemical Physics, 157(9), 094105 ToniToni, T., & Tidor, B. (2013). Combined model of intrinsic and extrinsic variability for computational network design with application to synthetic biology. PLoS Computational Biology, 9(3), e1002960. CaraCaravagna, G., Mauri, G., & d'Onofrio, A. (2013). The interplay of intrinsic and extrinsic bounded noises in biomolecular networks. PLoS one, 8(2), e51174. SwainSwain, P. S., Elowitz, M. B., & Siggia, E. D. (2002). Intrinsic and extrinsic contributions to stochasticity in gene expression. Proceedings of the National Academy of Sciences, 99(20), 12795-12800. NovaNovák, B., & Tyson, J. J. (2008). Design principles of biochemical oscillators. Nature reviews Molecular cell biology, 9(12), 981-991. RoeRoenneberg, T., Kuehnle, T., Juda, M., Kantermann, T., Allebrandt, K., Gordijn, M., & Merrow, M. (2007). Epidemiology of the human circadian clock. Sleep medicine reviews, 11(6), 429-438. VecchioDel Vecchio, D., Ninfa, A. J., & Sontag, E. D. (2008). Modular cell biology: retroactivity and insulation. Molecular systems biology, 4(1), 161. 
Briat Briat, C., & Khammash, M. (2023). Noise in biomolecular systems: Modeling, analysis, and control implications. Annual Review of Control, Robotics, and Autonomous Systems, 6, 283-311. Vou Vousden, K. H. (2000). p53: Death Star. Cell, 103(5), 691-694. Brew Brewer, D. S. (2006). Modelling the p53 gene regulatory network. University of London, University College London (United Kingdom). Zato Geva‐Zatorsky, N., Rosenfeld, N., Itzkovitz, S., Milo, R., Sigal, A., Dekel, E., ... & Alon, U. (2006). Oscillations and variability in the p53 system. Molecular systems biology, 2(1), 2006-0033. Batch Batchelor, E., Loewer, A., & Lahav, G. (2009). The ups and downs of p53: understanding protein dynamics in single cells. Nature Reviews Cancer, 9(5), 371-377. Ko Kőrös, E. M., Orbán, M., & Nagy, Z. (1973). Periodic heat evolution during temporal chemical oscillations. Nature Physical Science, 242(115), 30-31. BusseBusse, H. G. (1971). Temperature variations in an oscillating chemical reaction. Nature Physical Science, 233(42), 137-138. Dere Derevich, I. V. (2010). Effect of temperature fluctuations of fluid on thermal stability of particles with exothermic chemical reaction. International journal of heat and mass transfer, 53(25-26), 5920-5932. Pil Pilditch, C. A., & Grant, J. (1999). Effect of temperature fluctuations and food supply on the growth and metabolism of juvenile sea scallops (Placopecten magellanicus). Marine Biology, 134, 235-248. Stri Stricker, J., Cookson, S., Bennett, M. R., Mather, W. H., Tsimring, L. S., & Hasty, J. (2008). A fast, robust and tunable synthetic gene oscillator. Nature, 456(7221), 516-519. Eloe Swain, P. S., Elowitz, M. B., & Siggia, E. D. (2002). Intrinsic and extrinsic contributions to stochasticity in gene expression. Proceedings of the National Academy of Sciences, 99(20), 12795-12800. HunHunziker, A., Jensen, M. H., & Krishna, S. (2010). Research article Stress-specific response of the p53-Mdm2 feedback loop. Lakatos Lakatos, E., Ale, A., Kirk, P. D., & Stumpf, M. P. (2015). Multivariate moment closure techniques for stochastic kinetic models. The Journal of Chemical Physics, 143(9). Atkin Atkins, P., De Paula, J., & Keeler, J. (2023). Atkins' physical chemistry. Oxford University Press. Brown Brown, E. N., Choe, Y., Luithardt, H., & Czeisler, C. A. (2000). A statistical model of the human core-temperature circadian rhythm. American Journal of Physiology-Endocrinology and Metabolism, 279(3), E669-E683. Daka Dakappa, P. H., & Mahabala, C. (2015). Analysis of long-term temperature variations in the human body. Critical Reviews™ in Biomedical Engineering, 43(5-6). VoetVoet, D., Voet, J. G., & Pratt, C. W. (2016). Fundamentals of biochemistry: life at the molecular level. John Wiley & Sons.
http://arxiv.org/abs/2408.11367v1
20240821063849
Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation
[ "Fieke Hillerstrom", "Gertjan Burghouts" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Towards Probabilistic Inductive Logic Programming with Neurosymbolic Inference and Relaxation F. Hillerström and G.J. Burghouts, TNO, The Netherlands August 26, 2024 ===================================== § ABSTRACT Many inductive logic programming (ILP) methods are incapable of learning programs from probabilistic background knowledge, e.g. coming from sensory data or neural networks with probabilities. We propose Propper, which handles flawed and probabilistic background knowledge by extending ILP with a combination of neurosymbolic inference, a continuous criterion for hypothesis selection (BCE) and a relaxation of the hypothesis constrainer (NoisyCombo). For relational patterns in noisy images, Propper can learn programs from as few as 8 examples. It outperforms binary ILP and statistical models such as a Graph Neural Network. Keywords: Inductive Logic Programming, Neurosymbolic inference, Probabilistic background knowledge, Relational patterns, Sensory data. § INTRODUCTION Inductive logic programming (ILP) <cit.> learns a logic program from labeled examples and background knowledge (e.g. relations between entities). Due to the strong inductive bias imposed by the background knowledge, ILP methods can generalize from small numbers of examples <cit.>. Other advantages are the ability to learn complex relations between the entities, the expressiveness of first-order logic, and the fact that the resulting program can be understood and transferred easily because it is in symbolic form <cit.>. This makes ILP an attractive alternative methodology besides statistical learning methods. For many real-world applications, dealing with noise is essential. Mislabeled samples are one source of noise. To learn from noisy labels, various ILP methods have been proposed that generalize a subset of the samples <cit.>. To advance methods to learn recursive programs and predicate invention, <cit.> proposed a method that searches for small programs that generalize subsets of the samples and are combined, as in <cit.>, under relaxed conditions to allow for mislabeled samples, while trading off program complexity for training accuracy. These methods deal with noisy labels, but do not explicitly take into account errors in the background knowledge, nor are they designed to deal with probabilistic background knowledge. Most ILP methods take as a starting point the inputs in symbolic declarative form <cit.>. Real-world data often does not come in such a form. A predicate p(.), detected in real-world data, is neither binary nor perfect. The assessment of the predicate can be uncertain, resulting in a non-binary, probabilistic predicate. Or the assessment can be wrong, leading to imperfect predicates. Dealing with noisy and probabilistic background knowledge is relevant for learning from sources that exhibit uncertainties. A probabilistic source can be a human who needs to make judgements at an indicated level of confidence. A source can also be a sensor measurement with some confidence. For example, an image is described by the objects that are detected in it, by a deep learning model. Such a model predicts locations in the image where objects may be, at some level of confidence. Some objects are detected with a lower confidence than others, e.g. if the object is partially observable or lacks distinctive visual features.
The deep learning model implements a probabilistic predicate that a particular image region may contain a particular object, e.g. 0.7 :: vehicle(x). Given that most object detection models are imperfect in practice, it is impossible to determine a threshold that distinguishes the correct and incorrect detections. In <cit.> it was shown that two common ILP frameworks, Aleph <cit.> and Popper <cit.>, typically fail to find the correct programs when dealing with predicted objects in images; even with a state-of-the-art object detection model, and after advanced preprocessing of said detections. In the absence of an ideal binarization of probabilities, most ILP methods are not applicable to probabilistic sources <cit.>. We propose a method towards probabilistic ILP. At a high level, ILP methods typically induce a logical program that entails many positive and few negative samples, by searching the hypothesis space, and subsequently testing how well the current hypothesis fits the training samples <cit.>. One such method is Popper, which learns from failures <cit.> (LFF), in an iterative cycle of generating hypotheses, testing them and constraining the hypothesis search. Our proposal is to introduce a probabilistic extension to LFF at the level of hypothesis testing. For that purpose, we consider neurosymbolic AI <cit.>. Within neurosymbolic AI a neural network predicts the probability for a predicate. For example a neural network for object detection, which outputs a probability for a particular object being present in an image region, e.g., 0.7 :: vehicle(x). Neurosymbolic AI connects this neural network with knowledge represented in a symbolic form, to perform reasoning over the probabilistic predicates predicted by the neural network. With this combination of a neural network and symbolic reasoning, neurosymbolic AI can reason over unstructured inputs, such as images. We leverage neurosymbolic programming and connect it to the tester within the hypothesis search. One strength of neurosymbolic programming is that it can deal with uncertainty and imperfect information <cit.>, in our case the probablistic background knowledge. We propose to use neurosymbolic inference as tester in the test-phase of the LFF cycle. Neurosymbolic reasoning calculates an output probability for a logical query being true, for every input sample. The input samples are the set of positive and negative examples, together with their probabilistic background knowledge. The logical query evaluated within the neurosymbolic reasoning is the hypothesis generated in the generate-phase of the LFF cycle, which is a first-order-logic program. With the predicted probability of the hypothesis being true per sample, it becomes possible to compute how well the hypothesis fits the training samples. That is used to continue the LFF cycle and generate new constraints based on the failures. Our contribution is a step towards probabilistic ILP by proposing a method called Propper. It builds on an ILP framework that is already equipped to deal with noisy labels, Popper-MaxSynth <cit.>, which we extend with neurosymbolic inference which is able to process probabilistic facts, i.e. uncertain and imperfect background knowledge. Our additional contributions are a continuous criterion for hypothesis selection, that can deal with probabilities, and a relaxed formulation for constraining the hypothesis space. Propper and the three contributions are outlined in Figure <ref>. 
We compare Popper and Propper with statistical ML models (SVM and Graph Neural Network) for the real-life task of finding relational patterns in satellite images based on objects predicted by an imperfect deep learning model. We validate the learning robustness and efficiency of the various models. We analyze the learned logical programs and discuss the cases which are hard to predict. § RELATED WORK For the interpretation of images based on imperfect object predictions, ILP methods such as Aleph <cit.> and Popper <cit.> proved to be vulnerable and lead to incorrect programs or not returning a program at all <cit.>. Solutions to handle observational noise were proposed <cit.> for small binary images. In <cit.> images were analyzed via physical properties. This method could estimate the direction of the light source or the position of a ball from images in very specific conditions or without clutter or distractors. In <cit.>, neural networks were learned jointly with induction of recursive first-order logic theories with predicate invention. This was demonstrated on small binary images of digits. Real-life images are more complex and cluttered. We aim to extend these works to realistic samples, e.g. large color images that contain many objects under partial visiblity and in the midst of clutter, causing uncertainties. Contrary to <cit.>, we take pretrained models as a starting point, as they are often already very good at their task of analyzing images. Our focus is on extending ILP to handle probabilistic background knowledge. In statistical relational artificial intelligence (StarAI) <cit.> the rationale is to directly integrate probabilities into logical models. StarAI addresses a different learning task than ILP: it learns the probabilistic parameters of a given program, whereas ILP learns the program <cit.>. Probabilities have been integrated into ILP previously. Aleph <cit.> was used by <cit.> to find interesting clauses and then learn the corresponding weights. ProbFOIL <cit.> and SLIPCOVER <cit.> search for programs with probabilities associated to the clauses, to deal with the probabilistic nature of the background knowledge. SLIPCOVER searches the space of probabilistic clauses using beam search. The clauses come from Progol <cit.>. Theories are searched using greedy search, where refinement is achieved by adding a clauses for a target predicate. As guidance the log likelihood of the data is considered. SLIPCOVER operates in a probabilistic manner on binary background knowledge, where our goal is to involve the probabilities associated explicitly the background knowledge. How to combine these probabilistic methods with recent ILP frameworks is unclear. In our view, it is not trivial and possibly incompatible. Our work focuses on integrating a probabilistic method into a modern ILP framework, in a simple yet elegant manner. We replace the binary hypothesis tester of Popper <cit.> by a neurosymbolic program that can operate on probabilistic and imperfect background knowledge <cit.>. Rather than advanced learning of both the knowledge and the program, e.g. <cit.>, we take the current program as the starting point. Instead of learning parameters, e.g. <cit.>, we use the neurosymbolic program for inference given the program and probabilistic background knowledge. Real-life samples may convey large amounts of background knowledge, e.g. images with many objects and relations between them. Therefore, scalability is essential. 
Scallop <cit.> improved the scalability over earlier neurosymbolic frameworks such as DeepProbLog <cit.>. Scallop introduced a tunable parameter k to restrain the validation of hypotheses by analyzing the top-k proofs. They asymptotically reduced the computational cost while providing relative accuracy guarantees. This is beneficial for our purpose. By replacing only the hypothesis tester, the strengths of ILP (i.e. hypothesis search) are combined with the strengths of neurosymbolic inference (i.e. probabilistic hypothesis testing). § PROPPER ALGORITHM To allow ILP on flawed and probabilistic background knowledge, we extend modern ILP (Section <ref>) with neurosymbolic inference (<ref>) and coin our method Propper. The neurosymbolic inference requires program conversion by grammar functions (<ref>), and we added a continuous criterion for hypothesis selection (<ref>), and a relaxation of the hypothesis constrainer (<ref>). §.§ ILP: Popper Popper represents the hypothesis space as a constraint satisfaction problem and generates constraints based on the performance of earlier tested hypotheses. It works by learning from failures (LFF) <cit.>. Given background knowledge B, represented as a logic program, positive examples E^+ and negative examples E^-, it searches for a hypothesis H that is complete (∀ e ∈ E^+, H ∪ B e) and consistent (∀ e ∈ E^-, H ∪ B e). The algorithm consists of three main stages (see Figure <ref>, left). First a hypothesis in the form of a logical program is generated, given the known predicates and constraints on the hypothesis space. The Test stage tests the generated logical program against the provided background knowledge and examples, using Prolog for inference. It evaluates whether the examples are entailed by the logical program and background knowledge. From this information, failures that are made when applying the current hypothesis, can be identified. These failures are used to constrain the hypothesis space, by removing specializations or generalizations from the hypothesis space. In the original Popper implementation <cit.>, this cycle is repeated until an optimal solution is found; the smallest program that covers all positives and no negative examples (see <cit.> for a formal definition). Its extension Combo combines small programs that do not entail any negative example <cit.>. When no optimal solution is found, Combo returns the obtained best solution. The Popper variant MaxSynth does allow noise in the examples and generates constraints based on a minimum description length cost function, by comparing the length of a hypothesis with the possible gain in wrongly classified examples <cit.>. §.§ Neurosymbolic Inference: Scallop Scallop is a language for neurosymbolic programming which integrates deep learning with logical reasoning <cit.>. Scallop reasons over continuous, probabilistic inputs and results in a probabilistic output confidence. It consists of two parts: a neural model that outputs the confidence for a specific concept occurring in the data and a reasoning model that evaluates the probability for the query of interest being true, given the input. It uses provenance frameworks <cit.> to approximate exact probabilistic inference, where the AND operator is evaluated as a multiplication (AND(x, y) = x * y), the OR as a minimization (OR(x, y) = min(1, x + y)) and the NOT as a 1-x. Other, more advanced formulations are possible, e.g. noisy-OR(x, y) = 1-(1-a)(1-b) for enhanced performance. For ease of integration, we considered this basic provenance. 
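The basic provenance can be written down directly. The snippet below is an illustration of these operator semantics in plain Python (not Scallop itself), applied to made-up probabilistic facts with hypothetical predicate names; it shows how the confidence of a conjunctive clause and of a two-clause disjunction would be computed.

def AND(x, y):       # conjunction: product of the probabilities
    return x * y

def OR(x, y):        # disjunction: sum, capped at 1
    return min(1.0, x + y)

def NOT(x):          # negation
    return 1.0 - x

def noisy_OR(x, y):  # alternative disjunction: 1 - (1 - x)(1 - y)
    return 1.0 - (1.0 - x) * (1.0 - y)

# made-up probabilistic facts for one sample (hypothetical predicates)
p_vehicle, p_bridge, p_on = 0.7, 0.9, 0.6

clause1 = AND(AND(p_vehicle, p_bridge), p_on)   # vehicle(A), bridge(B), on(A,B) -> 0.378
clause2 = 0.25                                  # confidence of some second clause
print(clause1, OR(clause1, clause2), noisy_OR(clause1, clause2))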
To improve the speed of the inference, only the most likely top-k hypotheses are processed during the intermediate steps of computing the probabilities for the set of hypotheses. §.§ Connecting ILP and Neurosymbolic Inference Propper changes the Test stage of the Popper algorithm (see Figure <ref>): the binary Prolog reasoner is replaced by the neurosymbolic inference using Scallop, operating on probabilistic background knowledge (instead of binary), yielding a probability for each sample given the logical program. The background knowledge is extended with a probability value before each first-order-logic statement, e.g. 0.7 :: vehicle(x). The Generate step yields a logic program in Prolog syntax. The program can contain multiple clauses, which are understood as a disjunction (OR): only one of them needs to be satisfied. Each clause is a function of predicates, with input arguments. The predicate arguments can differ between the clauses within the logic program. This is different from Scallop, where every clause in the logic program is assumed to be a function of the same set of arguments. As a consequence, the Prolog program requires syntax rewriting to arrive at an equivalent Scallop program. This rewriting involves three steps by consecutive grammar functions, which we illustrate with an example. Take the Prolog program: = = The bodies of are extracted by: b() = {[], []}. The sets of arguments of are extracted by: v() = {{}, {}}. For a Scallop program, the clauses in the logic program need to be functions of the same argument set. Currently the sets are not the same: {} vs. {}. Function e(·) adds a dummy predicate for all non-used arguments, i.e. in the first clause, such that all clauses operate on the same set, i.e. {}: e([, ], {}) = After applying grammar functions b(·), v(·) and e(·), the Prolog program becomes the equivalent Scallop program : = = = §.§ Selecting the Best Hypothesis MaxSynth uses a minimum-description-length (MDL) cost to select the best solution: MDL_B,E = size(h) + fn_B,E(h) + fp_B,E(h) <cit.>. The MDL cost compares the number of correctly classified examples with the number of literals in the program. This makes the cost dependent on the dataset size and requires binary predictions in order to determine the number of correctly classified examples. Furthermore, it is doubtful whether the number of correctly classified examples can be compared directly with the rule size, since it makes the selection of the rule size dependent on the dataset size again. Propper uses the Binary Cross Entropy (BCE) loss to compare the performance of hypotheses, as it is a more continuous measure than MDL. The neurosymbolic inference predicts an output confidence for an example being entailed by the hypothesis. The BCE-cost compares this predicted confidence with the groundtruth (one or zero). For y_i being the groundtruth label and p_i the confidence predicted via neurosymbolic inference for example i, the BCE cost for N examples becomes: BCE = -1/N∑_i=1^N(y_i * log(p_i) + (1-y_i) * log(1-p_i)). Scallop reasoning automatically avoids overfitting, by punishing the size of the program, because when adding more or longer clauses the probability becomes lower by design. The more ANDs in the program, the lower the output confidence of the Scallop reasoning, due to the multiplication of the probabilities. Therefore, making a program more specific will result in a higher BCE-cost, unless the specification is beneficial to remove FPs.
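A minimal sketch of this selection criterion on made-up numbers: given the labels of the examples and the confidences returned by the neurosymbolic tester for a hypothesis, the BCE-cost is the mean negative log-likelihood, and the hypothesis with the lowest cost is preferred. The confidences below are illustrative, not outputs of the actual system.

import numpy as np

def bce_cost(y, p, eps=1e-12):
    # BCE = -(1/N) sum_i [ y_i log p_i + (1 - y_i) log(1 - p_i) ]
    p = np.clip(p, eps, 1.0 - eps)   # guard against log(0)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

y = np.array([1, 1, 1, 0, 0, 0])                      # ground-truth labels (positives, then negatives)
p = np.array([0.81, 0.64, 0.38, 0.09, 0.22, 0.55])    # confidences from the neurosymbolic tester (made up)
print(bce_cost(y, p))                                 # ~0.46; lower is better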
Making the program more generic will cover more samples (due to the addition operator for the OR). However the confidences for the negative samples will increase as well, which will increase the BCE-cost again. The BCE-cost is purely calculated on the predictions itself, and thereby removes the dependency on the dataset size and the comparison between number of samples and program length. §.§ Constraining on Inferred Probabilities Whereas Combo <cit.> and MaxSynth <cit.> yield optimal programs given perfect background knowledge, with imperfect and probabilistic background knowledge no such guarantees can be provided. The probabilistic outputs of Scallop are converted into positives and negatives before constraining. The optimal threshold is chosen by testing 15 threshold values, evenly spaced between 0 and 1 and selecting the threshold resulting in the most highest true positives plus true negatives on the training samples. MaxSynth generates constraints based on the MDL loss <cit.>, making the constraints dependent on the size of the dataset. To avoid this dependency, we introduce the NoisyCombo constrainer. Combo generates constraints once a false positive (FP) or negative (FN) is detected. ∃ e ∈ E^-, H ∪ B e: prune generalisations. ∃ e ∈ E^+, H ∪ B e or ∀ e ∈ E^-, H ∪ B e: prune specialisations. NoisyCombo relaxes this condition and allows a few FPs and FNs to exist, depending on an expected noise level, inspired by <cit.>. This parameter defines a percentage of the examples that could be imperfect, from which the allowed number of FPs and FNs is calculated. ∑(e ∈ E^-, H ∪ B e) > noise_level * N_negatives: prune generalisations. ∀ e ∈ E^-, H ∪ B e: prune specialisations. The positives are not thresholded by the noise level, since programs that cover at least one positive sample are added to the combiner. § ANALYSES We validate Propper on a real-life task of finding relational patterns in satellite images, based on flawed and probabilistic background knowledge about the objects in the images, which are predicted by an imperfect deep learning model. We analyze the learning robustness under various degrees of flaws in the background knowledge. We do this for various models, including Popper (on which Propper is based) and statistical ML models. In addition, we establish the learning efficiency for very low amounts of training data, as ILP is expected to provide an advantage because it has the inductive bias of background knowledge. We analyze the learned logical programs, to compare them qualitatively against the target program. Finally, we discuss the cases that are hard to predict. §.§ First Dataset The DOTA dataset <cit.> contains many satellite images. This dataset is very challenging, because the objects are small, and therefore visual details are lacking. Moreover, some images are very cluttered by sometimes more than 100 objects. For the background knowledge, we leverage a pretrained model <cit.> to predict the objects in the images, with for each object a label, location (bounding box) and a probability (confidence value). For each image, the respective predictions are added to the background knowledge, as a predicate with a confidence, e.g. 0.7 :: vehicle(x). The locations of the objects are used to calculate a confidence for two relations: and . This information is added to the background knowledge as well. Figure <ref> shows various images from the dataset, including zoomed versions to reveal some more details and to highlight the small size of the objects. 
Figure <ref> shows an image with many objects. The relational patterns of interest is `vehicle on bridge'. For this pattern, there are 11 positive test images and 297 negative test images. Figure <ref> shows both a positive (left) and negative image (right). To make the task realistic, both sets contain images with vehicles, bridges and roundabouts, so the model cannot distinguish the positives and negatives by purely finding the right sets of objects; the model really needs to find the right pattern between the right objects. Out of the negative images, 17 are designated as hard, due to incorrect groundtruths (2 images) and incorrect detections (15 images). These hard cases are shown in Figure <ref>. §.§ Experimental Setup The dataset is categorized into three subsets that are increasingly harder in terms of flaws in the background knowledge. Easy: This smallest subset excludes the incorrect groundtruths, a manual check that most object predictions are reasonable, i.e. images with many predicted objects are withheld (this includes images with many false positives). Intermediate: This subset excludes the incorrect groundtruths. Compared to Easy, this subset adds all images with many object predictions. Hard: This is the full set, which includes all images, also the ones with incorrect groundtruths. We are curious whether ILP methods can indeed generalize from small numbers of examples, as is hypothesized <cit.>. Many datasets used in ILP are using training data with tens to hundreds (sometimes thousands) of labeled samples, see e.g. <cit.> and <cit.>. We investigate the performance for as few as {1, 2, 4, 8} labels for respectively the positive and negative set, as this is common in practical settings. Moreover, common ILP datasets are about binary background knowledge, without associated probabilities, see e.g. <cit.> and <cit.>. In contrast, we consider probabilistic background knowledge. From the Easy subset we construct an Easy-1.0 set by thresholding the background knowledge with a manually chosen optimal threshold, which results in an almost noiseless dataset and shows the complexity of the logical rule to learn. All experiments are repeated 5 times, randomly selecting the training samples from the dataset and using the rest of the data set as test set. §.§ Model Variants and Baselines We compare Propper with Popper (on which it builds), to validate the merit of integrating the neurosymbolic inference and the continuous cost function BCE. Moreover, we compare these ILP models with statistical ML models: the Support Vector Machine <cit.> (SVM) because it is used so often in practice; a Graph Neural Network <cit.> (GNN) because it is also relational by design which makes it a reasonable candidate for the task at hand i.e. finding a relational pattern between objects. All methods except the SVM are relational and permutation invariant. The objects are unordered and the models should therefore represent them in an orderless manner. The SVM is not permutation invariant, as objects and their features have some arbitrary but designated position in its feature vectors. All methods except Popper are probabilistic. All methods except the most basic Popper variant, can handle some degree of noise. The expected noise level for NoisyCombo is set at 0.15. The tested models are characterized in Table <ref>. For a valid comparison, we increase the SVM's robustness against arbitrary object order. 
With prior knowledge about the relevant objects for the pattern at hand, these objects can be placed in front of the feature vector. This preprocessing step makes the SVM model less dependent on the arbitrary order of objects. In the remainder of the analyses, we call this variant `SVM ordered'. To binarize the probabilistic background knowledge as input for Popper, the detections are thresholded with the general value of 0.5. §.§ Increasing Noise in Background Knowledge We are interested in how the robustness of model learning for increasing difficulty of the dataset. Here we investigate the performance on the three subsets from Section <ref>: Easy, Intermediate and Hard. Figure <ref> shows the performance for various models for increasing difficulty. The four subplots show the various types of models. For a reference, the best performing model is indicated by an asterisk (*) in all subplots. It is clear that for increasing difficulty, all models struggle. The statistical ML models struggle the most: the performance of the GNN drops to zero on the Hard set. The SVMs are a bit more robust but the performance on the Hard set is very low. The most basic variant of Popper also drops to zero. The noise-tolerant Popper variants (Noisy-Combo and MaxSynth) perform similarly to the SVMs. Propper outperforms all models. This finding holds for all Propper variants (Combo, Noisy-Combo and MaxSynth). Using BCE as a cost function yields a small but negligible advantage over MDL. §.§ Learning Efficiency with Few Labels We are curious how the models perform with as few as {1, 2, 4, 8} labels for respectively the positive and negative set. The performance is measured on the Hard set. Figure <ref> shows the performance for various models for increasing training set size. The four subplots show the various types of models. Again, for reference, the best performing model is indicated by an asterisk (*) in all subplots. The upper left shows the statistical ML models. They do perform better with more training samples, but the performance is inferior to the ILP model variants. The Propper variant with Scallop and Noisy-Combo and BCE is the best performer. BCE does not improve significantly over MDL. MaxSynth has an optimization criterion that cannot operate with less than three training samples. The main improvement by Propper is observed when switching from Combo to Noisy-Combo and switching from Prolog to Scallop (i.e. neurosymbolic inference). §.§ Second Dataset We are interested how the methods perform on a different dataset. The MS-COCO dataset <cit.> contains a broad variety of images of everyday scenes. This dataset is challenging, because there are many different objects in a wide range of settings. Similar to the previous experiment, the background knowledge is acquired by the predictions of a pretrained model <cit.> which are used to extract the same two relations. Figure <ref> shows some examples. The pattern of interest is `person next to a car'. We consider all images that have a maximum of two persons and two cars, yielding 1728 images. We use random 8 positive and 8 negative images for training, which is repeated 5 times. We test both ILP variants, Popper and Propper, for the MaxSynth constrainer, because the Combo constrainer regularly did not return a solution. We validate Popper with various thresholds to be included as background knowledge. Propper does not need such a threshold beforehand, as all background knowledge is considered in a probabilistic manner. 
The results are shown in Table <ref>. Propper is the best performer, achieving f1 = 0.947. This is significantly better than the alternatives: SVM achieves f1 = 0.668 (-0.279) and Popper achieves f1 = 0.596 (-0.351). Adding probabilistic behavior to ILP is helpful for challenging datasets. Table <ref> shows the learned programs, how often each program was predicted across the experimental repetitions, and the respective resulting f1 scores. The best program is that there is a person on a car. Popper yields the same program, however, with a lower f1-score, since the background knowledge is thresholded before learning the program, removing important data from the background knowledge. This confirms that in practice it is intractable to set a perfect threshold on the background knowledge. It is beneficial to use Propper which avoids such prior thresholding. § DISCUSSION AND CONCLUSIONS We proposed Propper, which handles flawed and probabilistic background knowledge by extending ILP with a combination of neurosymbolic inference, a continuous criterion for hypothesis selection (BCE), and a relaxation of the hypothesis constrainer (NoisyCombo). Neurosymbolic inference has a significant impact on the results. Its advantage is that it does not need prior thresholding on the probabilistic background knowledge (BK), which is needed for binary ILP and is always imperfect. NoisyCombo has a small yet positive effect. It provides a parameter for the level of noise in BK, which can be tailored to the dataset at hand. The BCE has little impact. Propper is able to learn a logic program about a relational pattern that distinguishes between two sets of images, even if the background knowledge is provided by an imperfect neural network that predicts concepts in the images with some confidence. With as few as a handful of examples, Propper learns effective programs and outperforms statistical ML methods such as a GNN. Although we evaluated Propper on two common datasets with different recording conditions, a broader evaluation of Propper across various domains and datasets to confirm its generalizability and robustness for various (especially non-image) use cases, is interesting. The proposed framework of integrated components allows for an easy setup of the system and simple adaptation to new developments/algorithms within the separate components. However, the integration as is performed now could be non-optimal in terms of computational efficiency. For example the output of the hypothesis generation is an answer set, which in Popper is converted to Prolog syntax. Propper converts this Prolog syntax to Scallop syntax. Developing a direct conversion from the answer sets to the Scallop syntax is recommended. We favored modularization over full integration and computational efficiency, in order to facilitate the methodological configuration and comparison of the various components. It is interesting to investigate whether a redesign of the whole system with the components integrated will lead to a better system. To make the step to fully probabilistic ILP, the allowance of probabilistic rules should be added to the system as well, for example by integration of StarAI methods <cit.>. apalike
http://arxiv.org/abs/2408.11121v1
20240820182338
DOMBA: Double Model Balancing for Access-Controlled Language Models via Minimum-Bounded Aggregation
[ "Tom Segal", "Asaf Shabtai", "Yuval Elovici" ]
cs.LG
[ "cs.LG", "cs.AI", "cs.CL", "cs.CR" ]
[ [ Received September 15, 1996; accepted March 16, 1997 ======================================================== § ABSTRACT The utility of large language models (LLMs) depends heavily on the quality and quantity of their training data. Many organizations possess large data corpora that could be leveraged to train or fine-tune LLMs tailored to their specific needs. However, these datasets often come with access restrictions that are based on user privileges and enforced by access control mechanisms. Training LLMs on such datasets could result in exposure of sensitive information to unauthorized users. A straightforward approach for preventing such exposure is to train a separate model for each access level. This, however, may result in low utility models due to the limited amount of training data per model compared to the amount in the entire organizational corpus. Another approach is to train a single LLM on all the data while limiting the exposure of unauthorized information. However, current exposure-limiting methods for LLMs are ineffective for access-controlled data, where sensitive information appears frequently across many training examples. We propose – double model balancing – a simple approach for training and deploying LLMs that provides high utility and access-control functionality with security guarantees. aggregates the probability distributions of two models, each trained on documents with (potentially many) different access levels, using a “min-bounded" average function (a function that is bounded by the smaller value, e.g., harmonic mean). A detailed mathematical analysis and extensive evaluation show that safeguards restricted information while offering utility comparable to non-secure models. § INTRODUCTION Organizations can benefit greatly from training dedicated LLMs, such as coding assistants, email writers or question-answering models, on their data <cit.>. While the benefits can be substantial, such data often contains restricted information, and an access-control mechanism ensuring users can only access information according to their access rights is usually in place. However, LLMs inherently lack such access-control mechanisms, which can lead to the exposure of sensitive information to unauthorized users <cit.>. A basic approach for introducing access control to LLMs is to train a separate LLM for each access level <cit.>. However, as our experiments show, this approach can substantially reduce model utility, since the amount of data for each access level is limited. For example, training a model on emails from only one department in an organization may be insufficient for constructing effective organizational emails. To overcome this limitation, sufficient data (including restricted data) must be included in the model's training set. This means that any secure and high utility method should limit the exposure of the training data to users of the model (according to their access rights). In this paper, we propose , a method for training and deploying LLMs that incorporates an access-control mechanism while maintaining high utility. An overview of is presented in Figure <ref>. To protect sensitive information, “balances" two submodels (trained on two different data partitions, each including different access levels) during inference, using a min-bounded average function; intuitively, each submodel “knows" different restricted information. During text generation, the min-bounded function makes it unlikely for information known to just one submodel to be generated. 
Assuming that restricted information is not shared between the two partitions, this ensures that no restricted information is likely to be generated. We note that it is important that the division of access levels into partitions is done such that access levels with shared sensitive information are assigned to the same partition. To analyze the privacy protection provided by , we formalize the notion of “exposure of secrets." This notion is based on the change in probability of a token (relative to other tokens) between two language models. The greater the change in a token's probability (relative to other tokens) when using one model over the other, the greater the exposure of that token. In our approach, the exposure of over both submodels is bounded by the best possible value (i.e., replacing by any other model would not improve the bound). This means that DOMBA limits the exposure of “secrets" that appear in just one partition (since one of the submodels does not “know" them). Evaluating sensitive information exposure is challenging due to the fact that defining what constitutes sensitive information is a complex and context-dependent task, varying based on organizational policy and other factors <cit.>. Measuring average-case sensitive information exposure is inadequate, since an adversary could devise a prompt that causes the model to substantially deviate from average behavior <cit.>. To address these challenges, we introduce three new security metrics for evaluating sensitive information exposure. The first metric assesses worst-case and “extreme-case" exposure across data records. In the second metric, we evaluate the probabilities assigned to certain tokens that should not be exposed by a secure model. The third new metric involves a “secret inference attack" (SIA) that is based on membership inference attack (MIA) techniques. In addition to evaluating on the three new metrics, we also evaluate it using the canary technique <cit.>, in which specific phrases (canaries) are injected into the training data, and the model's inclination to generate them compared to similar phrases not included in the training data is measured. The contributions of this work are as follows: * Providing the first practical and comprehensive solution for access control in LLMs, a solution that provides high utility and security by safely utilizing restricted data. * Developing a mathematical framework to model language models' exposure of sensitive information and establishing bounds for 's exposure. * Creating three novel empirical evaluation metrics for assessing sensitive information exposure and employing these metrics in our evaluation of . § RELATED WORK Very few studies have addressed the use of LLMs in the access-control scenario: tiwari2023information proposed using mixture of experts (MoE) in conjunction with training a separate model for each access level in order to support users with multiple access rights. However, this approach relies solely on non-restricted documents, which substantially reduces its utility. wutschitz2023rethinkingprivacymachinelearning proposed using retrieval-augmented generation (RAG) with access rights, which prevents the retrieval of unauthorized documents. As illustrated by tiwari2023information, using RAG by itself, without training the model on the access-controlled data, may be insufficient to achieve a substantial adjustment of LLM behaviors, such as altering writing styles or tones and incorporating new domain-specific knowledge and terminology. 
Therefore, securely using LLMs trained on access-controlled data, which is the purpose of our study, is essential. Several studies have explored the use of differential private training algorithms for deep learning <cit.>. These algorithms are designed to provide privacy guarantees when each “secret" appears in only one or a few data records. Scaling these algorithms to protect privacy when hundreds of documents contain the same sensitive information results in a substantial degradation in utility <cit.>. Differencial privacy (DP) federated learning <cit.> is a specific application of DP in deep learning in which contributions from different clients are aggregated during training, and noise is added to maintain DP. While federated learning might seem promising for the access-control scenario (with each access level treated as a client), 9069945 demonstrated that performance drops substantially when using a small amount of clients (access levels). In contrast, inherently does not depend on the number of access levels. PATE proposed PATE, a differential private framework for training machine learning models for classification tasks. PATE is not suitable when there is a large amount of classes, and it requires an unlabeled non-private dataset. Given that all next-token prediction datasets are naturally labeled, and the number of tokens (which could be seen as classification classes) is very large for modern LLMs, PATE is not applicable in our case. ginart2022submix introduced SUBMIX, an inference-time partition-level differential private model ensemble, however their ensemble requires many models to provide meaningful privacy guarantees, resulting in both costly text generation and a degradation in utility. In addition to DP-based approaches, sanitization methods have been proposed to empirically reduce sensitive information exposure by anonymizing names, numbers, etc. <cit.>. Sanitization may be insufficient for the protection of organizational data which can include sensitive text that does not contain a specific name or number (e.g., “the product performs worse than expected" may be considered sensitive text). One might consider using alignment methods for protecting sensitive information. Such methods include prompt engineering <cit.> and reinforcement learning from human feedback (RLHF) <cit.>, which is based on instructing the model to behave in a certain way or fine-tuning it to alter its behavior. These methods have been shown to be vulnerable to data extraction attacks and “jailbreaking" <cit.>; in addition, theoretical work by wolf2024fundamental suggests that an adversary can bypass model alignment methods with a long enough query. Another approach for mitigating sensitive information exposure is using a privacy regularized loss function <cit.>, however this approach does not provide theoretical guarantees. Unlike previous methods, DOMBA addresses the problem by directly managing probability distributions without relying on the sensitive data type or depending on regularization. § METHODOLOGY In this section, we define the concept of exposing a secret and describe 's training and aggregation processes. Using formal mathematical language, we establish a bound to 's exposure of secrets. We also show that no other aggregation method could ever achieve a better bound. §.§ Training DOMBA-INIT: Let d_1,...,d_k be the datasets corresponding to access levels 1,...,k. 
We randomly assign each access level to one of two data partitions and train a submodel on each partition separately, denoted as M_1 and M_2. DOMBA-FT: For each access level AL, let M_1 and M_2 be the resulting submodels of DOMBA-INIT. If AL was assigned to M_1 during DOMBA-INIT, we fine-tune M_2 on d_AL. Otherwise, we fine-tune M_1 on d_AL. We then save the states of M_1 and M_2, which will be used during inference for users with access level AL. If a user has multiple access rights, an MoE can be used as demonstrated by tiwari2023information (this scenario is not explored in this study). We note that PEFT (parameter-efficient fine-tuning) methods such as LORA <cit.> can be used to efficiently train and store the different states of M_1 and M_2. §.§ Preliminaries Let Σ be a set of tokens, and let n=|Σ|. We use t to refer to a token and c to refer to a context (i.e., a sequence of tokens preceding t). We use M, M_1, M_2 to refer to next-token prediction language models and denote the probability assigned by M to token t given context c as p_M(t|c) . We use ∑ (sum) without specification to indicate summation over all tokens (i.e., ∑_t ∈Σ). We note that the theorems in the subsections that follow commonly refer to arbitrary language models M_1 and M_2, but it may be helpful to think of M_1 and M_2 as the outputs of DOMBA-INIT or DOMBA-FT. §.§ Exposure of Secrets We begin by defining exposing a secret in a formal sense. As highlighted by 10.1145/3531146.3534642, secrecy is relative – something is deemed secret if it is known by some but unknown to others. Therefore, our concept of secrecy involves comparing probabilities assigned to a token by two models. One possible approach is to use the ratio of the probabilities assigned by the models to assess secrecy. However, this method has drawbacks. Consider the following probability distributions over the tokens a,b,c,d: p_1=(0.7,0.1,0.1,0.1) and p_2=(0.97,0.01,0.01,0.01). The probabilities' ratios (p_1/p_2) are (0.72, 10, 10, 10). This implies that tokens b,c, and d are “secret" in p_1 compared to p_2. However, it seems more appropriate to consider a as secret in p_2 compared to p_1, because p_2 assigns a a probability that is 97 higher than all other tokens, whereas p_1 assigns it a probability that is only seven times higher. To address this, we compare the probability ratio of a token t (between two models) to a “typical probability ratio" (TPR). Let f: Σ→ℝ^+. The geometric mean of f is GM(f(t)) := exp(1/n∑log(f(t))). Let c be a context, and let M_1,M_2 be language models. We define the “TPR at c" of M_1, M_2 as tpr_c(M_1, M_2)=GM(p_M_1(t|c)/p_M_2(t|c)). Let c be a context, t be a token, and M_1, M_2 be language models. We call t “α-exposed by M_1 over M_2 at c" if p_M_1(t|c)/p_M_2(t|c) · tpr_c(M_1, M_2) = α. We also say that t is “≤α-exposed" if t is β-exposed for some β≤α. In other words, instead of directly dividing p_M_1(t|c) by p_M_2(t|c), we adjust p_M_2(t|c) by multiplying it by the TPR. In the example discussed, the TPR is 5.18, which results in the following exposures of tokens a,b,c,d: M_1 over M_2: (0.14, 1.93, 1.93, 1.93) and M_2 over M_1: (7.19, 0.52, 0.52, 0.52). These values better reflect our intuition that a is secret and not b,c, and / or d. §.§ Exposure Properties In this subsection, we explore certain properties of exposure that are essential for later discussions. Let c be a context, and let M be a language model. We define the “typical probability at c" of M as tp_c(M)=GM(p_M(t|c)). 
Let t be a token, we further define the “relative probability of t at c by M" as rp_M(t|c) := p_M(t|c)/tp_c(M). tpr_c(M_1, M_2) = tp_c(M_1)/tp_c(M_2). tpr_c(M_1, M_2) = exp(1/n∑log(p_M_1(t|c)/p_M_2(t|c))) = exp(1/n∑log(p_M_1(t|c)))/exp(1/n∑log(p_M_2(t|c))) = tp_c(M_1)/tp_c(M_2). By this lemma, α-exposed is equivalent to rp_M_1(t|c)/rp_M_2(t|c) = α. If t is α-exposed by M_1 over M_2 at c and β-exposed by M_2 over M_3 at c, then t is αβ-exposed by M_1 over M_3 at c. rp_M_1(t|c)/rp_M_3(t|c) = rp_M_1(t|c)/rp_M_2(t|c)·rp_M_2(t|c)/rp_M_3(t|c) = αβ. §.§ Aggregation In this subsection, we provide a formal definition of the notion of a min-bounded function and describe how DOMBA aggregates two submodels. Let f: ℝ^+^2 →ℝ^+. We call f a proper-avg function if ∀x,y: min(x,y) ≤ f(x,y) ≤max(x,y). Let f be a proper-avg function. we call f min-bounded if ∀x,y, f(x,y) ≤λ_fmin(x,y) for some constant λ_f. In practice we use the generalized mean <cit.> with α < 0 for min-bounded functions, that is, f(x,y)=(1/2(x^α + y^α))^1/α, λ_f = 2^-1/α. Two special cases are: * α→ -∞ (Minimum): f(x,y) = min(x,y), λ_f=1. * α = -1 (Harmonic mean): f(x,y) = 2/x + 2/y, λ_f=2. Note that the arithmetic mean (x+y/2) is not min-bounded. Let M_1, M_2 be language models, and let f be a min-bounded function. We define DAGG_f(M_1, M_2) (denoted as M) as a model that assigns probabilities as follows: p_M(t|c) = M(t|c)/∑_t' ∈Σ M(t'|c), where M(t|c) = f(rp_M_1(t|c), rp_M_2(t|c)). We note that DOMBA uses f to average the relative probabilities. In contrast, averaging the probabilities would lead to inferior bounds in the subsequent subsection. §.§ Bounding the Exposure of DOMBA In this subsection, we establish the bounds on DOMBA's exposure over both submodels (Theorem <ref>) as well as over any other model (Corollary <ref>). We begin by introducing several definitions and lemmas that will be used for proving the main theorem later on. Let M_1, M_2 be language models, and let f be a min-bounded function. Let M = DAGG_f(M_1, M_2). We define f_c(M_1, M_2) = GM(M(t|c)^-1). While it might be unclear how to interpret f_c(M_1, M_2), it is related to a notion of “mean exposure" between M_1 and M_2: Let c be a context and M_1, M_2 be language models. We define the “mean absolute exposure between M_1 and M_2 at c" as MAE_c(M_1, M_2) = GM(max(rp_M_1(t|c)/rp_M_2(t|c), rp_M_2(t|c)/rp_M_1(t|c))). f_c(M_1, M_2) ≤√(MAE_c(M_1, M_2)). Let x := ∑log(min(rp_M_1(t|c), rp_M_2(t|c))), y:=∑log(max(rp_M_1(t|c), rp_M_2(t|c))). We observe that x+y = ∑log(rp_M_1(t|c)) + ∑log(rp_M_2(t|c)) = 0 + 0 = 0. by definition, y - x = n ·log(MAE_c(M_1,M_2)), which implies, x = -n/2log(MAE_c(M_1,M_2)), we conclude that f_c(M_1, M_2) ≤exp(-x/n) = √(MAE_c(M_1, M_2)). rp_M(t|c) = M(t|c) ·f_c(M_1, M_2). rp_M(t|c) = p_M(t|c)/tp_c(M) = p_M(t|c)/exp(1/n∑log(p_M(t|c))) = M(t|c)/exp(1/n∑log(M(t|c))) = M(t|c) ·f_c(M_1, M_2) In the following theorem, we provide a lower bound to the minimum token exposure achievable over two models. Let c be a context and M, M_1, M_2 be language models. There exists a token t that is ≥√(MAE_c(M_1, M_2))-exposed by M over either M_1 or M_2. By the proof of lemma <ref>: √(MAE_c(M_1, M_2)) = exp(-x/n) = GM(rp_M(t|c)/min(rp_M_1(t|c), rp_M_2(t|c))). The right end side is an average over tokens. Therefore there exists a token for which the term inside is greater than or equal to the left end side, which finishes the proof. 
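The aggregation and the resulting per-token exposure can be sketched in a few lines of Python, which is convenient for sanity-checking the bound of the following theorem on toy distributions. This is a minimal illustration operating on plain probability vectors, not the authors' implementation.

```python
import numpy as np

def generalized_mean(x, y, alpha):
    """Min-bounded proper-avg function for alpha < 0 (alpha -> -inf gives the
    minimum, alpha = -1 gives the harmonic mean 2 / (1/x + 1/y))."""
    return (0.5 * (x ** alpha + y ** alpha)) ** (1.0 / alpha)

def relative_probs(p):
    """rp(t|c) = p(t|c) divided by the geometric mean of p(.|c)."""
    return p / np.exp(np.mean(np.log(p)))

def domba_aggregate(p1, p2, alpha=-1.0):
    """Aggregate two next-token distributions into the DOMBA distribution."""
    scores = generalized_mean(relative_probs(p1), relative_probs(p2), alpha)
    return scores / scores.sum()

def exposure(p_model, p_ref):
    """Per-token exposure of `p_model` over `p_ref` at this context."""
    return relative_probs(p_model) / relative_probs(p_ref)

# Toy check: a token "known" only to submodel 1 is not strongly exposed.
p1 = np.array([0.90, 0.04, 0.03, 0.03])   # submodel 1 assigns token 0 a high probability
p2 = np.array([0.25, 0.25, 0.25, 0.25])   # submodel 2 does not
p = domba_aggregate(p1, p2, alpha=-1.0)
worst = np.maximum(exposure(p, p1), exposure(p, p2)).max()
print(worst)   # stays below lambda_f * f_c(M_1, M_2), cf. the bound below
```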
In the following theorem, we demonstrate that using DOMBA provides a bound on the exposure that is a constant multiple of the best possible bound (Theorem <ref>). This constant is solely dependent on f and can even reach a value of 1 (f=Minimum). Let f be a min-bounded function, t a token and M = DAGG_f(M_1,M_2). t is ≤γ-exposed over both M_1 and M_2 for γ = λ_f f_c(M_1, M_2) ≤λ_f √(MAE_c(M_1, M_2)). rp_M(t|c)/rp_M_1(t|c) = M(t|c) ·f_c(M_1, M_2)/rp_M_1(t|c) ≤λ_f min(rp_M_1(t|c), rp_M_2(t|c)) ·f_c(M_1, M_2)/rp_M_1(t|c)≤λ_f f_c(M_1, M_2). We note that by assuming M_1 and M_2 assign similar relative probabilities to most tokens, we can anticipate the mean absolute exposure to be low. Essentially, we achieve average case behavior for all tokens. In the following corollary, we informally think of M_b as our base model (although the corollary holds in general). Let c be a context, and let M_1, M_2, M_b be language models. Let t be a token that is α-exposed by M_1 over M_b at c and β-exposed by M_2 over M_b at c. Let M:=DAGG_f(M_1, M_2). Then t is ≤γmin(α, β)-exposed by M over M_b at c for γ as in theorem <ref>. Follows directly from lemma <ref> and theorem <ref>. Stating the corollary in other words, if we fix a context c, for any token t, the exposure of the aggregated model M over any model M_b is bounded by the minimum of the exposures of the submodels M_1 and M_2 over M_b, multiplied by a small value. This implies that if the exposure of a token t by either submodel over M_b is small (i.e., t is not substantially exposed by at least one submodel), then the exposure by the aggregated model over M_b cannot be too large (i.e., t will not be substantially exposed by the aggregated model). Given that in , each submodel is trained on separate access levels, and assuming that access levels with shared secrets are assigned to the same partition, it is expected that each secret will not be substantially exposed by at least one of the submodels, and thus, will provide a defense against the exposure of these secrets. § EVALUATION §.§ Datasets Since access-controlled datasets are not publicly available, we required datasets that mimic the access-control scenario. These datasets need to be divided into different topics (which serve as access levels), with many data records per topic. Additionally, to use two of our security evaluation metrics, the data records should contain phrases that we refer to as “sensitive-mimicking phrases" – phrases unique to the topic that could be considered sensitive / secret. §.§.§ Movie Reviews The first dataset we utilized is the IMDB Spoiler reviews dataset <cit.>. We randomly selected 50 reviews of different movies released after 2015 and considered the movie of each review selected as an access level. Then, we collected all of the reviews for each of the 50 movies. We note that some reviews contain details about the movie's plot, cast members, or characters, which mimic sensitive information. We utilized the Movies Metadata dataset <cit.> to retrieve cast members' names and used them as sensitive-mimicking phrases. The number of reviews totaled 22,742, with 10% of each movie's reviews set aside for evaluation. The number of reviews per movie ranged between 160 and 751. §.§.§ Recipes The second dataset used is the Food.com Recipes and Interactions dataset <cit.>. We utilized class labels of the Food-101 dataset <cit.> to partition the recipes into multiple sets. Each set includes recipes with titles containing a specific class label (e.g., pizza). 
We selected the 10 most frequent classes as the access levels. We note that the recipes include specific details about the process of creating each dish, which can mimic, for example, sensitive detailed descriptions of product manufacturing processes. We use the ingredients of each recipe as sensitive-mimicking phrases. However, since some ingredients are common among many classes, we only consider ingredients that appear in recipes of a certain class with a frequency at least 10 times greater than the frequency in all of the recipes. The number of recipes totaled 10,829, with 10% of each class put aside for evaluation. The number of recipes per class ranged between 408 and 2283. §.§ Training For training we used LORA <cit.>. LORA is a fine-tuning technique that uses a small number of trainable parameters. Training is relatively fast with LORA, and the resulting model requires minimal storage space. These qualities were crucial for our experiments, as we conducted numerous trials with limited computational resources. However, it is important to note that the theoretical analysis is not dependent on the training method, and we anticipate that the experiments can be replicated using other training techniques as well. The base model used was OpenAI-GPT <cit.> which has 117 million parameters and a vocabulary size of 40,478. This model's original training data is a books dataset from 2015 <cit.>. This limits the prior knowledge the model possesses regarding movies and recipes. Since recent LLMs are trained on more recent and diverse datasets, evaluating them on sensitive information from the movie and recipe datasets would be challenging, as the models are probably familiar with some of the information. We note that although recent LLMs are larger and perform better than OpenAI-GPT, many are still based on the same underlying principles. Our theoretical analysis and proposed approach generalizes to any language model based on next-token prediction and does not rely on the specifics of any particular architecture. Regarding training parameters, we conducted experiments with varying numbers of training epochs (1, 2, and 4). The hyperparameters for LORA were set to default values and were not explored: r=64, lora_alpha=32, lora_dropout=0.05, optimizer=paged_adamw_32bit, learning rate=5e-4, and warmup_ratio = 0.03. All experiments were conducted using an NVIDIA A100-SXM4-40GB GPU. §.§ Compared Models §.§.§ Non-secure models (NSec) In these models, which serve as baselines, no attempt is made to secure sensitive information. FT-ALL: OpenAI-GPT fine-tuned on the entire training dataset. AGG-A: Similar to DOMBA-INIT, but using arithmetic mean, a non-min-bounded function, during aggregation. §.§.§ Secure models (Sec) While these models are trained on all the data with an effort made to secure sensitive information, they do not include an access-control mechanism. SUBMIX: An aggregated model constructed using the method of ginart2022submix, with three submodels (two parts + the base model). For a meaningful comparison, we tuned the privacy parameter β to 0.3, which resulted in utility comparable to DOMBA on the movies dataset. D-I-H: DOMBA-INIT (without DOMBA-FT), using harmonic mean for aggregation. D-I-M: DOMBA-INIT (without DOMBA-FT), using minimum for aggregation. §.§.§ Access-controlled models (AC) These models are designed to secure sensitive information while providing an access-control mechanism. 
Per-AL: A separate model for each access level, achieved by fine-tuning OpenAI-GPT only on data records of that access level. DOMBA: Our full method, using the minimum function for aggregation. §.§ Metrics In this section, we describe the metrics used to evaluate the models' utility and security. For utility we use perplexity, which measures the model's ability to predict the next token in a text. For security we use four different metrics: exposure, secret perplexity, a secret inference attack AUC-ROC, and the canary technique score <cit.>. We note that for access-controlled models, we evaluate the security of each variant (corresponding to an access level) using data with a different access level than the one that the variant was trained for. §.§.§ Utility Evaluation We evaluate utility in terms of perplexity on two evaluation sets as follows: 1. HOPPL: perplexity on held out data with access levels that were not used for training. This metric provides a “fair" way of comparing secure and non-secure models, as the non-secure models are not expected to gain by “knowing" restricted information. 2. AUPPL: perplexity on held out data of the access levels used for training (for access-controlled models - the corresponding variant is used for each access level). The main purpose of this metric is to compare the utility of secure and access-controlled models. We expect the access-controlled models to gain utility by “knowing" authorized restricted information. For both metrics above, we calculate the perplexity as: perp_M(D_e) = exp(1/|D_e|∑_r ∈ D_e∑_i-log(p_M(r_i|r_<i)), where |D_e| is the amount of tokens in D_e, r is a data record, r_i is the i'th token in the record, r_<i are the tokens preceding it, and p_M is the probability assigned by the model. §.§.§ Exposure (EXP) In our theoretical analysis (Theorem <ref>), we established that the exposure of M=DAGG_f(M_1,M_2) over both M_1 and M_2 is bounded for any token by λ_f f_c(M_1, M_2) ≤λ_f √(MAE_c(M_1, M_2)). To validate this, we measure “extreme case" exposure of M over M_1 and M_2. We report the maximum and 99th percentile exposure (= rp_M(t|c)/min(rp_M_1(t|c), rp_M_2(t|c))) for all tokens observed in the data, given the previous tokens as context. §.§.§ Secret Perplexity (SPPL) One way of measuring the model's ability to handle sensitive information is by evaluating perplexity specifically on sensitive-mimicking phrases. Given a model M, we measure the perplexity of each instance of a sensitive-mimicking phrase in the evaluation dataset. Specifically, let x:=x_1,...,x_k be the token representations of a sensitive-mimicking phrase and c be the tokens preceding this phrase, we measure perp_M(x|c) = exp(1/k∑_i-log(p_M(x_i|c,x_1,...,x_i-1))). We report the average of the mean perplexity of each access level. This metric aims to provide a basic, rough evaluation of a model's ability to handle sensitive information. §.§.§ Secret Inference Attack (SIA) This attack is based on a membership inference attack with a reference model <cit.>. The original attack works as follows: Given a reference model M_b, a target model M, and a potential training data record r of M, measure the log ratio of the probabilities of r according to M and M_b, that is log(p_M(r)/p_M_b(r)). If this value is above a certain threshold, consider r as belonging to the training data of M. In our scenario, instead of inferring the membership of any data record, the adversary tries to infer secrets. 
Therefore, we only consider probabilities assigned to sensitive-mimicking phrases: cast members' names for the movie review dataset and secret ingredients for the recipe dataset. The attack dataset consists of tuples (c, t, label), where c is a context, t is a phrase, and label is true if t is sensitive and false otherwise. To obtain data points labeled false, we replace each sensitive-mimicking phrase t by t', which is another phrase of the same type (cast member name or ingredient) that is not a sensitive-mimicking phrase. For every data point (c, t, true), we have a data point (c, t', false). We report the AUC-ROC of the attack. §.§.§ The Canary Technique (CAN) We adapt the attack proposed by 236216 to the access-control scenario. For each access level, we insert 30 repetitions of a phrase (canary) consisting of seven randomly chosen words into the training set for that access level (the number of repetitions and phrase length were selected arbitrarily). This canary mimics sensitive information for the access level. We report the median attack score across access levels. An attack score of s means that only (1/2)^s of phrases of the same length have a higher probability of being generated by the model. A score near one suggests that the model did not memorize the canary. § RESULTS The results with two epochs of training are presented in Table <ref>. In terms of utility, FT-ALL performed the best across both metrics, as expected. Among the secure and access-controlled models, D-I-H achieved the highest utility on the HOPPL metric, while DOMBA achieved the highest utility on the AUPPL metric, with a substantial gap compared to secure models, highlighting the importance of the DOMBA-FT step. Comparing access-controlled models, DOMBA exhibited substantially better utility than PER-AL across both metrics and datasets. Regarding security, non-secure models performed substantially worse, compared to secure models, on all metrics. Among the secure and access-controlled models, SUBMIX obtained the worst values for all metrics and datasets, D-I-M and DOMBA obtained the best values, and D-I-H was slightly worse. Although secure models provide substantially better security compared to non-secure models, they are not perfect. For instance, a perfectly secure model would score 0.5 on SIA and one on CAN. This does not imply that secure models are not actually secure. For example, the values obtained by all secure and access-controlled models for the canary technique metric are considered impractical for extracting useful information <cit.>. Figure <ref> shows the worst-case and 99th percentile exposure of models employing different aggregation methods on the recipe and movie review datasets for 1, 2, and 4 training epochs. The maximum exposure of DOMBA, D-I-H, and D-I-M is 4.69. In comparison, SUBMIX reaches a maximum exposure of 8.5e4 and AGG-A reaches a maximum exposure of 1.3e10. We observe that 's 99th percentile exposure is similar to its maximum exposure, supporting the theoretical bound established by our analysis (Theorem <ref>). Regarding the effect of the nubmer of epochs, increasing it generally leads to higher exposure (except for SUBMIX's 99th percentile exposure on the recipe dataset). However, the increase in exposure for DOMBA, D-I-H, and D-I-M is moderate compared to AGG-A for both datasets, while for SUMBIX, the change in exposure is inconsistent between the two datasets. Figure <ref> illustrates the trade-off between utility and security for different methods across both datasets. 
For most models, as the number of training epochs increases, security tends to worsen while utility improves. However, non-secure models experience a much greater decline in security. DOMBA achieves the best trade-off, providing superior security while maintaining utility levels similar to those of the non-secure models. § DISCUSSION AND CONCLUSIONS Our results show that D-I-H and D-I-M achieve a good utility-security trade-off. This suggests that our method may be suitable for general privacy-preserving purposes beyond access control. In future research, it will be interesting to develop a variation of DOMBA for non-access-controlled private datasets and compare its performance to that of state-of-the-art privacy-preserving methods which are not focused on the access-control scenario. One limitation of DOMBA is that its security cannot be increased further (assuming minimum is used as the aggregation function). This stands in contrast to DP methods, which can achieve any security level (at a cost in utility) by adjusting a privacy parameter. Future research could explore hybrid approaches, combining DOMBA with DP techniques to offer security beyond DOMBA's current maximum level of security. Additionally, DOMBA relies on a strict separation of access levels into two distinct partitions without shared sensitive information. Such separation could be challenging to implement in some scenarios. To make DOMBA more robust to sensitive information shared between access levels, future research could explore the separation of the access levels into more than two partitions. Resource overhead is incurred with DOMBA's deployment due to the use of two LLMs instead of one, which may be impractical for some applications. One potential solution is to employ DOMBA as a teacher model to train a student model via knowledge distillation <cit.>, where the student model serves as a deployed model mimicking DOMBA. § CONCLUSION In this paper, we proposed DOMBA, a novel approach for training and deploying access-controlled LLMs with high utility. We formalized the concept of exposed secrets and bounded DOMBA's exposure. We evaluated DOMBA's performance on two access-controlled datasets, reflecting real-world organizations' needs. Our evaluation showed that DOMBA achieves a better security-utility trade-off than existing methods, across both datasets, two utility metrics and four security metrics. Finally, we believe that the principles of min-bounded aggregation and relative probabilities, which serve as DOMBA's core, have substantial potential to serve as foundational elements in a wide range of future machine learning research, extending beyond the scope of security.
http://arxiv.org/abs/2408.12368v1
20240822130719
A Setup to Study Atomic State Population Dynamics and Optical Polarization at CRYRING@ESR
[ "K. Mohr", "R. Sánchez", "W. Nörtershäuser", "Z. Andelkovic", "V. Hannen", "E. -O. Hanu", "F. Herfurth", "R. Heß", "M. Horst", "P. Imgram", "K. König", "C. Krantz", "M. Lestinsky", "Yu. A. Litvinov", "E. Menz", "P. Müller", "J. Palmes", "S. Rausch", "T. Ratajczyk", "L. Renth", "J. Rossbach", "R. S. Sidhu", "F. Sommer", "J. Spahn", "N. Stallkamp", "K. Ueberholz", "G. Vorobjev", "C. Weinheimer", "D. Zisis" ]
physics.atom-ph
[ "physics.atom-ph", "physics.acc-ph" ]
k.mohr@gsi.de r.sanchez@gsi.de wnoertershaeuser@ikp.tu-darmstadt.de , § ABSTRACT We present a recently established setup for laser spectroscopy at CRYRING@ESR at the GSI Helmholtz Centre for Heavy Ion Research. Here, laser spectroscopy can be performed on stored and cooled ion bunches and coasting beams. First spectra of ^24,25Mg^+ ions are presented that were recorded by classical Doppler-limited fluorescence spectroscopy as well as Λ-spectroscopy using counter- and copropagating laser beams that are Doppler-shifted by several nm. A Setup to Study Atomic State Population Dynamics and Optical Polarization at CRYRING@ESR D. Zisis August 26, 2024 ========================================================================================= § INTRODUCTION Beams of spin-polarized particles, ions, and atoms are of considerable interest in fundamental and applied science <cit.>. Polarized electrons, muons, protons, and deuterons can be produced with high degrees of polarization from thermal to ultrarelativistic energies. The production of polarized heavy ion beams in storage rings has a huge potential for fundamental tests of the standard model <cit.>, e.g. parity nonconservation (PNC) effects in highly-charged helium-like ions and connected with the nuclear anapole moment. Despite its potential and several suggested routes for their production, e.g. in Refs.<cit.>, such a polarized beam has not been achieved so far. Contrary, at low energies of a few 10 and for singly charged ions the interaction with circularly polarized laser beams that are superimposed with the ion beam in a weak longitudinal magnetic guiding field is a well established technique to produce polarized beams. They are applied to investigate the properties of short-lived nuclei <cit.>, to study local magnetic fields in solid-state physics <cit.>, and, most recently, for bio-NMR studies <cit.>. In these applications, the ion beams are polarized by the interaction with a circularly polarized resonant laser beam that causes a redistribution of the population in the magnetic m_J or m_F states and, thus, introducing electronic and nuclear polarization. An observation made at the experimental storage ring ESR at the GSI Helmholtz Centre for Heavy Ion Research has given some indication for a similar behavior <cit.> in a storage ring. This was surprising because a fast depolarization of the laser-induced population difference of the magnetic substates was expected in the large and rapidly varying magnetic fields that the ions experience during a single revolution. To study state populations and polarization degrees of freedom, we have developed a dedicated setup at the CRYRING@ESR storage ring <cit.>. This newly installed storage ring at GSI has the advantage that it can be operated with lowly charged ions from a local ion source independently of the full GSI accelerator chain. As the ion of choice for the first experiments, Mg^+ has been selected because the Li^+ ions used at the ESR, required an ion source that provides ions in the metastable 1s2s ^3S_1 state. Since the production of the metastable state is quite inefficient, only a small fraction of the ion beam can be addressed by laser light. This is different for Mg^+ ions, which can be excited out of the ionic ground state along the 3s_1/2→ 3p_1/2,3/2 transitions and can both be regarded as two-level systems for even-even isotopes like ^24Mg. 
Additionally, the odd isotope ^25Mg with a natural abundance of 10.0 % and a nuclear spin I = 5/2 provides a more complex level scheme as necessary for Λ-spectroscopy. Individual transitions that are addressable at CRYRING for both isotopes are shown in Fig. <ref>. In this contribution, we describe the experimental setup recently established for laser spectroscopy at CRYRING@ESR, with the goal to study optical population transfer and optical pumping, and report on first laser spectroscopic results. § SETUP An exhaustive description of the CRYRING@ESR design has been provided in <cit.>. Here, we will focus only on those parts that are most relevant for laser spectroscopy studies or have been explicitly designed for this task. §.§ Ion Beam Preparation Solid Magnesium is evaporated inside an oven consisting of a small ceramic vessel, heated resistively via a tungsten spiral, and equipped with a tantalum heat-shield. During the first beamtimes a Nielsen-type hot surface ion source (MINIS) <cit.> was used to create Mg^+ ions, which was afterwards replaced by an electron cyclotron resonance ion source (ECRIS). The source is located on a platform designed for bias voltages up to 40, which is followed by a 90 sector magnet for mass separation of ion species. The subsequent radio-frequency quadrupole accelerator (RFQ) can accelerate ions up to an energy of 300 but is designed for charge-to-mass ratios of q/m ≤3.2e/u. Thus, Mg^+ ions can not be accelerated but are only transported through the RFQ and injected into CRYRING@ESR at an energy of 36/q, corresponding to 1.5/u for ^24Mg and 1.44/u for ^25Mg. The RF system of the storage ring is then used to accelerate the ions to the designated energy. The maximum beam energies are 170/u (β=0.019) and 155/u (β=0.0185) for ^24Mg^+ and ^25Mg^+, respectively, limited by the rings maximum magnetic rigidity of B_ρ=1.44Tm. For convenience, the final energy is usually set to 155/u for both isotopes. Bunching and acceleration is usually peformed on the 14^th or a higher harmonic of the RF system, resulting in 14 or more stored bunches, with subsequent rebunching or a transition to an unbunched, coasting beam. §.§ CRYRING@ESR The storage ring consists of 12 straight sections linked by 30 dipole bending magnets as shown in Fig. <ref>. The sections are called 'YRXX'. Even numbers XX refer to magnetic sections providing ion-optical focussing, odd numbered are drift sections dedicated to a specific task related to ring operation, diagnostics or experiments. The beam from the local ion source is injected into YR01, bunched and accelerated through the radio-frequency (RF) system in YR05, and electron-cooled in YR03. The optical detection region and the infrastructure for laser spectroscopy experiments is located in YR07, which is also used for beam extraction to material-science experiments. Further in-ring experiments can be installed in YR09 and beam diagnostic including the Schottky-noise pickup is mounted in YR11. §.§ Electron Cooler Beam cooling can enhance the ion lifetime in the storage ring and is essential to reduce the transversal and longitudinal emittances of the stored beam. This leads to a significant reduction of beam diameter (improved spatial laser-ion beam overlap) and velocity distribution (Doppler broadening). The electron cooler as installed at GSI has been described recently <cit.> and only a brief summary is provided here. 
The adiabatic expansion technique is a peculiarity realized at the CRYRING@ESR electron cooler, which leads to low transversal beam temperatures <cit.>. The electron beam emerges from the electron gun at high-voltage, embedded in a strong magnetic field B_g of a superconducting solenoid. Two toroid-magnets are used to provide 50 bending of the electron beam for superposition with the ion beam and its subsequent extraction after a cooling section of about 0.8m, along which a normal-conducting solenoid provides a homogeneous magnetic flux density B_d along the common ion and electron beam axis in the drift section. The ratio of the magnetic flux density at the electron gun and the drift section determines the beam expansion factor α=B_g/B_d, which relates the transversal electron temperature to the temperature of the electron gun (T_⊥=T_g/α). Typical expansion factors at the CRYRING@ESR electron cooler are 30–100. In coasting beam operation, the electron energy in the electron cooler defines the kinetic energy of the revolving ion beam due to the thermalization between the ions and the cold electrons. Thus, it is of utmost importance for laser spectroscopy to either find a (well known) resonance transition or to extract an observed and unknown transition frequency by taking into account the large Doppler shift of the laser frequency in the ion's rest frame ν_a / c=ν_0 γ (1 ∓β) for a copropagating (collinear) ν_c and counterpropagating (anticollinear) ν_a laser beam, respectively. In terms of the speed of light, the mean electron velocity β – under good cooling conditions equivalent to the ion storage velocity – is determined from the electron acceleration voltage U_eff according to β=√(1-(1+q U_eff/m_e c^2)^-2), with the relativistic time-dilation factor γ=1 / √(1-β^2). It should be noted that the space-charge potential U_sc of the electron beam as well as possible contact potentials U_contact need to be considered in U_eff. Electron beam energies for laser spectroscopy of Mg^+ are about 90. A voltage of U_cooler≲102V is applied to the electron cooler high-voltage (HV) terminal and is monitored using a high-precision HV divider ('G35'). It is a variant of the 35 precision divider designed for the KATRIN experiment <cit.> and, together with a readout of the divider output voltage using a 8.5-digit precision digital multimeter (DMM, type Keysight 3458A), allows to determine the input voltage with an accuracy of <10ppm <cit.>. §.§ Laser System and Beam Transport The rest-frame wavelengths of the 2s_1/2→ 2p_1/2,3/2 transitions in Mg^+ are about 280nm. At β=0.019 the wavelength in the laboratory frame is shifted to 285nm for anticollinear and 275nm for collinear excitation. Both UV wavelengths are produced by second-harmonic generation in a Wavetrain® frequency doubler using beta-barium-borate (BBO) crystals. The fundamental for the anticollinear excitation (570nm) is generated with a single-mode continuous-wave (cw) ring dye-laser (Matisse DS®, Sirah Lasertechnik) operated with a 0.75g/l Rhodamine-6G solution in ethylene glycol. It is pumped by 5–8 of a frequency-doubled cw multimode Nd:YVO_4 laser (Millennia® eV 20, Spectra Physics). The fundamental wavelength for collinear excitation is produced by sum-frequency mixing of the 852nm output of a single-mode cw ring titanium-sapphire (Ti:Sa, Matisse TS®) laser and an ultra-stable fiber laser locked at a wavelength of 1550.12nm in a periodically poled crystal (Mixtrain®). The Ti:Sa is pumped by a second cw Millennia® eV laser with a power of 15. 
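As a quick numerical cross-check of the quoted beam parameters, the following sketch computes the beam velocity from the effective cooler voltage and the resulting Doppler-shifted laboratory wavelengths. The energies are taken in eV and the wavelengths in nm, 93 eV is used as an illustrative cooler energy of "about 90 eV", and 280 nm is taken as a rough rest-frame wavelength of the Mg^+ resonance lines.

```python
import math

M_E_C2 = 510_998.95     # electron rest energy in eV
M_U_C2 = 931.494e6      # atomic mass unit in eV
LAMBDA_0 = 280.0        # approximate Mg+ rest-frame wavelength in nm

def beta_from_voltage(u_eff_volts, charge=1):
    """Electron-beam velocity (in units of c) for an effective acceleration voltage."""
    gamma = 1.0 + charge * u_eff_volts / M_E_C2
    return math.sqrt(1.0 - gamma ** -2)

beta = beta_from_voltage(93.0)                  # illustrative ~90 eV cooler energy
gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
ion_energy_per_u = (gamma - 1.0) * M_U_C2       # ~1.7e5 eV/u, i.e. ~170 keV/u

# Doppler-shifted laboratory wavelengths for the two geometries
lam_col = LAMBDA_0 / (gamma * (1.0 + beta))     # copropagating laser, ~275 nm
lam_acol = LAMBDA_0 / (gamma * (1.0 - beta))    # counterpropagating laser, ~285 nm
print(f"beta = {beta:.4f}, E/u = {ion_energy_per_u/1e3:.0f} keV/u, "
      f"collinear {lam_col:.1f} nm, anticollinear {lam_acol:.1f} nm")
```

These numbers reproduce the values quoted in the text (beta near 0.019, collinear excitation near 275 nm and anticollinear excitation near 285 nm), illustrating why a precise knowledge of the effective cooler voltage is essential for extracting rest-frame transition frequencies.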
The frequencies of both ring lasers are stabilized to a precision wavelength meter (WSU10, High Finesse), and their set-wavelengths can be controlled by the data acquisition system (DAQ). The frequency of the light obtained from mixing with the fiber laser is directly measured with the WSU10, such that the Ti:Sa frequency is adapted to accommodate also drifts of the fiber laser. A laser power controller (LPC) from Brockton Electrooptics is installed between the Mixtrain and the Wavetrain to compensate for laser power fluctuations and to compensate the power fluctuations of the Mixtrain when scanning the Ti:Sa laser frequency. This laser beam is afterwards chopped by an acousto-optical modulator (AOM) operated at a frequency of f_AOM=200MHz to realize pump-and-probe experiments. Both UV-laser beams (275 nm and 285 nm) are independently cleaned using a spatial filter and then collimated using a telescope. Afterwards, both laser beams are sent to CRYRING@ESR using 2-inch optical mirrors. The settings of the optical telescopes are chosen such that the laser beams at target (section YR07) have a diameter of 2.5 mm (FWHM). An active laser beam stabilization (COMPACT®, MRC-Systems) is used for each UV-laser beam to stabilize the laser beam position at the target. §.§ Interaction Region and Optical Detection The optical detection region for the laser spectroscopy setup at CRYRING@ESR is located at section YR07 and mounted inside a CF250 chamber having a total length of 720 (see figure <ref>, left panel). Seven stainless-steel ribs support an elliptical mirror system, made from MIRO®–3 aluminum sheets <cit.> providing a high reflectivity down to UV wavelengths. Since this ring section is also used for slow and fast beam extraction, the end plates of the mirror system are designed with two holes providing sufficient space for the large injected beam before cooling and the extracted beam. The diameters of the holes are 80 and 40, respectively. Several clamping screws are used to fix the support frame inside the vacuum chamber. The mirror system is divided into three identical segments of 190 length, each of them located in front of a CF63 viewport equipped with a UV-transparent sapphire window. Off-centered CF250 to CF100 adapter-flanges on both sides of the vacuum chamber ensure that the focal point of the elliptical mirror system coincides with the propagation axis of the circulating ion beam. Hence, fluorescence photons originating from the ions are collected efficiently (see figure <ref>, right panel). Photons originating from other places have a lower detection efficiency, enhancing the signal-to-noise ratio. The whole setup can be baked at 300 for improved vacuum conditions. Outside the vacuum, each viewport is coupled to a cooled PMT housing (type FACT50, ET Enterprises Ltd.), which can be equipped with 2" photomultiplier tubes (PMTs) for single photon counting. Two sets of PMTs are available, one for the UV region (type 9235QA, ET Enterprises Ltd.) and one for longer wavelengths up to 900 (type 9658A, ET Enterprises Ltd.). PMT signals are amplified and then digitized using a fast amplifier and a constant fraction discriminator (CFD) before being fed into the field programmable gate array (FPGA)-based data acquisition system (see below), which is a modified TILDA <cit.> version tailored for laser spectroscopy at storage rings. §.§ Scraping system and spatial beam overlap In front of the mirror system one of two scraper pairs is installed, a second one 2.925m downstream. 
Each of the hook-shaped copper scraper blades can be driven either in the horizontal or in the vertical direction and provides sufficient space for the uncooled beam during injection when driven to the outer or inner end-point. This allows the beam to be scraped with the 10 mm wide scraper blade from both sides. By simultaneously measuring the stored beam intensity, using the signal amplitude induced in a pickup electrode as a proxy, the ion beam axis can be determined in both horizontal and vertical directions. The scrapers are mounted on linear translators (Allectra, type DN40CF compact linear translator, 100 mm/150 mm travel) with a pitch of 10 mm/rev and driven by 5-phase stepper motors. In combination with an angular resolution of 1.44 per step and a maximum step rate of 1000 Hz, the achievable velocity of the stepper motors is ≈4 mm/s. This is sufficient to determine the ion beam position for beam lifetimes of at least a few seconds. The stepper motors are controlled via power drive cases (PDC), which are integrated into the FAIR (Facility for Antiproton and Ion Research in Europe) control system via a Stepper Motor Control Unit (microIOC). The scraper is moved into the beam from both directions to determine the beam position while observing the stored beam intensity on the pickup. The position of the scraper can be measured in two different ways. A film resistor is used to determine an absolute position. For fast measurements of the relative position, the individual steps of the stepper motor are counted, since the readout of the film resistor is limited to 10 Hz. A typical measurement of the ion beam position is depicted in Fig. <ref>. Here, the amplitude of the pickup signal is plotted against the scraper position (black points). During this measurement the scraper was driven horizontally from the starting position at 42 mm to the end position at 30 mm while counting the steps of the stepper motor. Since the lifetime of the ion beam is finite, the signal amplitude decreases right from the start, while the scraper is still travelling towards the beam's edge without disturbing it. This behavior is described by an exponential function, shown as a dashed line in Fig. <ref>. As soon as the scraper blade interacts with the ion beam, the signal decreases more rapidly. Due to the betatron oscillations, the ion beam is completely lost once the scraper reaches the ion beam center. To extract the ion beam profile from the data, the procedure presented for the scraper measurement at the cryogenic storage ring (CSR) <cit.> was adapted. The blue curve represents a fit of the function I(x_s) = I_0 e^{-(x_s-x_0)/(v_s·τ)} · Erf((x_s-x_ion)/(√(2π)·σ)) for x_s-x_ion ≥ 0, and I(x_s) = 0 for x_s-x_ion < 0, to the recorded data points, where x_0 and v_s are the (known) start position and drive velocity of the scraper, respectively, x_ion is the unknown center of the ion beam, i.e., the point at which the ion current vanishes, and τ is the ion lifetime in the storage ring. Assuming a Gaussian ion beam profile, the intensity distribution can also be modeled as shown in blue in Fig. <ref>. Utilizing the visual control of the laser beam position with respect to the scraper positions makes it possible to superimpose the ion and the laser beam with an estimated maximum displacement of Δx = Δy = 2 mm between both beams in the horizontal and vertical directions. In the worst case, the displacements at the two scraper positions have opposite signs, resulting in a maximum angular misalignment between laser and ion beam of about 2 mrad.
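The fit function above can be summarized in a few lines of Python. The sketch below is illustrative only and is not the analysis code used for Fig. <ref>; the start position, drive velocity, and the synthetic data points are assumed placeholder values.

import numpy as np
from scipy.special import erf
from scipy.optimize import curve_fit

X0 = 42.0     # known scraper start position (mm), placeholder
V_S = -4.0    # known signed drive velocity (mm/s), negative: blade moves inwards

def scraper_model(x_s, i0, x_ion, sigma, tau):
    """Pickup amplitude vs. scraper position: exponential decay from the finite
    beam lifetime times an error-function cut once the blade reaches the beam,
    and zero after the blade has passed the beam center."""
    decay = i0 * np.exp(-(x_s - X0) / (V_S * tau))
    cut = erf((x_s - x_ion) / (np.sqrt(2.0 * np.pi) * sigma))
    return np.where(x_s - x_ion >= 0.0, decay * cut, 0.0)

# synthetic "measured" scan: scraper positions (mm) and pickup amplitudes (arb. units)
x_data = np.linspace(42.0, 30.0, 60)
y_data = scraper_model(x_data, 1.0, 35.0, 0.8, 20.0) + np.random.normal(0.0, 0.01, x_data.size)

popt, _ = curve_fit(scraper_model, x_data, y_data, p0=(1.0, 35.5, 1.0, 15.0))
i0, x_ion, sigma, tau = popt
print(f"beam center x_ion = {x_ion:.2f} mm, width sigma = {sigma:.2f} mm, lifetime tau = {tau:.1f} s")

The np.where term implements the piecewise definition: pure lifetime decay modulated by the error function as long as x_s ≥ x_ion, and zero once the blade has crossed the beam center.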
§.§ Data Acquisition Laser spectroscopy at a storage ring requires the reliable combination of laser and accelerator control and readout with high timing stability. The data acquisition (DAQ) system must record all relevant parameters of the CRYRING@ESR operation, control the laser frequency, and collect information on the photomultiplier events. The DAQ needs to be synchronized to the operation of the storage ring. Therefore, the triggers that control the individual steps of the ion beam preparation, which are * ion creation and injection, * acceleration up to the designated energy, * electron cooling * measurement gate (with continuous electron cooling), * extraction of the beam, are fed into the DAQ. The measurement gate defines the fraction of the storage ring cycle in which laser spectroscopy can be performed. It starts when ions have reached their equilibrium temperature through electron cooling and lasts as long as a sufficient number of ions is stored in the ring. We note here that in other parts of the ion beam preparation cycle, laser spectroscopy can serve as a tool to obtain information about the ion dynamics in the storage ring, but this is beyond the scope of this contribution. A brief description of the implementation of the most important parameters to perform and analyze laser spectroscopy is provided in the next paragraphs. Light (Z ≳ 3) singly charged slow ions have short lifetimes of a few seconds in CRYRING@ESR, and the corresponding fraction of the complete accelerator cycle for laser spectroscopy is relatively small. To improve the duty cycle, accelerator cycles of 7–10s are typically applied of which 1–3s are used for spectroscopy. The laser frequency is typically changed once the extraction trigger is received to record a spectrum. This provides sufficient time to set the next laser frequency before the next prepared ion beam is delivered by the ring. The whole storage time is used to determine the fluorescence signal at the set laser frequency. Photomultiplier signals are processed in the FPGA (NI PXI-7852R) to obtain the count rate and the photon's arrival time. All arrival times are recorded relative to a fixed phase of the ion revolution. Therefore, the discriminated bunching frequency obtained from the ring RF system is divided by the harmonic number and used as the trigger input. The bunch-pickup signal mentioned above provides ion-current information and is used to normalize the fluorescence signal (see below). The bunch-pickup system consists of four conductive plates in YR11, also used for Schottky analysis of coasting, highly-charged ion beams. Induced voltages are fast-amplified and subsequently Fourier-transformed by a real-time spectrum analyzer (RSA, type Tektronix N9020B MXA Signal Analyzer). For bunched beam operation, the dominating frequency component is given by the bunching frequency, i.e., the revolution frequency (for Mg^+ under the usual conditions about 100) times the number of bunches. The signal of this dominant frequency component, taken from the analog output of the RSA, is fed into a voltage-to-frequency converter with a conversion ratio of 1MHz/V. The resulting frequency is recorded by the real-time DAQ to provide a relative ion current measurement. The platform voltage and the electron current at the electron cooler are continuously published via the open-source event streaming system Apache Kafka®, which is integrated into the CRYRING@ESR data acquisition. 
From here, the information can be read from any device and stored for analysis of the electron velocity β according to Eq. (<ref>). § FIRST RESULTS We present spectra of Mg^+ ions obtained at CRYRING@ESR with the above-described setup under several experimental conditions. §.§ Doppler-limited Spectroscopy Figure <ref> shows resonance signals of a bunched (a) and a coasting beam (b). The laser was scanned across the resonance position, and the photomultiplier signals were recorded using the laser frequency. Due to the short lifetime of the ions in the ring of only a few seconds, the spectra are taken with a single fixed laser frequency for each ring filling. The laser is tuned to the next frequency, while the ions of the next injection are prepared and cooled. The photon's arrival time related to an arbitrary but fixed phase of the revolution frequency was also recorded. The color-coded signal intensity in the upper part of the figure shows the result, where the x-axis is the laser frequency while the y-axis represents the photon arrival time. The 18^th harmonic of the revolution frequency was applied at the RF-cavity. Three out of the 18 revolving bunches are visible in the fluorescence spectrum. The lower frame shows the extracted resonance signal before (red) and after (black) normalization. The red one is obtained by simply projecting the number of counts to the x-axis. However, the intervals in which ions are not present in the detection region were disregarded to reduce the laser-induced background. The obtained signal exhibits a clear asymmetry with a falling baseline. It turned out that this is largely caused by laser intensity fluctuations and shot-to-shot ion intensity variations while recording the spectra. Normalization to the laser intensity was performed by integrating the dark counts in the periods where no ion bunch is present at the optical detection, i.e., those regions previously not included in the summation. The ion beam intensity is obtained from the bunch pickup signal, as described in Sect. <ref>, and can also be used in the normalization procedure (not applied here). The normalization leads to a reasonable Gaussian-like structure as shown by the black trace in Fig. <ref>a. Lineshapes with bunched beams from different beamtimes are presented in Fig. <ref>. They are well described by Gaussian lineshapes with decreasing linewidth (FWHM): In the first beamtime in 2019 (∘) the electron cooler was not operated (7.3GHz). In 2020 (+) it was not fully optimized (4.46GHz) and, finally, in 2021 (□) optimal conditions were achieved (1.11GHz), providing a seven-fold improvement in linewidth compared to operation without cooling. As bunching introduces some heating due to driven synchrotron oscillations, the limit is reached if the heating rate equals the cooling rate. It can, therefore, be expected that the beam temperature can be further decreased if the bunching is turned off, i.e., coasting beam operation is used. A spectrum taken with a coasting beam (December 2023) is shown in Fig. <ref>b. No bunches are observed in this case. Ion arrival and fluorescence light are equally distributed over the revolution time. The lineshape has a tail to lower frequencies, which correspond to higher velocities of the ions in counterpropagating geometry. Without RF-bunching, ion-current normalization is not possible due to the absence of a strong coherent pickup signal. 
The Schottky noise of the coasting beam is undetectable with our present pickup system at the low charge state and revolution frequency of the Mg^+ beam. Laser intensity normalization using intrinsic information is also not possible since periods without ion beam in the detector region are not available. The origin of the tail is not yet fully understood but there are indications that it might be related to a not fully converged cooling process when the spectroscopy gate was started. We also note that the linewidth of 3.71GHz is larger than the one observed in bunched-beam mode in the 2021 beamtime (1.11GHz) and rather comparable to the linewidth of 4.46GHz observed in 2020. This is in conflict with the expectation of a lower linewidth for a coasting beam under identical cooling conditions and indicates that the electron cooling was not fully optimized. Data analysis and simulations of these spectra are currently performed. §.§ Sub-Doppler Λ-Spectroscopy Finally, we present a first spectrum of ^25Mg^+ taken with Λ-spectroscopy in the 3s_1/2→ 3p_1/2 transition. The more intense copropagating laser is fixed to the F=2 → F^'=3 transition close to the maximum of the Doppler-broadened resonance, while the frequency of the weaker counterpropagating laser is scanned across the transitions starting on the F=3 hyperfine level. The principle of Λ-spectroscopy is as follows: as long as both laser beams are operating on different velocity classes of the revolving ions, the respective ions will be quickly pumped into the other (dark) hyperfine level of the electronic ground state from where no excitation can occur anymore. A repeated transfer between the two ground-state hyperfine levels and, thus, enhanced fluorescence becomes possible only when the probe laser frequency simultaneously addresses those ions that are pumped by the second laser. While the spectrum of the even isotope in Fig. <ref>b represents the total Doppler width of the ion velocity distribution, Λ-spectroscopy provides, in principle, spectra that exhibit a resolution given by twice the (homogeneously broadened) natural linewidth of the transition. The factor of two arises from the fact that the pumped fraction represents a Lorentzian distribution in velocity space with width corresponding to the natural linewidth, which is then convoluted with the probing line profile, which has approximately the same width and shape. The natural linewidth of the D1 line of Mg^+ is 41.3MHz, which might be slightly power broadened under our experimental conditions. The lineshape in Fig. <ref> is well represented by a Lorentzian shape with a FWHM of 213, which is slightly more than twice the expected value. Additional broadening can be caused by velocity changing collisions, e.g., in the cooler, as it was also observed in the Λ-spectroscopy of Li^+ ions in the ESR, where the natural linewidth was of the order of 5 but an experimental linewidth of about 100 <cit.> was observed. Three peaks are visible in the spectrum even though the 3p_1/2 level has only two hyperfine states (F^'=2,3). It turns out that the distances from the central peak to both, the left and the right peak, are after correction for the relativistic Doppler-Shift in reasonable agreement with the well-known hyperfine structure splitting in the 3p_1/2 level Δν_3p-hfs=308.61(24)MHz <cit.>. The third peak's appearance is due to the relatively broad velocity distribution that covers both hyperfine states of the 3p_1/2 level within the Doppler width. 
Therefore, both lasers are in resonance with the upper (F^'=2) state in one velocity class and the lower (F^'=3) state in another. This is visualized in the lower part of Fig. <ref>. There are three possibilities for cooperation between the velocity groups. The left peak in Fig. <ref> corresponds to the case where the fast-ion fraction of the anticollinear probe laser excites the F=3 → F^'=3 transition while the same (fast) ions interact simultaneously with the collinear pump laser on the F=2 → F^'=3 transition. Hence, they share the same upper state to pump the population between the two ground-state hyperfine levels. If the frequency of the probe laser is now increased by Δν_3p-hfs, the fast fraction of the probed ions coincides with the slow-ion fraction of the pump laser, while simultaneously, the slow fraction of the probe laser coincides with the fast-ion fraction driven by the pump laser. At this point, both transitions and both velocity groups contribute to the fluorescence, which explains the large intensity of the central peak. Increasing the anticollinear probe laser frequency again by Δν_3p-hfs, the F=3 → F^'=2 transition of the slow-ion fraction is addressed while the same ions are pumped back by the collinear laser on the F=2 → F^'=2 transition. The situation somewhat resembles the crossover resonances in standard saturation spectroscopy but is nevertheless different, since the resonances do not appear halfway between the "real" saturation signals but are separated by the full hyperfine splitting. § CONCLUSION We have established a setup to perform collinear laser spectroscopy at CRYRING@ESR. Collinear and anticollinear excitation was performed, and simultaneous operation in both directions has been demonstrated using Λ-spectroscopy. Lineshapes have been studied with and without electron cooling and for both modes of operation, i.e., bunched-beam and coasting-beam operation. Linewidths between 1 and 7 GHz have been observed. They were further reduced to about 200 MHz with the application of Λ-spectroscopy. With these installations, we have established laser spectroscopy as a versatile tool at CRYRING@ESR and will use it, e.g., to investigate the application of optical pumping in a magnetic storage ring. § ACKNOWLEDGEMENT The research presented here is a result of an R&D project (experiments E148 and G-22-00058) at CRYRING@ESR in the frame of FAIR Phase-0, supported by the GSI Helmholtz Centre for Heavy Ion Research in Darmstadt (Germany) and the German Ministry for Education and Research (BMBF) under contracts 05P21RDFA1 and 05P21PMFA1. § REFERENCES
http://arxiv.org/abs/2408.12539v1
20240822165318
LOUD: Synthesizing Strongest and Weakest Specifications
[ "Kanghee Park", "Xuanyu Peng", "Loris D'Antoni" ]
cs.PL
[ "cs.PL" ]
§ ABSTRACT Specifications allow us to formally state and understand what programs are intended to do. To help one extract useful properties from code, <cit.> recently proposed a framework that given a quantifier-free query posed about a set of function definitions, and a domain-specific language in which each extracted property is to be expressed (we call properties in the language ), synthesizes a set {_1, … , _n} of such that each of the _i is a strongest for : _i is an over-approximation of and there is no other that over-approximates and is strictly more precise than _i. The framework by <cit.> has two key limitations. First, it only supports quantifier-free query formulas and thus cannot synthesize specifications for queries involving nondeterminism, concurrency, etc. Second, it can only compute , i.e., over-approximations of the program behavior. This paper addresses these two limitations and presents a framework, , for synthesizing strongest and weakest (i.e., under-approximations of the query ) for function definitions that can involve existential quantifiers. We implemented a solver, , for problems expressed in which can be used to describe and identify sources of bugs in both deterministic and nondeterministic programs, extract properties from concurrent programs, and synthesize winning strategies in two-player games. § INTRODUCTION The idea of synthesizing specifications has found applications in many domains, such as generating code documentation <cit.> and finding sources of bugs <cit.>. However, most specification-synthesis approaches are domain-specific or, to achieve generality, are data-driven and based on dynamic testing—i.e., the tools that produce them yield specifications that though correct on the observed test cases might be unsound in general, as the actual code in the program is not taken into account. To thread the needle between expressiveness and soundness, <cit.> introduced , a parametric framework for synthesizing specifications that are provably sound and precise. In , the problem of synthesizing a specification is phrased as a logical problem. Given a quantifier-free query posed about a set of function definitions, and a domain-specific language in which each extracted property is to be expressed (we call properties in the language ), the goal is to synthesize a set {_1, … , _n} of logically incomparable such that each _i is an over-approximation of and is a strongest in —i.e., there is no other that over-approximates and that strictly implies _i.
For example, for a query := = () describing a list-reverse function, and a language of arithmetic formulas over variables and their lengths, the properties _1 := () = () and _2 := () ≤ 0 ⇒ = are logically incomparable strongest that over-approximate . 's key properties are its expressiveness (the DSL of properties is given as an input), soundness (the synthesized specifications are sound for all inputs to the function definitions), and precision (no valid property in the DSL is stronger than the synthesized ones). The expressiveness of the framework is shown by the two ways in which the framework is parametric: one can choose through the query what aspect of the program (or set of programs) one wants to synthesize a specification for, and one can choose through the language what set of properties they are interested in. The parametric nature of makes this framework applicable to different domains. For example, has been used for extracting simple refinement types from data-structure transformations <cit.>, synthesizing precise abstractions for numerical abstract domains <cit.>, and extracting algebraic specifications for interfacing software modules <cit.>. Limitations of Existing Specification-Synthesis Frameworks suffers from two key limitations that prevent one from applying it to certain settings. First, is limited to synthesizing over-approximations of the program behavior. While over-approximations can capture program correctness (e.g., Hoare logic <cit.>), reasoning about the actual behaviors a program can exhibit and about the presence of possible bugs (e.g., incorrectness logic <cit.>) requires one to synthesize under-approximated specifications. Second, 's synthesis algorithms are fundamentally limited to quantifier-free function definitions. Without existential quantifiers, 's queries cannot describe nondeterministic programs, interleaving of concurrent programs, and uncertainty. The Framework This paper addresses the two limitations of and presents , a general framework for solving the following problem: Given an existentially quantified query posed about a set of function definitions, find a strongest conjunctive formula expressed in the supplied language that is implied by (i.e., ), and a weakest disjunctive formula expressed in the supplied language that implies (i.e., ). Existentially quantified queries allow to reason about programs involving nondeterminism. For example, consider the dining-philosophers problem where n philosophers concurrently (and nondeterministically) acquire and release contended resources placed on their sides. We can model what combinations of actions and scheduling lead to a deadlock using an existentially quantified query such as ∃s. dl = (s, p_1, …, p_n), where p_i∈{L, R} indicates which resource the philosopher p_i tries to take first and dl denotes that a deadlock has happened; s is the nondeterministic sequence of order in which threads are scheduled (detailed description in <Ref>). Supporting both over- and under-approximate reasoning (i.e., to compute both and ) enables new applications. Let's say we are interested in understanding what resource choices from the philosophers may lead to a deadlock for some possible schedule. When given an appropriate language , in the framework, synthesizing under-approximations (i.e., ) of the query ∃s. dl = (s, p_1, …, p_n) can characterize deadlock can happen when all the philosophers prefer the same direction (e.g., the dl∧p_1 = ⋯ = p_n). 
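To make the shape of such a query concrete, the following minimal Python sketch models the dining-philosophers scenario. It is an illustrative assumption rather than the paper's actual encoding (which is described later): the schedule plays the role of the existentially quantified variable s, the fork preferences p_1, …, p_n are the visible variables, and the Boolean result plays the role of the deadlock flag dl.

def deadlocks(schedule, prefs):
    """schedule: sequence of philosopher indices, the nondeterministic choice s.
    prefs: prefs[i] in {'L', 'R'}, the fork philosopher i tries to take first.
    Returns True if executing the schedule leaves every philosopher holding
    exactly one fork while waiting for the other (a deadlock)."""
    n = len(prefs)
    forks = [None] * n                      # forks[j] = index of the holder, or None
    held = [[] for _ in range(n)]           # forks currently held by each philosopher
    for i in schedule:
        first = i if prefs[i] == 'L' else (i + 1) % n
        second = (i + 1) % n if prefs[i] == 'L' else i
        want = first if first not in held[i] else second
        if forks[want] is None:             # acquire the fork if it is free, otherwise wait
            forks[want] = i
            held[i].append(want)
        if len(held[i]) == 2:               # philosopher eats and releases both forks
            for f in held[i]:
                forks[f] = None
            held[i] = []
    return all(len(h) == 1 for h in held)

print(deadlocks([0, 1, 2, 0, 1, 2], ['L', 'L', 'L']))   # True: same preference, one fork each
print(deadlocks([0, 0, 1, 1, 2, 2], ['L', 'R', 'L']))   # False: every philosopher gets to eat

The first call reproduces the classic deadlock in which all philosophers prefer the same side and each grabs one fork; the second shows a preference assignment and schedule under which everyone eventually eats.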
Thanks to its expressivity, can capture many reasoning capabilities of Hoare logic <cit.> (e.g., computing weakest liberal precondition and strongest postcondition) and incorrectness logic <cit.> (e.g., computing weakest possible precondition and weakest under-approximate postcondition). Synthesis Challenges in To understand why solving synthesis problems expressible in is challenging, it is useful to understand how synthesis problems are solved. <cit.> presented a counterexample-guided synthesis algorithm for solving specification-synthesis problems in the framework. At a high level, the algorithm accumulates positive and negative examples of possible program behaviors with respect to the given query and synthesizes consistent with them. Using a primitive called , the algorithm checks if a candidate property is indeed sound and, if not, it produces a new positive example that the property fails to accept. To find strongest (and not just sound ones), introduces a primitive called , which checks if the current is strongest; if it is not, returns a new that accepts all positive examples, rejects all negative examples, and rejects one more negative example (which is also returned). By alternating calls to these primitives, the algorithm eventually finds a strongest . The first challenge in adapting this algorithm to the framework is identifying ways to also synthesize instead of just . To address this challenge we generalize the and primitives so that each operation has a dual form that can be used to synthesize both and . The second challenge is implementing the primitives in the presence of existential quantifiers. Specifically, proving that an is strongest and proving that an is sound require solving a constraint of the form ∃. ∀h. (, h) (). To perform this check, we integrate a counterexample-guided quantifier instantiation algorithm (CEGQI) that operates in tandem with the overall counterexample-guided synthesis (CEGIS) algorithm. The CEGIS algorithm accumulates examples that approximate the behavior of the query, while the CEGQI algorithm accumulates instances of the quantified variable h that show if an example is positive or negative. As long as one has implementations for the primitives used in our algorithm, the algorithm is sound. When the DSL is finite, the algorithm is also complete. We implement a tool, called , to solve the synthesis problems in the framework. can describe and identify sources of bugs in both deterministic and nondeterministic programs, extract properties from concurrent programs, and synthesize winning strategies in two-player games. Because is built on the top of the program synthesizer <cit.>, it is only sound for programs in which inputs, recursion, and loops are bounded. In the future, this limitation can be lifted by considering more general (though less efficient) program synthesizers <cit.>. Contributions Our work makes the following contributions: * A formal framework, , for the problem of synthesizing strongest and weakest for existentially quantified queries (<ref>). * Algorithms for solving problems using four simple well-defined primitives: , , and (<ref>). * A counterexample-guided quantifier instantiation algorithm for efficiently implementing the primitives and for existentially quantified queries (<ref>). * A tool that we implemented to support our framework, called (<ref>). 
* Multiple instantiations of , showing its capability across a wide range of applications, e.g., reasoning about nondeterministic/concurrent programs and synthesizing game strategies (<ref>). <ref> discusses related work. <ref> concludes. <ref> relates to program logics; <ref> contains further details about the evaluation, and <ref> contains further details about algorithms. § MOTIVATING EXAMPLES In this section, we illustrate how the framework can be used to synthesize useful correctness (<ref>) and incorrectness (<ref>) properties of programs. §.§ Example 1: Synthesizing Possible Program Behaviors Consider a parametric hash function =, with being an integer input and and > 0 being possible parameters. The red variables denote parameters that are typically fixed in a specific implementation of —i.e., can be considered as a family of possible hash functions—whereas the blue variable is the actual input to the function. In our framework, to reason about the behavior of a program, one provides a (potentially complex) logical query they are interested in over- or under-approximating with properties in a fixed language. For example, one may provide the following query, which allows identifying properties of the family of hash functions that hold for any possible choice of input : . = We color in red all free variables, to denote that our goal is to identify properties that capture the relationship between the output and parameters and . §.§.§ Over-approximate Reasoning We start with properties that are consequences (over-approximations) of the query (<ref>). That is, we want formulae (, , ) such that , , . (. = ) ⇒(, , ). As argued by <cit.>, many applications, such as generating type judgments for sensitivity analysis or generating algebraic specifications, require formulae to adhere to a specific syntactic fragment. Thus in our framework, one can provide as input a DSL (containing user-given functions), and we call properties expressible in this DSL . Furthermore, we say an is an if it is a consequence of the given query formula. One goal of our framework is to synthesize a set of incomparable strongest . In our example, the DSL (for over-approximations) is defined by the following grammar: [ := ⊤|||⋯|; := {≤| < | =|≠} |() |(); := 0 |||| -; ] For instance, ≤ is an for query (<ref>), but not strongest, as it is strictly implied by the <. An incomparable set of strongest for query (<ref>) is: [ 0 ≤ < = 0 ⇒ = 0 = ⇒ = 0 = -⇒ = 0 ] We write p ⇒ q instead of p ∨ q for readability. The properties in <Ref> give us insights into the behavior of the function —e.g., that setting the value of to be equal to or - is probably not a good idea as it would result in a function that always returns =0. For our discussion, we focus our attention on the first two properties, which imply that the output falls within 0 ≤ <. A well-designed hash function with a set S as range should be surjective onto the set S, meaning that for every value v in S, there should be inputs that yield v as output. However, because the formulae in <Ref> are over-approximations, we are not guaranteed that all the values in 0 ≤ < are indeed possible outputs of . §.§.§ Under-approximate Reasoning Over-approximation alone cannot capture whether a specific program behavior can occur. Instead, under-approximated reasoning allows us to capture specific values that are actually attainable/reachable when executing the program. 
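The concrete definition of the parametric hash function is elided above; purely for illustration, the sketch below assumes hash(x) = (a·x) mod b with a mathematical modulus (result in [0, b)), an assumption chosen to be consistent with the strongest over-approximations listed earlier. Enumerating outputs for sample parameters shows which values are actually attainable — exactly the reachability information that the under-approximations formalized next are meant to capture.

def hash_fn(x, a, b):
    # Python's % already behaves like a mathematical modulus for b > 0
    return (a * x) % b

def attainable(a, b, xs=range(-50, 50)):
    """Outputs of hash_fn reachable from a sample of inputs: an under-approximation
    of the true image of the function for the given parameters."""
    return sorted({hash_fn(x, a, b) for x in xs})

print(attainable(a=3, b=7))   # gcd(3, 7) = 1: every value 0..6 is reachable
print(attainable(a=6, b=8))   # gcd(6, 8) = 2: only the even residues appear
print(attainable(a=0, b=9))   # a = 0: the output is always 0

The first call illustrates the weakest implicant involving primality of b: when b is prime and a is a non-zero value with -b < a < b, every output in [0, b) is reachable from some input.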
For a formula (, , ) to define a reachability condition (i.e., a behavior that must happen) of from some input , the formula (, , ) must be an implicant (under-approximation) of the query (<ref>), which formally can be stated as follows: , , . (, , ) ⇒∃. = We say an is an if it is an implicant of the given query formula. Another goal of our framework is to synthesize a set of incomparable weakest . In our example, we define the DSL (for under-approximation) using the rules for and as shown in (<ref>), but replace disjunction rules (nonterminal ) with the following conjunction rules: [ := |||⋯| ] For instance, = 0 = 0 is a for query (<ref>), but not a weakest one, as it strictly implies a = 0. A mutually incomparable set of weakest for query (<ref>) is: [ = 0 0 ≤ < = 0 ≤ < - < < ≠ 0 () ] Each formula in <Ref> provides a sufficient condition for reachability of the output —that is, if , and satisfies any formula in <Ref>, then there exists an input such that =. Crucially, the last formula provides a sufficient condition for to be surjective onto ℤ_ = {0, 1, …, - 1}—i.e., for a prime value of and non-zero value of selected from the range - < <, all values of in ℤ_ are attainable from some choice of input . §.§ Example 2: Describing Incorrect Behavior <cit.> recently presented a Hoare-style logic called incorrectness logic for performing under-approximated reasoning. One application of incorrectness logic is proving the presence of bugs in programs. In this section, we demonstrate how our framework facilitates and automates both forward and backward reasoning in the style of incorrectness logic. §.§.§ Forward Reasoning Let's say we are interested in reasoning about the possible behavior of the incorrect modular hash function =, where is the remainder operator (instead of the modulus), which is often misused when implementing a modular operation. Note that the operation a b may yield a negative output when either a or b is negative. A summary of the possible incorrect behaviors can be identified by under-approximating a query similar to the one used earlier—i.e., by summarizing the possible behaviors of the function : . = From the perspective of incorrectness logic, under-approximating (<ref>) corresponds to performing forward reasoning to find results for when no presumption on x is given–i.e., the presumption is . For capturing under-approximations of the query (<ref>), we reuse the DSL from the previous example. A mutually incomparable set of weakest for query (<ref>) is: [ = 0 - < < = - < < - < < ≠ 0 () ] The above output produced by shows that can indeed yield negative values for some choice of parameters and , as evidenced by the occurrence of a state in both the second and last formulae where is negative. In other words, we recognize that some choices of input value can result in incorrect outcomes—i.e., negative numbers—but we don't know which ones. §.§.§ Backward Reasoning We have shown the existence of a bug in the function through forward reasoning. A natural next question is what inputs can lead to incorrect outputs. This question corresponds to the concept of weakest possible precondition <cit.>, for backward reasoning in incorrectness logic. A predicate p is called a possible precondition of predicate q for program s, if every input state satisfying p has a run of the program s that terminates to an end state satisfying q. 
To compute the weakest possible precondition expressible in a DSL, we switch the roles (quantifiers) of input and output in the query and conjoin the function application with the predicate _() := < 0; the formulae (, , ) then under-approximates possible incorrect behaviors, which can be stated as follows: , , . (, , ) ⇒∃. [ = ∧_()] Looking at <Ref>, we can spell out that a weakest possible precondition of _() for is a weakest implicant of the following query: ∃. [ = ∧_()] To capture implicants of the query (<ref>), we define a new DSL _ by substituting every occurrence of in with . An incomparable set of weakest _ for query (<ref>) is: [ 0 < < ∧ - < < 0 ∧() 0 < < ∧ - < < 0 ∧() ] Each formula in <Ref> states sufficient conditions under which produces a negative output—i.e., when either or falls within the interval (-, 0) and the other falls within the interval (0, ). Detailed discussions about how our framework relates to program logics will be provided in the <Ref> and the <Ref>. § STRONGEST CONSEQUENCES AND WEAKEST IMPLICANTS In this section, we describe our framework, which extends the strongest synthesis framework of <cit.> in two key ways: allowing existentially quantified variables, and enabling the synthesis of both the strongest and weakest . We describe what inputs a user of the framework provides, and what they obtain as output. Input 1: Query. The query is a first-order formula of the form , where is a quantifier-free formula . The inclusion of the existentially quantified variables h in the query is a key novelty of this paper: it enables many new applications such as reasoning about nondeterministic programs, and forward and backward reasoning in program logics (<Ref>). We use the symbol h (for hidden) to represent existentially quantified variables and the symbol v (for visible) to represent free variables. In practice, both h and v can be tuples and denote multiple variables. In our motivating examples, queries are given in <Ref>. Input 2: Grammar of . The grammar of the DSL in which the synthesizer is to express properties for the query. Each formula in the DSL is a predicate defined over the free variables v of the query . E.g., DSLs and are defined in <Ref>. Input 3: Semantics of the program and operators. A specification of the semantics of the function symbols that appear in query (e.g., ) and in the DSL (e.g., ). In our implementation, semantic definitions are given as a program in the programming language <cit.>, which is then automatically transformed into first-order formulae. We discuss in <Ref> how works and examine the limitations of this approach. Output: Strongest and weakest . Our goal is to synthesize a set of incomparable strongest and a set of incomparable weakest of query . Ideally, both strongest and weakest would be the formula that is exactly equivalent to , but in general, the DSL might not be expressive enough to do so. As argued by <cit.>, this feature is actually a desired one as it allows for the application of our framework to various use cases, as demonstrated in <ref>. Because in general there might not be an and an that are equivalent to , the goal becomes instead to find that tightly approximate . We use to denote the set of models (over the free variables of ) of a formula . For example in <Ref>, ≥ 0 represents the set of models { (, , ) |≥ 0 }. We say is stronger than ' (or ' is weaker than ) when ⊆', and is strictly stronger than ' (or ' is strictly weaker than ) when ⊂'. 
2 An is a strongest for a query if and only if is a consequence of the query : () := ∀v. [∃h. (v, h) ⇒(v)] is strongest with respect to and : ∃' ∈. (') '⊂ An is a weakest for a query if and only if is an implicant of the query : () := ∀v. [(v) ⇒∃h. (v, h)] is weakest with respect to and : ∃' ∈. (') '⊃ Throughout the paper, we also use the term most-precise to mean strongest for and weakest for . We use () and () to denote the set of all strongest and the set of all best for , respectively. Because may not be closed under conjunction (and disjunction), strongest (and weakest ) may not be semantically unique. In <Ref>, both formulae 0 ≤ and < are strongest of query (<ref>), and neither implies the other. The goal of our framework is to find a semantically strongest conjunction of incomparable strongest and a weakest disjunction of incomparable weakest . 2 A potentially infinite set of Π = {_i} forms a best =⋀_i_i for query if and only if each _i∈Π is a strongest of ; every distinct _i, _j ∈Π are incomparable—i.e., _i∖_j≠∅ and _j∖_i≠∅; the set is semantically minimal—i.e., for every strongest we have ⊆. A potentially infinite set of Π = {_i} forms a best =⋁_i_i for query if and only if each _i∈Π is a weakest of ; every distinct _i, _j ∈Π are incomparable—i.e., _i∖_j≠∅ and _j∖_i≠∅; the set is semantically maximal—i.e., for every weakest we have ⊇. Best and best are not necessarily unique, but they are all logically equivalent. Specifically, a best is equivalent to the conjunction of all possible strongest , and a best is equivalent to the disjunction of all possible weakest . Note that best means more than semantic optimality because a strongest is not necessarily a best ; predicates _1() := ≥ 0 and _2() := ≥ 1 could form a strongest , but it is not a best one because _1 is strictly stronger than _2–i.e., _1 and _2 are comparable. [Semantic Optimality]theorembestset If is a best , then its interpretation coincides with the conjunction of all possible strongest : =⋀_∈(). If is a best , then its interpretation coincides with the disjunction of all possible best : =⋁_∈(). We are now ready to state our problem definition: Given query , the concrete semantics for the function symbols in , and a domain-specific language with its corresponding semantic definition, synthesize a best and for . § COUNTEREXAMPLE-GUIDED INDUCTIVE SPECIFICATION SYNTHESIS In this section, we present algorithms for synthesizing best and best . We follow the example-guided approach proposed by <cit.> that synthesizes strongest . In tandem, we present a dual algorithm to synthesize weakest , and we extend both algorithms to allow existentially quantified query formulas. We first present the primitives necessary to instantiate the synthesis algorithms (<Ref>). Then, we present the algorithms for synthesizing a single or that is incomparable to all the ones synthesized so far (<Ref>). Finally, we present the algorithms for iteratively synthesizing the properties forming an or an (<Ref>). §.§ Synthesis from Positive and Negative Examples The algorithms for synthesizing strongest and weakest maintain two sets of examples: a set of positive examples , which should be accepted by the synthesized predicates, and negative examples , which should be rejected by the synthesized predicates. Given a query := and a model over the free variable v of query , we say that is a positive example if (e, h) holds true for some value of h (i.e., ∈) and a negative example if (e, h) does not hold for all values of h (i.e., ∉). 
[Positive and Negative Examples] Given the query := ∃. =, the model that assigns to the integer 1, to the integer 6, and to the integer 5 is a positive example, because the choice of value x = 1 makes the equation 1 = 651 = 6 5 holds true. For brevity, we represent such example as (1, 6, 5), where it denotes a valuation to the tuple (, , ). The following examples are negative ones because no value of satisfies =: (-1, 1, 3), (3, 1, 3), (3, 2, 6). Because the DSL might not be expressive enough to capture the exact behavior of the query , in general there is no predicate capable of accepting all the positive examples and rejecting all the negative examples. Intuitively, a strongest must accept all positive examples while also excluding as many negative examples as possible. [Examples and ] Consider again the query := ∃. = and the set of strongest { 0 ≤, < , = 0 ⇒ = 0, = ⇒ = 0, = -⇒ = 0 } from <Ref>. While a positive example (, , ) = (1, 6, 5) is accepted by all , the negative example (3, 2, 6) is not rejected by any of them. In fact, the in <Ref> form a best , so (3, 2, 6) must be accepted by every strongest . As illustrated by <Ref> when attempting to synthesize , we can consider positive examples as hard constraints but need to treat negative examples as soft constraints. For , the role of positive and negative examples is inverted. A weakest must reject all negative examples while also accepting as many positive examples as possible. [Examples and ] Consider again the query := ∃. = and the set of weakest { = 0, 0 ≤ < = , 0 ≤ < - < < ≠ 0 () } from <Ref>. While the negative example (, , ) = (3, 2, 6) is rejected by all , the positive example (1, 6, 5) is not accepted by any of them. The in <Ref> form a best , so (1, 6, 5) must be rejected by every weakest . We are now ready to introduce the generalizations of the key operations used by <cit.> to synthesize strongest : (<Ref>), (<Ref>) and (<Ref>). Additionally, we introduce (<Ref>), an operation used alongside and to synthesize weakest . §.§.§ Synthesis from Examples While strongest and weakest can effectively treat some of the examples as soft constraints, as we will show in <Ref>, our synthesis algorithm can find such properties by iteratively calling a synthesis primitive that treats a carefully chosen set of examples as hard constraints. Avoiding soft constraints was one of the key innovations of <cit.> with respect to prior work <cit.>. Given a set of positive examples and a set of negative examples , the procedure (, ) returns an that accepts all the positive examples in and rejects all the negative examples in , if such an exists. If no such exists, then (, ) returns . Given a set of examples E, we write (E) to denote the conjunction ⋀_∈ E() and (E) to denote the conjunction ⋀_∈ E(). The operation (, ) can be expressed as the formula ∃. () ∧(). [] <Ref> showed there can be a negative example that no can reject. With the DSL defined in <Ref>, if = {(1, 6, 5)} and = {(3, 2, 6)}, then (, ) can return the formula (, , ) := <, which is not a consequence of the query := ∃. =. In this case, once more positive examples are added to (which is something our synthesis algorithm automatically takes care of), then will return . For example, if is augmented to the set {(1, 6, 5), (1, 1, 5), (1, -4, 5), (6, 2, 8)}, then (, ) returns . §.§.§ Checking Implication The primitive described in this section allows us to check whether a formula is valid or a valid . 
Given two predicates and ', the primitive (, ') checks whether is an implicant of ' (or dually, whether ' is consequence of ). In logical terms, (, ') checks whether there exists an example that is accepted by but rejected by '; it returns that example if it exists, and otherwise. This check can be expressed as ∃ e. '(e) (e). To check whether a predicate is a consequence of , one can look for a positive example that may be incorrectly rejected by by performing the check (, ). Similarly, to check whether a predicate is an implicant of , one can look for a negative example that is incorrectly accepted by by performing the check (, ). [] Consider again the query := ∃. =. Because the formula (, , ) := < is not a consequence of , the primitive (, ) would return a positive example that is incorrectly rejected by , such as (1, -4, 5). On the other hand, calling (, ') on the formula '(, , ) := ≥ 0 would instead return because the formula ' is indeed a consequence of . Similarly, for the formula (, , ) := <, which is also not a implicant of , running (, ) (where this time the query is the second parameter) would return a negative example that is incorrectly accepted by , such as (-1, 1, 3). On the other hand, running (', ) on the formula '(, , ) := = 0 would instead return because the formula ' is an implicant of . The procedure can implement using a constraint solver. However, the presence of quantifiers in implicants can result in a constraint with alternating quantifiers, making the check computationally harder, and most importantly, outside the capabilities of solvers that do not support nested quantifiers. We discuss a practical procedure for performing this check in <Ref>. §.§.§ Checking Precision Checking precision—i.e., whether an is strongest or whether an is weakest—requires more sophisticated queries than the one described above. Specifically, one cannot simply ask whether there exists a negative example that is accepted by to check whether is a strongest , because, as shown in <Ref>, there might be some negative example that must be accepted by every strongest . In theory, to prove or disprove that an is strongest one needs to check whether there exists an ' that is a consequence of the query (i.e., ' accepts all the positive examples) and strictly stronger than (i.e., ' rejects at least one more negative example while rejecting all the negative examples that were already rejected by ). Such a check would be too expensive as it effectively asks one to synthesize a provably sound . Our algorithm does not require such a powerful primitive, and instead approximates and using a set of positive examples accepted by and a set of negative examples rejected by . By combining implication and precision checks in a counterexample-guided fashion, our algorithm improves the approximation over time and is thus sound. Given an , a set of positive examples accepted by , a set of negative examples rejected by , a query , and a Boolean formula ψ denoting the set from which examples can be drawn, (, , , , ) checks if there exist an ' and an negative example ∉ satisfying such that: ' accepts all the positive examples in ; ' rejects and all the negative examples in , whereas accepts . In our algorithm, the set is used to ensure that the example produced by is not already rejected by best we already synthesized. The above check can be logically stated as follows: (, , , , ) =∃', . () () () '()'() '() The highlighted part of the formula is what changes when checking if the formula is weakest. 
[] Consider again the query := ∃. =, and an (, , ) := ≠, which is not a strongest . (, , {(1, 6, 5)}, {(3, 1, 3)}) can return a strictly stronger '_1(, , ) := < with a negative example _1 = (6, 1, 5). However, because only considers whether the formula is strongest with respect to the examples {(1, 6, 5)}, {(3, 1, 3)} can also alternatively return a property that is not an actual —e.g., '_2(, , ) := < with a negative example _2 = (6, 1, 5). The formula '_2 is not a of as it incorrectly rejects the positive example (1, -4, 5). When computing weakest , we can perform a dual check and ask if there exists an that can accept one more example than the current formula. That is, (, , , , ) checks whether there exist an ' and a positive example ∈ satisfying ψ such that: ' accepts all the positive examples in ; ' accepts and all the positive examples in , whereas rejects . This check can be logically stated as (, , , , ) =∃', . () () () '()'() '() [] Consider again the query := ∃. =, and a (, , ) := = 0 ∧ = 0, which is not a weakest . (, , {(0, 0, 5)}, {(3, 2, 6)}) can return a strictly weaker '_1(, , ) := = 0 with a positive example _1 = (0, 1, 5). However, for the same reasons outlined in <Ref>, the returned property may not be an . The and procedures are effectively solving a synthesis problem—i.e., they are looking for a formula—and implementing them requires a form of example-based synthesis. Again, the presence of quantifiers in the negation of the query for can result in constraint (<ref>) containing alternating quantifiers, thus bringing us outside of the capability of many program synthesizers. We discuss a practical procedure for sidestepping the quantifier-alternation problem and performing this check in <Ref>. §.§ Synthesizing One Strongest and One Weakest We are now ready to describe our main procedures: (Algorithm <ref>) and (Algorithm <ref>). We first recall the description of by <cit.>, the algorithm that synthesizes a strongest that is incomparable with the we already synthesized (the algorithm will be used in <Ref> to synthesize one at a time). We then describe one of the contributions of this paper, i.e., how the algorithm changes for its under-approximated dual . §.§.§ Synthesizing One Strongest Given a query formula and a conjunction of we have already synthesized , the procedure synthesizes a strongest for the query that is incomparable to the already synthesized formulas in . We say an for the query is strongest with respect to if there does not exist an ' for such that ' is strictly stronger than —i.e., the is incomparable to all the in . In each iteration, performs two steps. First, it uses to check whether the current candidate is a consequence of (line <ref>). Second, if the candidate is a consequence of , it uses to check whether is strongest with respect to (line <ref>). The algorithm terminates once a formula passes both checks (line <ref>). If the current candidate is not a consequence of , returns a positive example (line <ref>). The algorithm then adds to the set of positive examples and uses it to a new candidate (lines <ref> and <ref>). If the current candidate is a consequence of but there is an ' that aligns with the current set of positive and negative examples, and , and can reject one more negative example , returns this property ' along with the negative example . The example is then temporarily stored in without immediately updating (line <ref>). 
Updating is delayed because ' may not be a consequence of the query (<Ref>), and in the worst case, there might not exist an that rejects (<Ref>). The example stored in can be safely added to when verifies that returned by is indeed a consequence of the query (line <ref>); at this point we are certain that the example in can be rejected by at least one , as witnessed by .[ Delaying the update of negative examples enables treating all negative examples in as hard constraints. This is a key innovation by <cit.> over prior work <cit.>.] The candidate returned by in line <ref> is only guaranteed to be consistent with the examples, and therefore must be checked again by in line <ref>. If the candidate fails to pass , the algorithm keeps adding more positive examples until either it finds a that rejects all the negative examples in ∪, or in line <ref> fails to find an . In the latter case, concludes that the example in cannot be rejected by any and thus restarts after discarding the example in (line <ref>). For efficiency, whenever is updated in line <ref>, the algorithm stores the current that rejects all the negative examples in in a variable (line <ref>). This way, the algorithm can revert to when and if is discarded in line <ref>. As proved by <cit.>, , once it terminates, returns a strongest for (with respect to ) that accepts all the examples in and rejects all the examples in . The algorithm is also guaranteed to terminate for any finite DSL when every call to primitives , and terminates. §.§.§ Synthesizing One Weakest Given a query formula , a disjunction of we already synthesized , the goal of the procedure is to synthesize a strongest for the query that is incomparable to the already synthesized formulas in . We say an for the query is weakest with respect to if there does not exist an ' for such that ' is strictly weaker than —i.e., the is incomparable to the already synthesized . solves the dual of the problem solved by , and the two algorithms share the same structure. Due to the duality the roles of positive and negative examples are inverted; in line <ref> checks whether is an implicant of , instead of checking that is a consequence of ; and precision is checked by instead of . These changes are highlighted in red in Algorithm <ref>. §.§ Synthesizing a Best and We conclude by briefly recalling how works (as described by <cit.>) and present the dual algorithm . These two algorithms use and to synthesize a best and , respectively. The detailed algorithms are illustrated in <Ref>. The algorithm iteratively synthesizes incomparable strongest . At each iteration, keeps track of the conjunction of synthesized strongest , and calls to synthesize a strongest for with respect to . If returns an that does not reject any example that was not already rejected by , the formula is a best , and thus returns the set of synthesized . If returns an that rejects some example that was not rejected by , needs to further strengthen to a strongest for with respect to examples that might already be rejected by . Without this step the returned may be imprecise for examples that were not considered by because they were outside of . To achieve this further strengthening, makes another call to with the example sets and returned by the previous call to together with and :=, but with :=. Again, because solves the dual problem of the one solved by , the two algorithms share the same structure. uses in a similar manner, but it maintains the disjunction of synthesized weakest instead of conjunction. 
For weakening, it also makes another call to , but := is replaced by :=. Using the argument of <cit.> for , our algorithms satisfy the following soundness and completeness theorems. In particular, <Ref> states that the algorithm is complete if the DSL only contains finitely many properties, which is the practical case we are interested in when performing our evaluation. [Soundness]theoremsoundnessconj If terminates, it returns a best for . If terminates, it returns a best for . [Relative Completeness]theoremfinite-completeness Suppose that either contains finitely many formulas, or the example domain is finite. If , and are decidable on , then and always terminate. If , and are decidable on , then and always terminate. § COUNTEREXAMPLE-GUIDED QUANTIFIER INSTANTIATION (line <ref>) in and (line <ref>) in check for the existence of a new negative example (along with additional constraints). However, when dealing with an existentially quantified query :=, a negative example e must be such that the formula (e, h) is valid for all values of the existentially quantified variable h. Therefore, checking the existence of a negative example e requires solving a formula that has alternating quantifiers. To handle these primitives involving quantifier alternation, we propose a CounterExample-Guided Quantifier Instantiation (CEGQI) algorithm similar to the one by <cit.>, which can implement the primitives that require finding negative counterexamples using only existentially-quantified formulas. §.§.§ Counterexample-Guided Quantifier Instantiation for Weakest We start with the simpler of the two queries, in (line <ref>), which requires solving a formula with alternating quantifiers of the following form (<ref>): ∃. ∀h. (, h) () The CEGQI algorithm for solving <Ref> iteratively builds a set H of possible values for h and finds a value of that is consistent with the finite set of values H. The set H is updated by repeating the following two operations until a solution that holds for all values of h is found. Generating Candidate Negative Example Given formulae , , and a finite set H of values the existentially-quantified variable h can take, (, , H) generates an example ∈ such that the formula (, h) does not hold for all the values h in the set H, if such an example exists. If no such example exists, (, , H) returns . Formally: (, , H)=∃. ⋀_h∈ H(, h) () Checking Candidate Negative Example Given a formula , and a candidate negative example , the function (, ) checks if there exists a value for the existentially quantified variable h such that (, h) holds true (i.e., whether there exists a value of h that makes the example actually positive); it returns that value of h, if one exists, and otherwise. Formally: (, )=∃h. (, h) Counterexample-Guided Quantifier Instantiation The CEGQI algorithm for (<Ref>) iteratively generates candidate negative examples using and checks whether they are actually negative using . Across iterations, it maintains the set of values of h returned by in H, and uses to find an example that behaves well for all the values in H discovered so far—i.e., satisfies (, H) () (line <ref>). If fails to find an example, it means that there is no example satisfying (, H) (), thereby a stronger condition in <Ref> also cannot be satisfied. Therefore, returns (line <ref>)—i.e., there does not exist a valid negative example. If returns an example , the example is tested by to check whether (, h) does not hold for every possible value of h and not only for values discovered so far (line <ref>). 
The algorithm returns the example once it passes the check (line <ref>), but if it fails the check, a new counterexample h returned by is added to the set H, and the algorithm restarts at line <ref>. Note that the set of instances H can be cached and reused across different calls to . 0.44 0.54 §.§.§ Counterexample-Guided Quantifier Instantiation for Strongest The call to in (line <ref>) requires solving a formula that has alternating quantifiers and the following form (<ref>): ∃, '. ∀h. (, h) () () '() '() '() This formula looks more complicated due to the presence of the existential variable '. However, a similar approach to the one presented in <Ref> can also be used to solve <Ref>, by finding a negative example and a formula in tandem. The only change in the CEGQI algorithm for solving (<Ref>) is that is replaced by a new operation, , defined as follows. Given formulae , , , set of examples , , and a finite set H of values the existentially-quantified variable h can take, (, , , , , H) generates an ' and an example satisfying such that the formula (, h) does not hold for all the values h in the set H ' accepts all the positive examples in ; ' rejects and all the negative examples in , whereas accepts , if such an example and formula ' exist. If no such example exists, then (, , , , , H) returns . Stated formally: (, , , , , H)= ∃, '. ⋀_h∈ H(, h) () () '() '() '() Because the variable h only appears in the constraint (e, h), whether e is indeed a negative example can still be tested using  (<ref>). Similar to the <Ref>, if fails to find an example, it means that there is no example satisfying <Ref>, thereby a stronger condition in <Ref> also cannot be satisfied. The example is only returned after it has been tested by to ensure that (, h) does not hold for every possible value of h. §.§.§ Correctness The above observations are summarized as the following soundness theorem. [Soundness of CEGQI]theoremsoundnesscegqi If terminates with an example , the example is a valid solution to the existential quantifier in <Ref>. If terminates with , there is no example that satisfies <Ref>. If terminates with an example and ', the example and ' are valid solution to the existential quantifier in <Ref>. If terminates with , there is no example and ' that satisfy <Ref>. Because <Ref> and <ref> monotonically increases the size of the set H, as long as the domain of one of the variables and h is finite, both algorithms always terminate. [Completeness of CEGQI]theoremfinite-completeness-cegqi Suppose at least one of the domains of the variables or h is finite. If and are decidable for and , then always terminates. If and are decidable for and , then always terminates. Therefore, when the domain is finite, the specification synthesis for an existentially quantified query can be solved using only calls with the quantifier free part of the query. Note that, in the worst case, can enumerate the entire domain of h. As we demonstrate in our evaluation, this exhaustive enumeration (which is common for CEG-style algorithms <cit.>) is practically rare and a small number of examples are usually sufficient to solve the problem. § IMPLEMENTATION We implemented our algorithms for solving synthesis problems in the framework in a tool called . Like  <cit.> the work we improve upon, is implemented in Java, on top of the program synthesizer (v.1.7.6) <cit.>. Following <Ref>, takes the following four inputs: A query for which is to find or . Each variable in should be labeled either as free or existentially quantified. 
The context-free grammar of the DSL in which properties are to be expressed. A piece of code in the programming language that expresses the concrete semantics of the function symbols in and . The bounded space of each variable in the query —i.e., each variable is assigned a range of possible input values. From Grammars to Generators As synthesis needs to be performed over properties in the DSL , the context-free grammar for is automatically translated to a generator. A generator is a construct that allows one to describe a recursively defined search space of programs. In a generator, one is allowed to use holes (denoted with ) to allow the synthesizer to make choices about what terms to output. In our setting, holes are used to select which production is expanded at each node in a recursively defined derivation tree. also uses context-free grammar, which in turn is also translated to a generator, to specify the values each variable in the query can assume. For example, the grammar is translated to a generator called that can produce an integer from -3 to 3 (the notation denotes a 2-bit hole). Thanks to this feature, supports inductive datatypes, e.g. a generator of lists of integers within [-3, 3] can be defined as , where the struct of and the implementation of the constructors and are functions provided by the user as code. Finally, each variable in the query needs to be assigned a nonterminal, which specifies the generator for this variable. Synthesis Primitives in The primitives , in <Ref>, and in <Ref> are implemented as calls to the synthesizer. Typically, a program contains 3 elements: a harness procedure that defines what should be synthesized, holes associated with a corresponding generator, and assertions. The harness procedure is the entry point of the program, and together with the assertion it serves as a specification for what values the holes can assume to form a valid solution to the synthesis problem. Multiple harnesses in one program are also allowed, where the goal of the synthesizer is to find the same assignment to shared holes that make all assertions pass. For example, when encoding , each example is implemented as a harness with assertions indicating that it should be positive or negative. Both in (<Ref>) and in (<Ref>) are implemented using the CEGQI approach described in <Ref> (<Ref>, respectively). These algorithms are implemented as separate procedures where each call to and only has an existential quantifier, and can thus be implemented as a single call to the synthesizer. Bounds allows one to provide bounds for recursion and loops to make synthesis tractable. In , we need to consider two kinds of bounds. First, one has to bound the depth of each recursive generator. Concretely, this bound means that can only support DSLs where the derivation trees have bounded height (allows one to specify the bound for each DSL). As recursive generators are used to produce examples for inductive datatypes—e.g. list—one also has to bound the height of such examples. Second, one has to bound how many times a loop can be unrolled/executed. We will discuss in <Ref> what benchmarks are in theory affected by these bounds. Timeout We use a timeout of 20 minutes, after which returns the current (or ). Although it might not be the strongest (or weakest ), each individual (or ) is a strongest (or a weakest) one. 
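To make the CEGQI-based primitives concrete, the following Python sketch mirrors the negative-example search of the CEGQI loop on a finite toy domain. All names here (cegqi_negative_example, phi, the toy query o = x + h) are our own illustration and not the tool's encoding; in the implementation, the candidate-generation step and the candidate-check step each reduce to a single call to the synthesizer involving only existential quantification, as described above.

# Minimal CEGQI sketch (illustrative only): find an example e such that
# phi(e, h) is false for EVERY value of the existential variable h,
# using only existentially quantified sub-queries.
def cegqi_negative_example(phi, example_domain, hyp_domain):
    H = set()                                   # instantiations of h found so far
    while True:
        # GenCandidate: an e that falsifies phi(e, h) for every h seen so far
        cand = next((e for e in example_domain
                     if all(not phi(e, h) for h in H)), None)
        if cand is None:
            return None                         # no candidate left: no negative example
        # CheckCandidate: does some h make the candidate positive after all?
        witness = next((h for h in hyp_domain if phi(cand, h)), None)
        if witness is None:
            return cand                         # negative for every h: done
        H.add(witness)                          # remember the instantiation and retry

# Toy query Phi(x, o) := exists h in {1..3}. o = x + h; an example is a pair (x, o).
phi = lambda e, h: e[1] == e[0] + h
examples = [(x, o) for x in range(-2, 3) for o in range(-2, 6)]
print(cegqi_negative_example(phi, examples, range(1, 4)))   # e.g. (-2, -2)

Note that only the quantifier-free body phi appears in each step, which is the point of the counterexample-guided instantiation.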
§ EVALUATION We evaluated through five case studies: reasoning about nondeterministic programs (<ref>), incorrectness (<ref>), concurrent programs (<ref>), two-player games (<ref>), and mining specifications (<ref>). These case studies involve one or both underapproximation and existential quantifiers and are therefore not solvable using existing approaches, namely,  <cit.>. For each case study, we describe how we model the problem in , how we collected the benchmarks, and we present an analysis of the running time and effectiveness of and the quality of synthesized and . We ran all experiments on an Apple M1 8-core CPU with 8GB RAM. All results in this section are for the median of three runs (by synthesis time). §.§ Application 1: Reasoning about Nondeterministic Programs In this section, we evaluate 's ability to reason about nondeterministic programs. Nondeterminism can be modeled using existential quantifiers, a key innovation of the framework. Consider a program that takes as input an integer x and then adds a nondeterministically chosen positive number to it. Such a program can be modeled using the query ∃h. o = x + h where h is a positive number. A consequence of this query is the property o≥x, which holds for all possible values of h. An implicant of the same query is the property o = x + 17, which holds when h = 17. From these two example properties (which can be synthesized by when given proper DSLs), we observe that for nondeterministic programs, consequences hold for every possible nondeterministic choice (the demonic perspective of nondeterminism), whereas implicants hold for at least one nondeterministic choice (the angelic perspective). To model a nondeterministic program, we introduce an array of nondeterministic values h into the query as an existentially quantified variable. Whenever the program execution reaches a non-deterministic command (e.g. is called), the command takes the next value of the array h. §.§.§ Benchmark Selection and Quantitative Results We collected 12 benchmarks involving nondeterminism: we created 4 nondeterministic sorting algorithms where the goal is to synthesize properties that characterize when the algorithm works or does not work as intended; we collected the 4 nondeterministic recursive programs for which the goal is to synthesize a polynomial invariant by <cit.> (all other benchmarks by  are deterministic), and we collected 4 SV-COMP <cit.> benchmarks in the bitvector category where nondeterministic values are used to model unknown inputs and parameters that determine control flow in the program (all others SV-COMP benchmarks are deterministic or unsuitable for specification synthesis). synthesizes and for 11/12 benchmarks. For those solved benchmarks, takes less than 400 seconds to synthesize and less than 140 seconds to synthesize . Evaluation details are shown in <Ref>. Because these benchmarks involve existential quantifiers, they require <Ref>, the CEGQI algorithms presented in <Ref>. <Ref> avoided exploring a large portion of the space of values for existentially quantified variables—e.g., for the benchmark 4 each call to <Ref> terminated with at most 13 instances h in H, instead of considering all 3^8 = 6561 nondeterministic instances. When we reuse the instance h generated across all calls to , the total number was only 17. This result inspired us to create a version of our algorithm that caches instances and reuses them across different calls to . Caching and reusing instances results, on average, in a speedup. 
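As a deliberately tiny illustration of this modeling, the Python snippet below threads an explicit choice array h through a program with a single nondeterministic command: each call to the nondeterministic primitive consumes the next entry of h. The function and variable names are ours and the snippet is a sketch, not the tool's encoding; the two assertions spell out the demonic and angelic readings mentioned above.

# Nondeterminism as an explicit choice array h (illustrative sketch).
def run_with_choices(x, h):
    choices = iter(h)
    o = x + next(choices)        # "add a nondeterministically chosen positive number"
    return o

# Consequence (demonic view): o >= x holds for every positive choice.
assert all(run_with_choices(5, [h]) >= 5 for h in range(1, 50))
# Implicant (angelic view): o = x + 17 holds for at least one choice.
assert any(run_with_choices(5, [h]) == 5 + 17 for h in range(1, 50))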
§.§.§ Qualitative Evaluation We discuss each benchmark in detail. Nondeterministic Sorting The l (l∈{3,4}) benchmarks model a program that, given an array [a_1, ⋯, a_l], repeatedly nondeterministically swaps two neighboring elements at most n time and returns whether the final array is sorted (stored in Boolean variable ok). When given a DSL that can describe relations between elements of the array, for 3 computes 7 , including n≥ 1 a_1≤a_3ok, which states that if the first and third elements are already in the right order, we can make the entire array sorted using one swap. also synthesizes 5 . For example, the (n < 3 a_1 > a_2a_2 > a_3) ⇒ok tells us that, if the array is descending, we cannot make it sorted using fewer than 3 swaps. The benchmarks 3 and 4 consider a similar problem to l, but allow swaps of arbitrary elements in the array instead of only neighboring ones. r0.41 Applications 1 to 4. |∃| is the size of the domain of the existentially quantified variables. #P and T(s) are the number of properties and synthesis time for both and (- denotes timeouts). Incorrectness reasoning does not require synthesizing . [.1em] 2c2*[-0.4ex]Problem 1c2*LoC 1c2*|∃| 2c-cons. 2c-impl. 5-8 #P T(s) #P T(s) [.1em] [t]2mm12*[origin=c]90Nondeterminism 28 ∼10^5 1 5.51 1 10.29 28 ∼10^5 1 11.25 2 4.53 28 ∼10^5 1 4.07 2 4.80 31 ∼10^7 - - - - 1 27 ∼10^12 1 0.91 1 0.63 2 29 ∼10^12 2 26.94 1 27.19 4 34 ∼10^12 3 17.34 1 37.10 6 34 ∼10^12 3 161.80 1 131.91 3 33 64 6 9.63 8 7.34 4 33 6561 17 392.00 22 136.56 3 33 ∼10^7 7 9.30 6 7.27 4 33 ∼10^9 24 164.61 19 69.77 1-8 [t]2mm8*[origin=c]90Concurrency 79 ∼10^13 4 11.85 3 6.15 1 82 64 1 0.71 3 0.85 2 86 1024 1 2.08 3 2.06 3 88 4096 1 18.79 5 4.69 1 81 16 4 2.73 4 1.67 2 85 256 4 5.08 4 5.37 3 114 ∼10^6 4 51.83 4 38.31 4 96 ∼10^5 6 145.61 6 81.41 1-8 [t]2mm5*[origin=c]90Game 1 47 8 10 16.13 19 13.99 2 47 32 25 21.32 15 8.54 29 32 2 0.38 2 0.49 2 59 ∼10^22 2 99.23 4 22.19 34 120 10 125.24 10 74.65 1-8 [t]2mm12*[origin=c]90Incorrectness 1 13 256 / / 3 1.10 2 28 64 / / 9 2.15 3 16 2 / / 2 0.41 1 12 256 / / 2 0.42 2 33 4096 / / 7 6.01 3 21 2 / / 1 0.15 1wupo 8 32 / / 2 4.03 2wupo 8 64 / / 3 2.96 1wpp 8 4 / / 2 3.76 2wpp 8 8 / / 5 62.52 87 16 / / 3 15.94 23 1024 / / 1 20.33 [.1em] [.1em] Polynomial invariants <cit.> This category contains 4 benchmarks: , , , and . The DSLs contain polynomials of a certain degree. The  benchmark models a program that nondeterminisitcally adds a subset of the numbers from 1 to n, i.e., s = ∑_k=1^n· k. When given a DSL that can describe quadratic functions over n, produces n^2 + n≥ 2s≥ 0 as both the only and the only . The synthesized formula tells us the summation is not greater than (n^2 + n)/2 and all natural numbers that are not greater than (n^2 + n)/2 can be obtained as the result. Similarly, the benchmarks  and  model ∑_k=1^n · k^2 and ∑_k=1^n · k^3. synthesizes 2n^3 + 3n^2 + n + 2 ≥ 6s≥ 0 and 2n^4 + 2n + 4 > 4s≥ 0 as the only for   and , respectively. Unlike for , not all natural numbers in the range induced by the synthesized can be obtained, so will not yield the same . Instead, for each benchmark synthesizes two that describe particular nondeterministic choices, e.g., {s = 0, s = n^2} for . takes an array A with nondeterministic values as input and computes the number of inverse pairs of the subarray A_s..e using merge sort. should synthesize an like n≤ (e-s)(e-s+1)/2 that explains the maximal number of inverse pairs, but due to the nested recursion/loops, fails to return any properties within the time limit. 
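As a sanity check of the property reported for the subset-sum benchmark discussed above, the following Python brute-force script (our own check for one small value of n, independent of the tool) confirms both directions of n^2 + n ≥ 2s ≥ 0: every reachable s satisfies the bound, and every value within the bound is reachable for some choice array.

from itertools import product

def subset_sum(n, choices):                     # choices plays the role of h
    return sum(k for k, c in zip(range(1, n + 1), choices) if c)

n = 6
reachable = {subset_sum(n, c) for c in product((0, 1), repeat=n)}
assert all(0 <= 2 * s <= n * n + n for s in reachable)       # consequence direction
assert reachable == set(range((n * n + n) // 2 + 1))         # implicant direction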
Benchmarks from SV-COMP <cit.> These 4 benchmarks model programs that add a constant to each input variable repeatedly. E.g., 2 models the program [language=C, tabsize=3, basicstyle= , keywordstyle=, commentstyle=, xleftmargin=0em, escapeinside=“, numbers = none, numbersep = 1pt, ] x = 1; y = 1; while(nondet()) x = x + 2 * nondet(); y = y + 2 * nondet(); The remaining benchmarks differ from 2 in the number of variables, the initial values, and the values that are added. We designed DSLs with basic arithmetic operators including modulo, from which can discover properties about the final values of the variables. For 2, synthesizes {y 2 = 1, (x+y) 2 = 0 } as and {x 2 = 1 y 2 = 1} as . Note that the consists of two properties and only has a single one; however, these properties are equivalent. For the benchmarks, synthesis of takes on average 1.38x longer than synthesis of ; for the former, has to discover the entire property at once. Findings: can synthesize both and , which can be useful for understanding both angelic and demonic nondeterministic program behavior. §.§ Application 2: Incorrectness Reasoning Thanks to the support of both over- and under-approximation, some forms of forward/backward reasoning for both Hoare logic <cit.> and incorrectness logic <cit.> can be captured in the framework. Because there has been a lot of research and tools on precondition/postcondition inference of Hoare triples, we only discuss the relation between the framework and incorrectness reasoning in this subsection, along with an evaluation. A complete formalization of the relation between the framework and Hoare/incorrectness logic can be found in Appendix. <ref>. §.§.§ Relation to Incorrectness Logic An incorrectness triple PsQ consists of a presumption P, a statement s, and a result Q, and it has the following meaning: every final state satisfying Q is reachable by executing program s starting from some state that satisfies presumption P: ∀'. Q(') ⇒∃. [P() ∧s(, ')] Forward Reasoning: Weakest Under-approximate Postcondition Given a program s and a presumption P, the weakest under-approximate postcondition (s, P) is the weakest predicate Q such that the triple PsQ holds. We use (s, P) to denote the weakest under-approximation postcondition expressible as a disjunction of predicates in the DSL . From <Ref>, (s, P) can be obtained by synthesizing weakest for the following query: ∃. P() ∧s(, ') The under-approximated reasoning described in <Ref> effectively computed s ( = [, ](), ⊤) and ( = [, ](), ⊤), respectively. Backward Reasoning: Weakest Possible Precondition Surprisingly, backward predicate transformers for incorrectness logic do not always exist because valid presumptions may not exist. For example, there is no predicate P making the triple P = = -1 true because no values of , and satisfy = -1. To address this shortcoming <cit.> suggests using the weakest possible precondition (s, Q), which is termed by <cit.> as “possible correctness”. Intuitively, (s, Q) captures the set of initial states from which it is possible to execute s and terminate in a state that satisfies Q. Formally, (s, Q) is the weakest P satisfying ∀. P() ⇒ [∃'. Q(') ∧s(, ')] Note that P = (s, Q) does not form neither a Hoare nor an incorrectness triple with the program s and the postcondition Q. As proposed by , we can use P = (s, Q) to compute a new postcondition Q' = (s, P) and obtain a valid incorrectness triple PsQ'. We use (s, Q) to denote the weakest possible precondition expressible as a disjunction of predicates in the DSL . 
From <Ref>, (s, Q) can be obtained by synthesizing weakest of the following query: ∃'. Q(') ∧s(, ') The under-approximated reasoning described in <Ref> effectively computes _ _( = [a, M](), _()). §.§.§ Evaluation on Incorrectness Reasoning We collected a total of 12 benchmarks: 6 benchmarks are simple programs used as illustrative examples in the incorrectness logic paper  <cit.>, and 6 benchmarks are more complicated programs we crafted to illustrate how 's handling of incorrectness reasoning differs from incorrectness logic. Among these benchmarks, 6 are about and the other 6 are about . It takes less than 4 seconds to solve each benchmark from <cit.> and less than 50 seconds to solve each benchmark we crafted. Evaluation details are shown in <Ref>. Analysis of Benchmarks from <cit.> We collected all the 3 triples [P]s[Q] from the examples used in <cit.> where s is a nondeterministic program. However, in these triples, there was no guarantee that Q was the weakest under-approximate postcondition of P, or P was the weakest possible precondition of Q. We used to synthesize (s, P) and (s, Q) (the DSL contained the same primitives appearing in the examples in <cit.>), thus 3+3=6 benchmarks. We examined that each of 3 synthesized (s, P) by was indeed a subset of (s, P), and for 2 cases the two were equal. For the one that is not equal, (s, P) is the set of all perfect squares numbers, whereas (s, P) is the perfect squares numbers lower than a bound (this difference was due to our query limiting the sample space of each variable). The (s, Q) synthesized by are equal to (s, Q). More Complex Benchmarks The 6 more complex benchmarks for which we performed incorrectness are 1-wupo, 1-wpp, 2-wupo, 2-wpp, , and . The benchmarks 1 and 2 model two arithmetic functions x' = (h_0, x,-x) and x' = (h_1 + 1)· x + h_2, where each h_i∈{0, 1} is a nondeterministic value. For both cases, we set a≤ x ≤ b as a precondition P (or a≤ x' ≤ b as a postcondition Q) to synthesize (s, Q) (or (s, P)), and thus get 4 benchmarks in total. To use , we need to mark x as existentially quantified variables when synthesizing (s, P), whereas mark x' as existentially quantified variables when synthesizing (s, P). Given a DSL containing basic arithmetic and comparison operators, synthesizes (s,Q) and (s,P) that are equal to (s,Q) and (s,P). For example, to synthesize (s,Q) for 1, one can construct a query ∃x', h_0.  x' = (h_0, x,-x) a≤x'≤b, and will synthesize the { -b≤x≤ -a, a≤x≤b}. We briefly summarize the findings on other benchmarks. The coin benchmark models the values one can produce using two coins that have co-prime denominations; can identify a lower bound above which all possible values can be produced using these coins. The benchmark models a parametric hash function; can synthesize the condition that possibly causes a hash collision. More details are discussed in <Ref>. §.§ Application 3: Reasoning about Concurrent Programs We show how can be used to reason about bugs in concurrent programs by considering variants of concurrency problems by <cit.> (2 problems related to deadlocks, and 1 to race conditions). Similar to how we model nondeterminism, we introduce an array h to represent the order in which threads are scheduled. In the benchmark, we show how can synthesize conditions under which deadlock can be reached or avoided for the dining-philosophers problem, where N processes arranged in a circle contend N resources that are shared by neighboring processes. 
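To make the backward encoding concrete, here is a small Python brute-force computation (our own toy program, not one of the benchmarks) of a weakest possible precondition by exhaustive enumeration: the program nondeterministically returns x or -x, and the result predicate is Q(x') := 2 ≤ x' ≤ 5.

def step(x, h):                         # s(x, x') with the choice h made explicit
    return x if h else -x

def Q(xp):
    return 2 <= xp <= 5

# wpp(s, Q)(x)  <=>  exists x', h. Q(x') and x' = step(x, h)
wpp = {x for x in range(-10, 11) if any(Q(step(x, h)) for h in (0, 1))}
# The result is a disjunction of two interval predicates, [2, 5] and [-5, -2],
# the same shape as the synthesized weakest implicants discussed in this section.
assert wpp == set(range(2, 6)) | set(range(-5, -1))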
A deadlock happens when no process can access both of their Left and Right resources indefinitely. models this problem with a query ∃h. dl = schedule(o_1, ⋯, o_N, h), where o_i∈{L,R} indicates which resource the process i always takes first; dl denotes that a deadlock has happened. For the case involving three processes/philosophers (N=3), when given a DSL that contained predicates of the form o_i = {L|R}, synthesizes the following , which informally state that deadlock can be prevented by having two of the processes disagree on their fork choice: [ (o_0 = L o_2 = R) ⇒dl (o_2 = L o_1 = R) ⇒dl (o_1 = L o_0 = R) ⇒dl ] For the same N, and a dual DSL , also synthesizes the following , which exactly characterize the two cases in which a deadlock can happen (first two properties) and also capture that there exists an execution that does not lead to a deadlock (last property). [ o_0 = L o_1 = L o_2 = L dl o_0 = R o_1 = R o_2 = R dl dl ] Each of the 4 benchmarks describes a simple resource allocator; synthesizes properties describing the minimum number of resources that must (or may) cause a deadlock. Each of the 3 benchmarks describes two threads; can discover the necessary (or sufficient) ways to place a critical section to prevent race conditions. Details are shown in <Ref>. Finding: can synthesize and that capture various sources of bugs in concurrent programs. §.§ Application 4: Solving Two-Player Games In this section, we show how can even be used to synthesize generalized strategies for solving two-player games. We illustrate the idea using an example by <cit.>, called (for request/grant). The two players take on the roles of client and server, and in each round, the server decides whether to grant (g) or not (g̅) the request for that round, and then the client decides whether to send (r) or not (r̅) a request in that round. To win the game, the server must grant every request in the same or next round. <cit.> show the server player can be in 3 possible states: q_0: no ungranted request q_1: an ungranted request in the last round q_2: ungranted requests 2 or more rounds ago. The server should prevent entering state q_2. One of the winning strategies from the server side is to always grant on both state q_0 and q_1. We denote such a strategy as [q_0] = g [q_1] = g—i.e., [q] = a denotes that strategy chooses action a when in state q. We can find winning strategies by modeling the   game as a query ∃ . w = (, ), where the client's strategy is existentially quantified and (, ) is the game controller that takes the strategy of both players and produces a Boolean value w denoting whether the server wins after playing the game. The generality of the framework allows to solve two-player games using the following queries: Must-win strategy: what strategy can guarantee a win for any strategy (<Ref>)? ∀α, w.  (∃. w = (, )) ⇒(P_must() ⇒w = ) and May-win strategy: what strategy can win for at least one strategy (<Ref>)? ∀α, w. (P_may() w = ) ⇒(∃. w = (, )) By providing a DSL that expresses formulas in the form P_must() ⇒w =, we can extract the must-win strategy in the P_must part of the synthesized formulas. By replacing with we can get the must-lose strategy. For the game, synthesized the following , which tells us that the server will always win if they grant requests in either of states q_0 and q_1—i.e., finds “a set of” winning strategies. 
[ [q_0] = g ⇒w = [q_1] = g ⇒w = ] When provided with the dual DSL also synthesized the following : [ [q_0] = g̅[q_1] = g̅w = w = ] The first states that the server may lose if they do not grant requests at both states q_0 and q_1, whereas the second states that whatever strategy the server uses there exists a strategy of the requester (i.e., the one that never issues requests) that causes the server to win. Other benchmarks We consider a total of 5 benchmarks: (discussed above), 2 (the Nim game) and (a temperature controller), which are adapted from linear reachability games by <cit.>[All other games studied by cannot be modeled in due to the restricted features of languages, such as limited support to floating point numbers.], and 1 and 2, which are games designed by us in which two players manipulate an integer where one player's goal is to keep the integer in a certain range. Because of the implementation bounds discussed in section <ref>, we stipulate that player 1 (typically the player that needs to stay in safe states) wins, after a finite number (we set as 15) of rounds of play. It takes less than 85 seconds to synthesize must/may strategies for each benchmark. Compared to the work by <cit.>, synthesizes not only must but also may strategies, properties on desired strategies instead of a concrete strategy, and general strategies that work for games with parameters (e.g. the initial number of pebbles in 2). Details of DSL design and synthesized properties of benchmarks are provided in <Ref>. Finding: can synthesize general strategies for two-player games. §.§ Application 5: Mining Under-Approximated Specifications The goal of this section is to evaluate 's general capability to mine under-approximated specifications for deterministic programs. <cit.> mined for programs used in the synthesis literature. Their benchmark set consists of 45 programs paired with corresponding grammars that their tool uses to mine . Note that the queries for all the benchmarks do not contain existential quantifiers. We successfully recomputed produced by for all benchmarks. However, we found that 's implementation was on average 2.2x faster (geometric mean) than at computing for deterministic programs. Although and implement the same algorithm for computing for deterministic programs, 's implementation of forces the generated example to be negative (using an assertion), whereas 's implementation generates an example and then checks later if it is negative. The implementation avoids spurious calls to and is therefore faster. We focus on the new capability of to compute . To compute for the deterministic benchmarks used to evaluate , we modified the top-level production of each DSL used in the evaluation to consider conjunctions instead of disjunctions—i.e., if the top-level production was of the form P → AP AP ⋯, we replaced it with P → AP AP ⋯. We denote the grammar before modification as and after modification as . Of the 45 benchmarks used when evaluating , we only consider 35 benchmarks for which the semantics of operations were expressed using . The 10 benchmarks we don't consider are simple ones for which the semantics are expressed directly using SMT formulas instead of ; does not support SMT semantics yet. The benchmarks consist of: programs from the CLIA track of the competition <cit.>, 22 programs manipulating data structures taken from the synthesizer <cit.>, and 6 imperative programs designed by the authors of . 
For the problems and imperative programs, the DSLs can express arithmetic comparison between variables—e.g. for the query o = (x), one would get the -x ≥ x ⇒ x = -o. For problems, the DSLs support common functions over the data structures appearing in each specific benchmark —e.g., length for lists. For the query l_out = (l_in) one would get (l_out) = (l_in) as one of the . could synthesize properties for 35/35 benchmarks, and guaranteed that all of them were best . <Ref> shows the evaluation details of a few selected benchmarks. Overall, could solve each of the benchmarks in under 7 minutes (the largest grammar contained 1.48· 10^13 properties). Next, we analyze the results for each subcategory separately. Benchmarks For the 7 benchmarks, we found that the synthesized exactly coincides with the semantics of the given queries. When we replicated the experiment by <cit.>, we could only synthesize for 4/7 benchmarks (like , failed on , , and ), but for these 4 benchmarks the synthesized properties also coincided with the semantics of the given queries. The result shows that the grammar and of the benchmarks are expressive enough to synthesize exact approximations—i.e., the properties are semantically equivalent to the query. Take o = (x_1, x_2) as an example: the synthesized are {o = x_1 o = x_2, x_2 = x_1 o > x_1 o > x_2}, and the synthesized are {o = x_1 x_2 < o, o = x_2 x_1 < o}, which can be proved to be equivalent. A detailed evaluation of the differences between and is shown in <Ref>. Imperative Program Benchmarks These benchmarks ask to find linear or nonlinear relations to capture the semantics of simple imperative programs. synthesized that exactly capture the program semantics except for 1, for which the DSL can only describe linear relations, but the program semantics can only be captured by a quadratic relation. Benchmarks Many of the synthesized by for this benchmark category were not particularly useful. Specifically, the DSLs are not expressive enough for computing useful , mostly yielding trivial properties involving cases in which the data structure is empty. For example, for the query l_out=(l_in), only synthesized the {l_out = l_in(l) ≤ 1}, which only describes the behavior of the function on lists of length less than 1. While the original DSLs were useful for computing consequences (e.g., (l_in) = (l_out)), their dual versions are too weak to reason about implicants—one would instead need to talk about more specific position information of elements in lists. The same problem holds for other data structures. Finding: can mine and that help understand program behaviors. If the DSL is not expressive enough, cannot produce useful . § RELATED WORK Abstract Interpretation Many static program-analysis and verification techniques represent large program state spaces symbolically as predicates. One of these approaches is known as abstract interpretation <cit.>, and it provides a manageable way to analyze possible states that are reachable during program execution. While the majority of works on abstract interpretation has been focused on over-approximation, it can also be used to describe under-approximations of the program behavior <cit.>. In particular, the best synthesis problem is an instance of strongest-consequence problem <cit.>. Given a formula in logic _1 (with interpretation ·_1), the goal of strongest-consequence problem is to determine the strongest formula ψ that is expressible in a different logic _2 (with interpretation ·_2) such that _1 ⊆ψ_2. 
One existing technique to solve this problem identifies a chain of weaker implicants until one becomes a consequence of  <cit.>, whereas other techniques take the opposite direction, identifying a chain of stronger consequences <cit.>. Our framework, like  <cit.>, differs from existing works in abstract interpretation because it supports a customizable DSL. In contrast, existing methods have certain structure requirements to perform operations on elements within the DSL , such as join <cit.>. The ability to modify the DSL is what makes the framework applicable to many domains. Best -term Synthesis The idea of synthesizing a “best” term from a user-provided DSL was first proposed by <cit.>, where the goal was to synthesize a most-precise abstract transformer for a given abstract domain. <cit.> generalized the idea and introduced the general logical setting required for defining and solving the problem of synthesizing strongest and best . In these work, the “best” term should be sound: it is a valid approximation to the best transformer in <cit.> or the semantics of query in <cit.>, and precise: it is minimal w.r.t. a preorder defined on . The framework takes a step further: it further generalizes the queries to allow existential quantifiers and introduces the problem of synthesizing weakest and best . Logically, the framework subsumes both and the work by . At the algorithmic level, the tools solving the above problems all use two kinds of examples for synthesis, where one is treated as hard constraints to guarantee soundness and the other one is treated as soft constraints to guarantee precision. improved the algorithm by by introducing the idea of freezing examples, thus avoiding the need for a synthesizer with hard and soft constraints. The CEGQI algorithm we present <Ref> is a new approach that is not present in the aforementioned works as none of them supports existential quantifiers in their queries—e.g., is the first tool that can synthesize best for nondeterministic programs. Program Logic Hoare <cit.> and incorrectness logic <cit.> can reason about program properties through preconditions and postconditions. If one treats the DSL as an assertion language, the problems of computing strongest postcondition <cit.> and weakest liberal precondition <cit.> in Hoare logic, and weakest under-approximation postcondition and weakest possible precondition <cit.> in incorrectness logic, can be expressed within the framework (the relationship between the framework and program logics is detailed in <Ref> in the supplementary material). One key distinction between our approach and the one used in automating computing the above operations in program logics is that in the framework, one can specify what DSL they want their properties to be expressed in. In contrast, the properties produced automatically for, e.g., weakest possible preconditions in incorrectness logic, are the results of syntactic rewrites that often result in complex properties with potentially many quantifiers. Invariant inference Many data-driven, CEGIS-style algorithms can infer program invariants—e.g., Elrond <cit.>, abductive inference <cit.>, ICE-learning <cit.>, LoopInvGen <cit.>, Hanoi <cit.>, and Data-Driven CHC Solving <cit.>. Dynamic techniques like Daikon <cit.>, QuickSpec <cit.> and Precis <cit.> can also synthesize invariants through program traces or random tests. 
The framework differs from the above works in three key ways: The language is customizable and is not limited to a set of predefined predicates, and thus the framework can be used in a domain-agnostic way (as showcased by the many applications presented in <Ref>); the framework supports both over-approximated and under-approximated reasoning, and the properties synthesized by are provably sound strongest and sound weakest . Quantifier Elimination Many algorithms <cit.> are built on abductive inference, specifically, approximate quantifier elimination. <cit.> defined overapproximate existential quantifier elimination as a “cover operation”, where the goal is, given a formula ∃ V. ϕ, to find a quantifier-free formula φ such that (∃ V. ϕ) ⇒φ. If φ is restricted to be in a DSL , the cover problem corresponds to synthesizing for queries with an existential quantifier in framework. Some algorithms <cit.> also define underapproximate existential quantifier elimination, which corresponds to synthesis of . framework differs from the above work because it allows custom DSLs that express the target quantifier-free formulas and thus is not restricted to any fixed theory. Under-approximation The framework could potentially be combined with existing compositional under-approximate reasoning techniques, such as incorrectness logic <cit.> or compositional symbolic execution <cit.>. An inherited limitation of syntax-directed under-approximate reasoning is the inability to effectively reason about statements or procedures involving constraints beyond the scope of the theory 𝒯 assumed by the under-approximate reasoning framework. We expect one could synthesize weakest to approximate such constraints into summaries that are expressible in the theory 𝒯 assumed by under-approximation frameworks. § CONCLUSION This paper presented , a general framework for synthesizing over- and under-approximated specifications of both deterministic and nondeterministic programs, thus enabling broad applications—e.g., describing sources of bugs in concurrent code and finding winning strategies in two-player games. The paper also presents general procedures for solving problems using simple synthesis primitives that do not involve complex quantifier alternations. Currently, our tool is implemented on top of the synthesizer, which results in some limitations. First, synthesized formulas are only sound for inputs up to a given bound. Such an issue could be addressed by combining our approach with an off-the-shelf verifier; however, we are not aware of verifiers that can reason about —i.e., under-approximated specifications. Our work provides a motivation for building such verifiers. Second, limits us from exploring applications that involve inputs of unbounded length—e.g., reasoning about infinite traces, LTL formulas, and reactive systems. Our work thus opens an opportunity for the research community: by improving efficiency and providing stronger soundness guarantees for the primitives used to solve problems, researchers can tackle the many applications supported by the framework. ACM-Reference-Format § RELATION TO PROGRAM LOGICS In this section, we discuss how our problem formulation relates to the type of reasoning program logics like Hoare <cit.> and incorrectness logic <cit.> can do with respect preconditions/presumptions and postconditions/results. §.§ Relation to Hoare Logic We start by considering Hoare logic and its ability to reason about correctness properties of programs. 
A Hoare triple PsQ consists of a precondition P, a statement s, and a postcondition Q, and it has the following meaning: if the precondition P holds before executing s and s terminates, then the postcondition Q holds upon termination. In other words, the postcondition over-approximates the set of possible behaviors the program can result in. We assume the semantics of a program s is given by a relation s(, '), which holds true if s on input state can terminate with an output state '. The meaning of the triple PsQ can then be formalized as follows: ∀, '. P() ∧s(, ') ⇒ Q(') Backward Reasoning: Weakest Liberal Precondition Weakest precondition operations can be formalized as predicate transformers that assign a unique (in a sense, most general) precondition P to each program s and postcondition Q. Given a program s and a postcondition Q, the weakest liberal precondition (s, Q) represents the weakest predicate P such that the triple PsQ holds <cit.>. If we view (s, Q) as a backward predicate transformer, it reformulates the problem of verifying the triple PsQ to the problem of checking a first-order formula P ⇒(s, Q).[ Dijkstra's original weakest precondition requires that whenever the precondition P holds before the execution of s, the execution of s is guaranteed to terminate <cit.>. While our over-approximation framework can elegantly capture the notion of weakest liberal precondintion, reasoning about Dijkstra's weakest precondition and total correctness is problematic because semantics encoding presented in this section describe possible end states ' from an initial state , without addressing whether some executions may not terminate. ] The problem of computing such a predicate transformer, and in particular the weakest one expressible as a disjunction of predicates in the DSL , can be phrased in our framework. The need to lift to disjunction of predicates arises because there may exist multiple incomparable predicates in that satisfy the triple PsQ and cannot be further weakened (while still satisfying PsQ). Thus, we define (s, Q) as follows: if P is a weakest predicate in such that PsQ holds true, then it must also hold true that P ⇒(s, Q). Given a program s, a postcondition Q and a DSL , the (s, Q) is the (possibly infinite) disjunction ⋁_i P_i of all predicates P_i ∈ such that P_isQ holds true, and no P ∈ is strictly implied by P_i while PsQ holds true. Note that if the DSL is expressive enough, the (s, Q) will be equivalent to the weakest liberal precondition (s, Q)—i.e., the DSL would be what in Hoare logic is called an expressive enough assertion language. Because is a weakest formula one may be tempted to compute it via weakest . However, this approach would not yield the desired result because an only proves the existence of an execution satisfying the postcondition Q, but it does not prove whether every execution satisfies the postcondition Q. For example, consider a simple DSL that can only express the properties true and false. Given a nondeterministic program = ∗ (i.e., one that non-deterministically assigns an integer to ) and a postcondition Q() := = 0, the property “true” would be an of the query ∃. ( = ∗∧ = 0) because there always exist an execution that results in being zero, while the formula “false” is a valid for the query ∃. = ∗ and the postcondition = 0 because there is no precondition that ensures will be zero after the execution. 
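The following Python brute force spells this example out over a small domain (we call the assigned variable y; the name is our choice, since the discussion above leaves it implicit). For the nondeterministic assignment y := * and the postcondition y = 0, some run always reaches y = 0, so "true" is a valid implicant of the existential query, yet no precondition forces every run to do so, so the weakest liberal precondition is "false".

YS = range(-3, 4)                       # a finite sample of the nondeterministic choices

def prog(x, y_choice):                  # y := *   (the input state is irrelevant)
    return y_choice

def post(y):
    return y == 0

for x in range(-3, 4):
    some_run_ok  = any(post(prog(x, y)) for y in YS)    # implicant / angelic view
    every_run_ok = all(post(prog(x, y)) for y in YS)    # Hoare / wlp view
    assert some_run_ok and not every_run_ok

This gap is exactly why the weakest liberal precondition is recovered through the negated query and negated DSL described next, rather than through weakest implicants of the original query.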
As we will see in <Ref> can be used to compute the weakest possible precondition, a different backward predicate transformer used in reverse Hoare logic and incorrectness logic. We next show that by negating every predicate (precondition, postcondition, and language or properties), we can capture via . First, observe that by appropriately negating the pre- and postcondition, <Ref> can be rewritten as follows: ∀. [∃'. Q(') ∧s(, ')] ⇒ P() Intuitively, <Ref> states that every state ' that violates Q must come from a state that violates P. Thus, we can introduce a DSL for negated formulae = {|∈} and have that any predicate P satisfies conditions and of <Ref> if and only if P is a strongest of the query: ∃'. Q(') ∧s(, ') By combining <Ref> and De Morgan's laws with the above observation, we can prove the equivalence of (s, Q) and a best for the query (<ref>). The (s, Q) is semantically equivalent to the negation of a best of query := ∃'. Q(') ∧s(, '). For example, to compute for the program =, a postcondition = 0 and a DSL defined in <Ref>, we introduce a DSL for negated formulae = {|∈}. Then the problem of computing ( = , = 0) is encoded as the problem of synthesizing a best for the query: ∃. ( = 0) ∧ = The following set of forms a best for the query (<ref>): [ ≠ 0 ≠ 0 ≠ ≠ ] By negating the formulas in <Ref> we get the following disjunction—i.e., the : = 0 = 0 = = Forward Reasoning: Strongest Postcondition Strongest postcondition predicate transformers can be thought of being the dual of the weakest precondition ones. Given a program s and a precondition P, the strongest postcondition (s, P) represents the strongest predicate Q such that the triple PsQ holds <cit.>. If we view (s, Q) as a forward predicate transformer, we can reformulate the problem of verifying the triple PsQ as the problem of checking whether a first-order formula (s, P) ⇒ Q holds. The problem of computing such a predicate transformer, and in particular the strongest one expressible as a conjunction of predicates in the DSL , can be phrased in our framework. Similar to the case of the weakest liberal precondition, we define (s, P) as follows: if Q is a strongest predicate in such that PsQ holds true, then it must also hold true that (s, P) ⇒ Q—i.e., the (s, P) is stronger than any strongest postcondition Q in . Given a program s, a precondition P and a DSL , the (s, P) is the (possibly infinite) conjunction ⋀_i Q_i of all predicates Q_i ∈ such that PsQ_i holds true, and no Q ∈ strictly implies Q_i while PsQ holds true. The problem of obtaining the (s, P) can also be encoded as synthesizing strongest , this time without a need of negating formulas. Observe that relocating the quantifier ∀ into the implicant rewrites <Ref> as follows: ∀x'. [∃. P() ∧s(, ')] ⇒ Q(') From <Ref> we have that a predicate Q ∈ satisfies conditions and of <Ref> if and only if Q is a strongest of the following query: ∃. P() ∧s(, ') This observation yields the following theorem: The (s, P) is semantically equivalent to the best of the query := ∃. P() ∧s(, '). <Ref> illustrated how to obtain a ( = , ⊤) using over-appximated reasoning in our framework. §.§ Relation to Reverse Hoare Logic and Incorrectness Logic Reverse Hoare logic <cit.> and incorrectness logic <cit.> can both be thought of being the dual of Hoare logic—they under-approximate (instead of overapproximate) the set of possible behaviors a program can result in. 
The under-approximating logics are semantically equivalent despite having been designed with different goals in mind: reverse Hoare logic was designed to reason about the correctness of nondeterministic programs, whereas incorrectness logic was designed to identify the presence of bugs in programs. Our evaluation in <Ref> investigates both of these applications. A incorrectness triple PsQ consists of a presumption P, a statement s, and a result Q, and it has the following meaning: every final state satisfying Q is reachable by executing program s starting from some state that satisfies presumption P. In other words, the predicate Q under-approximates the set of possible behaviors the program s can result in when executed on inputs satisfying P: ∀'. Q(') ⇒∃. [P() ∧s(, ')] Forward Reasoning: Weakest Under-approximate Postcondition Weakest postcondition operations can be formalized as predicate transformers that assign a unique precondition Q to each program s and precondition P. Given a program s and a presumption P, the weakest under-approximate postcondition (s, P) represents the weakest predicate Q such that the triple PsQ holds. If we view (s, P) as a forward predicate transformer, we can reformulate the problem of verifying the triple PsQ as the problem of checking whether a first-order formula Q ⇒(s, P) holds. The problem of computing such a predicate transformer, and in particular the weakest one expressible as a disjunction of predicates in the DSL , can be phrased in our framework. We define (s, P) as follows: if Q is a weakest predicate in such that PsQ holds true, then it must also hold true that Q ⇒(s, P). Given a program s, a presumption P and a DSL , the (s, P) is the (possibly infinite) disjunction ⋁ Q_i of all predicates Q_i ∈ such that PsQ_i holds true, and no Q ∈ is strictly implied by Q_i while PsQ holds true. Following <Ref>, the problem of obtaining the (s, P) can be directly encoded as synthesizing weakest . A predicate Q ∈ satisfies conditions and of <Ref> if and only if Q is a weakest of the following query: ∃. P() ∧s(, ') This observation yields the following theorem: The (s, P) is semantically equivalent to the best of the query := ∃. P() ∧s(, '). The under-approximated reasoning described in <Ref> effectively computed s ( = [, ](), ⊤) and ( = [, ](), ⊤), respectively. Backward Reasoning: Weakest Possible Precondition While forward predicate transformers for incorrectness logic behave well—i.e., given a presumption P and a program s, one can always assign the weakest result Q such that PsQ holds true—backward predicate transformers for incorrectness logic do not always exist! This problem arises because valid presumptions may not exist in incorrectness logic. For example, there is no predicate P making the triple P = = -1 true because no values of , and satisfies = -1. To address this shortcoming and still take advantage of some form of backward reasoning in incorrectness logic, <cit.> suggests using the weakest possible precondition (s, Q), which is predicate transformer described by <cit.> for what he referred as “possible correctness”. Intuitively, (s, Q) captures the set of initial states from which it is possible to execute s and terminate in a state that satisfies Q. then proposes to use a two phase approach to derive valid incorrectness triples as follows. 
Starting with a postcondition Q, one first computes the weakest possible precondition P=(s, Q) and then applies forward reasoning and computes the weakest under-approximate postcondition Q' = (s, P) to obtain a valid incorrectness triple PsQ'. Since we already showed how to capture the weakest under-approximate postcondition in our framework, we now show how to capture the weakest possible precondition. A predicate P is called a possible precondition of predicate Q for program s, if every input state satisfying P has a run of the program s that terminates to an end state satisfying Q. That is, (s, Q) is the weakest P satisfying ∀. P() ⇒ [∃'. Q(') ∧s(, ')] Note that P = (s, Q) does not form neither a Hoare nor an incorrectness triple with the program s and the postcondition Q. The postcondition Q is not a valid over-approximation of possible final states because there could be other executions that do not satisfy Q—e.g., if the program is nondeterministic. The postcondition Q is not a valid under-approximation either because there might be states satisfying Q that are not reachable—e.g., if the postcondition Q is and the program s is =, then any negative value of satisfies the postcondition Q, but no values of , and can yield a negative output. As proposed by O'Hearn, we can remedy this issue by computing a new postcondition Q' = (s, P) using the weakest under-approximate postcondition operator to obtain a valid incorrectness triple PsQ' The problem of computing such a predicate transformer, and in particular the weakest one expressible as a disjunction of predicates in the DSL , can be phrased in our framework. Given a program s, a postcondition Q and a DSL , the (s, Q) is the (possibly infinite) disjunction ⋁_i P_i of all predicates P_i ∈ such that <Ref> holds true, and no Q ∈ is strictly implied by Q_i while <Ref> holds true. From <Ref>, the problem of obtaining (s, Q) can be directly encoded as synthesizing weakest . A predicate P ∈ satisfies conditions and of <Ref> if and only if P is a weakest of the following query: ∃'. Q(') ∧s(, ') This observation yields the following theorem: The (s, Q) is semantically equivalent to the best of the query := ∃'. Q(') ∧s(, '). The under-approximated reasoning described in <Ref> effectively computes _ _( = [a, M](), _()). § EVALUATION DETAILS §.§ Application 1: Reasoning about Nondeterministic Programs DSL and synthesis result for We supply the following two DSLs (the one rooted at nonterminal B_ is for , whereas the one rooted at B_ is for ): [ B_ := G ⇒ D B_ := G D; G := ⊤|||⋯|; := {≤| < | =|≠}; := a_1|⋯|a_l|n| [0, l^2]; D := ok|ok; ] The benchmarks 3 and 4 simply differ in whether l = 3 or l = 4 in the above DSL. When given the DSL in <Ref>, for 3 computes the 7 in <Ref>, which helps us understand what arrays may or may not be sorted in at most n swaps. [ a_1≤a_2a_2≤a_3ok n≥ 1 a_1≤a_3ok n≥ 2 a_2≤a_3ok n≥ 2 a_1≤a_2ok; n≥ 3 ok a_2<a_1ok a_3<a_2ok; ] For example, the n≥ 1 a_1≤a_3ok says that if the first and third elements are already in the right order, we have a way to make the entire array well-ordered using one swap. also synthesizes the in <Ref> when the DSL is rooted at B_, which helps us understand what arrays must or must not be sorted in at most n swap. [ (n< 2 a_1>a_3) ⇒ok (n< 1 a_2>a_3) ⇒ok (n< 1 a_1>a_2) ⇒ok; (n< 3 a_1>a_2a_2>a_3) ⇒ok (a_1≤a_2a_2≤a_3) ⇒ok; ] For example, the (n < 3 a_1 > a_2a_2 > a_3) ⇒ok tells us that, if the array is descending, we cannot make it sorted using fewer than 3 swaps. 
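These two properties are easy to cross-check by brute force. The Python script below (our own check, independent of the tool) enumerates all 3-element arrays over a small value range and confirms that one neighboring swap suffices whenever a_1 ≤ a_3, and that one swap is never enough when a_1 > a_3.

from itertools import product

def reachable_sorted(arr, n):
    # Can at most n nondeterministic neighbor swaps sort arr?
    frontier = {tuple(arr)}
    for _ in range(n + 1):
        if any(list(s) == sorted(s) for s in frontier):
            return True
        frontier = {s[:i] + (s[i + 1], s[i]) + s[i + 2:]
                    for s in frontier for i in range(len(s) - 1)}
    return False

for a1, a2, a3 in product(range(4), repeat=3):
    if a1 <= a3:                       # implicant: n >= 1 and a1 <= a3 and ok
        assert reachable_sorted([a1, a2, a3], 1)
    else:                              # consequence: fewer than 2 swaps cannot sort
        assert not reachable_sorted([a1, a2, a3], 1)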
Polynomial invariants <cit.> This category contains 4 benchmarks: , , , and . The DSLs of the 4 benchmarks contain polynomials of a certain degree. The query in the  benchmark, ∃h. s = rsum(n, h), models a program that nondeterminisitcally adds up some number from 1 to n, i.e., computes s = ∑_k=1^n· k. Here the existentially quantified variable h is an array of nondeterministic choices. Each time is called, it takes the next value of h as its return value. When given a DSL that can describe quadratic functions over n, produces n^2 + n≥ 2s≥ 0 as both the only and the only . The synthesized formula tells us the summation is not greater than (n^2 + n)/2 and all natural numbers that are not greater than (n^2 + n)/2 can be obtained as the result. Similarly, the benchmarks  and  model ∑_k=1^n · k^2 and ∑_k=1^n · k^3. synthesizes 2n^3 + 3n^2 + n + 2 ≥ 6s≥ 0 and 2n^4 + 2n + 4 > 4s≥ 0 as the only for   and , respectively. Unlike for , not all natural numbers in the range induced by the synthesized can be obtained, so will not yield the same . Instead, for each benchmark synthesizes two that describe particular nondeterministic choices, e.g., {s = 0, s = n^2} for . The query for the  benchmark, ∃A. n = mergesort(A,s, e), models a program that takes an array A with nondeterministic values as input and computes the number of inverse pairs of subarray A_s..e through via merge sort. should synthesize an like n≤ (e-s)(e-s+1)/2 that explains the maximal number of inverse pairs, but due to the nested recursion/loops, fails to return any properties within the time limit. Benchmarks from SV-COMP <cit.> These 4 benchmarks model programs that add a fixed constant to each input variable repeatedly. For example, 2 models The rest of 2 benchmarks differ from 2 in the number of variables, the initial values, and the values that are added. We designed DSLs with basic arithmetic operators including modulo, from which can discover properties about the final values of the variables. For 2, synthesizes {y 2 = 1, (x+y) 2 = 0 } as and {x 2 = 1 y 2 = 1} as . Note that the consists of two properties and only has a single one; however, these properties are equivalent. For the benchmarks, synthesis of takes on average 1.38x longer than synthesis of ; for the former, has to discover the entire property at once. §.§ Application 2: Incorrectness Reasoning Expressible amounts in two kinds of coins The benchmarks non-deterministically model the possible dollar amounts that can be represented using coins of two values. <cit.> provides syntax-direct rules to find the weakest under-approximate postcondition and the weakest possible precondition, but this approach has to explicitly unroll loops and introduce existential quantifiers to deal with assignments and nondeterminism, thus resulting in predicates that might be hard to dispatch to a constraint solver. Using , one can instead use the DSL to customize what properties they are interested in obtaining in the and . For example, the following function coin takes two integers a and b that are co-prime (shown in presumes part) as input, then nondeterministically chooses two non-negative integers x and y, and finally returns ax + by. The more intuitive interpretation of the program is that it represents all amounts that can be expressed using only coins of value a and b. 
[language=C, tabsize=3, basicstyle= , keywordstyle=, commentstyle=, xleftmargin=0em, escapeinside=“, numbers = left, numbersep = 1pt, ] int coin(int a, int b) /* presumes: [gcd(a,b)==1] achieve1: [gcd(a,b)==1 / exists x>=0, y>=0. r==a*x+b*y] achieve2: [gcd(a,b)==1 / r==a] achieve3: [gcd(a,b)==1 / r>a*b-a-b] */ int x = nondet(); assume(x >= 0); int y = nondet(); assume(y >= 0); return a * x + b * y; The predicates achieve1, achieve2, and achieve3 are all valid under-approximation postconditions. achieve1 is the one obtained using the derivation rules by <cit.>: It is precise but has an existential quantifier and multiplication, which make it hard to check in later reasoning. achieve2 could be obtained by a dynamic symbolic execution approach <cit.> that concretizes x = 1 and y = 0; this postcondition is valid but less precise than achieve1. The flexibility of allows us to modify the DSL and not be tied to any specific rule derivation technique. Consider for example a situation in which we are not interested in the actual relation captured by the program, but are just interested in identifying a lower bound (or upper bound) above (or below) which all program outcomes can be effectively produced—i.e., an under-approximation of the output range of the function. For example, one may care about after some nondeterministic perturbation of the initial state, which states within a certain distance could all be possible results. In terms of coin, one can look for such a lower bound using , by supplying the following DSL that has the bounding predicate r > N_0 at the top level and such that nonterminal N_0 can derive a quadratic expression containing a and b: [ C := r > N_0; N_0 := N_1 | N_1 + N_1 | N_1 - N_1; N_1 := | + | -; := a|b|ab|a^2 |b^2 | 1 | 0 ] For the DSL in <Ref>, will synthesize the achieve3, i.e., (a,b) = 1 r > ab-a-b, which states that all numbers greater than ab - a - b can be produced by coin assuming gcd(a, b) = 1. Compared to achieve1 and achieve2, achieve3 guarantees a certain degree of precision and meanwhile meets the needs to obtain a lower bound. Conditions lead to a hash collision The benchmark models the condition of hash collision after applying a parametric hash function to a set of integers. The parametric hash function is defined as f[a] (x) = ax M, where a∈{1,⋯, M-1} is the parameter we can instantiate the function with. We also have a set of integers S to which we want to apply the hash function. To identify what kind of set S will possibly lead to a hash collision, we compute the (S_o = map(f[a], S), size(S) > size(S_o)). The following DSL is intended to capture the relation between S and M in the : [ D := | AP | AP AP | AP AP AP; AP := isPrime(M) |isPrime(M) | N {≤| < | = |≠} N; N := size(S) |modsize(S, M) |M| 0 | 1; ] The function modsize(S, M) computes the size of the set obtained by taking the M-modulus for each number in S. Using the DSL from <Ref>, synthesizes the following : [ size(S) > modsize(S, M) size(S) ≥M(M) ] The first is a valid possible precondition since it implies that there are at least two integers in S that are congruent w.r.t. M, which will be hashed to the same value. The second is also a valid one as twofold: if the size of S is larger than M, there exists two integers in S are congruent and thus collide. if the size of S is equal to M, and meanwhile M is not prime, there always exists a bad parameter a that is not coprime with M and therefore can cause collision. 
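As a quick sanity check on these two preconditions, the following Python sketch (our own illustration, assuming the hash parameter a ranges over {1, …, M-1} and that a collision means the image of S under f[a] is strictly smaller than S) tests both of them on random instances.

```python
import random

def modsize(S, M):
    # size of the set obtained by taking each element of S modulo M
    return len({x % M for x in S})

def collision_possible(S, M):
    # "may" reading: some parameter a in {1, ..., M-1} maps two elements of S to the same hash
    return any(len({(a * x) % M for x in S}) < len(S) for a in range(1, M))

random.seed(1)
for _ in range(500):
    M = random.randint(2, 30)
    S = set(random.sample(range(200), random.randint(1, 12)))
    is_prime = M > 1 and all(M % d for d in range(2, int(M ** 0.5) + 1))
    if len(S) > modsize(S, M):             # first synthesized precondition
        assert collision_possible(S, M)
    if len(S) >= M and not is_prime:       # second synthesized precondition
        assert collision_possible(S, M)
```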
By following the syntax-directed rules proposed by <cit.> to compute the weakest possible precondition, we would get the predicates ∃a. size(S) = size(map(f[a], S)), which still contains a quantifier and is effectively the same as the original query we were asking—i.e., it does not help us understand the program behavior. Furthermore, if we are interested in what set S will possibly not lead to a hash collision, we could set the postcondition as size(S) = size(S_o), and will synthesize the in same DSL from <Ref> as follows: [ size(S) = modsize(S, M) ] which means no two integers in S are congruent w.r.t M. This is a valid possible precondition since when it is satisfied, there is always some parameter a (e.g., 1) such that (a, M) = 1, and thus one can prevent hash collision. §.§ Application 3: Reasoning about Concurrent Programs Describing Sources of Deadlock The  benchmark encodes the dining-philosophers problem where N “philosophers” are sitting around a table, and between each pair of philosophers is a single fork (and thus, N total). Each philosopher alternatively thinks and eats. To eat, a philosopher needs two forks, both the one on the left and the one on the right. When finishing eating and back to thinking, they will put down both forks. We say there is a deadlock when all philosophers want to eat but cannot get both forks because every philosopher is holding a single fork. In this example, the so-called circular wait is a necessary condition for deadlock, in which there exists a circular chain of threads such that each thread holds resources that are being requested by the next thread in the chain. Circular wait can be prevented by adjusting the order in which each thread requests resources. In terms of , each philosopher could either take the left fork first or the right fork first. We show can be used to understand what execution orders affect whether a deadlock happens. To do so, we model this problem as query ∃h. dl = schedule(o_1, ⋯, o_N, h), where o_i∈{L,R} indicates which fork the philosopher i always takes first. We then supply the following two DSLs (the one rooted at nonterminal B_ is for , whereas the one rooted at B_ is for ): [ B_ := G ⇒ D B_ := G D; G := ⊤|||⋯|; := O = L | O = R; O := o_1|⋯|o_N; R := dl|dl ] For the case involving three threads/philosophers (N=3), synthesizes the following , which informally state that deadlock can be prevented by having two of the threads disagree on their fork choice: [ (o_0 = L o_2 = R) ⇒dl (o_2 = L o_1 = R) ⇒dl (o_1 = L o_0 = R) ⇒dl ] For the same N, also synthesizes the following , which exactly characterize the two cases in which a deadlock can happen (first two properties) and also capture that there exists an execution that does not lead to a deadlock (last property). [ o_0 = L o_1 = L o_2 = L dl o_0 = R o_1 = R o_2 = R dl dl ] The 4 benchmarks focus on how the amount of resources affects the deadlock. Consider a simple resource allocator that contains M types of resources R_1,⋯ R_M, and initially has n_i units of resource R_i. The allocator receives T threads, each containing a list of resources the thread needs and in what order, and at each step, it needs to decide which next resource of each thread should be allocated. Once all the resources in the list are allocated, the thread completes its job and releases them altogether. However, if a request cannot be fulfilled due to the lack of resources of that type, the thread waits. 
We say the allocator is in a deadlock when multiple threads are waiting and no progress can be made. We show how tells us what resource amounts never lead to a deadlock and what resource amounts possibly cause a deadlock. The benchmark 2 is a case where T = 2 and M=2, and where thread-1 requests resources [R_1, R_2, R_1, R_2] and thread-2 requests resources [R_2, R_1, R_2, R_1]. In , a completely nondeterministic allocator that could allocate any resource to any waiting thread could be modeled as the query ∃h. dl = schedule(n_1, n_2, h), where dl is a Boolean variable that indicates whether deadlock happens, and the h is an array used to model the sequence of nondeterministic choices during scheduling. <Ref> shows the actual program used to define the semantics of the scheduler. We supply with a similar DSL to <Ref>, with replace the production rule for AP by AP := N {<|≤|=} N, where N can derive every n_i and integer constants. synthesizes the in <Ref> that tells us about resource amounts for which a deadlock must or must not happen. For example, the third states that “if there are more than 2 units of R_1 and more than 3 units of R_2, no scheduling order can lead to a deadlock.” [ n_1≤ 1 ⇒dl n_2≤ 1 ⇒dl (n_1≥ 2 n_2≥ 3) ⇒dl (n_1≥ 3 n_2≥ 2) ⇒dl ] For the same problem, synthesizes the in <ref> that tells us about resource amounts for which a deadlock may or may not happen. For example, the third states that “if both types of resources are available in a quantity no more than 3, there exists a scheduling order that leads to a deadlock.” [ n_1≤ 2 dl n_2≤ 2 dl n_1≤ 3 n_2≤ 3 dl n_1≥ 2 n_2≥ 2 dl ] All benchmarks are instances of the problem above, of which (T, M, length of the request list) are (2,2,2), (2,2,4), (3,2,4), and (2, 3, 8). Even for the hardest instance of (2, 3, 8), synthesized the best within 160 seconds (from 4.37 · 10^12 properties), and the best within 60 seconds (from 2.75 · 10^11 properties) Preventing Race Conditions The 3 benchmarks are about describing possible race conditions in concurrent programs. In each of them, there are 2 threads that access and modify a shared variable using the methods and . For example, <Ref> shows the code of 2 threads in benchmark 2. r0.37 2 [ tabsize=3, basicstyle= , keywordstyle=, commentstyle=, xleftmargin=-0.5em, escapeinside=“, morekeywords = get, set, numbers = none ] // Thread 1 0: t <- get() 1: t <- t + 1 2: t <- get() 3: t <- t + 1 4: set(t) [ tabsize=3, basicstyle= , keywordstyle=, commentstyle=, xleftmargin=-0.5em, escapeinside=“, morekeywords = get, set, numbers = none ] // Thread 2 0: t <- get() 1: t <- t - 1 2: set(t) 3: t <- t - 1 4: set(t) Two threads in 2 When there is no possible context switching between the 2 threads in 2, if the initial value of the variable is 0, its final value should be -1 (which we call the expected result). However, context switching can cause different interleaving of the threads to produce values different than -1—i.e., there exists a data race. Such a data race is typically prevented by introducing critical sections, in which instructions must be executed atomically. We show how can be used to identify the minimum part of the code that should be made atomic for the code to be race free. To model the problem in , we introduce two variables 1 and 2, which will be used to capture what lines in each thread should be executed atomically. The predicate (i, l, r) holds true if the instructions from line l to line r of thread-i should be executed atomically. 
Now we can model whether a race happens as a query ∃h.  race = schedule(1, 2, h), where race captures whether a race condition can happen. We provide the following two DSLs (the one rooted at nonterminal B_ is for , whereas the one rooted at B_ is for ): [ B_ := G ⇒ D B_ := G D; G := ⊤|||⋯|; := (AC, I, I) |(AC, I, I); I := all line numbers; AC := 1|2; D := race|race ] The predicate (i, l, r) is implemented as a conjunction ⋀_k=l^r-1noSwitch(i, k), where the predicate noSwitch(i, k) is true if in thread-1, the instruction k+1 must be immediately executed after instruction k. This implementation makes it so that the predicate (i, l_1, r_1) implies (i, l_2, r_2) when l_1 ≤ l_2 and r_1 ≥ r_2. Since looks for a tightest properties, such an implementation lets reason about what is the smallest needed atomic execution. For 2, synthesizes the following , which informally states that setting line 2 to line 4 of thread-1 and line 0 to line 4 of thread-2 as critical sections can prevent data race: [ ((1, 2, 4) (2, 0, 4)) ⇒race ] We can observe that the effect of the first two instructions of thread-1 is overwritten by the third instruction, so it is unnecessary to include them in a critical section. For 2, to synthesizes the following , which states that setting line 2 to line 4 of thread-1 and whole thread-2 as critical sections is in fact necessary! [ (1, 2, 4) race (2, 0, 4) race race ] This example shows how over- and under-approximated reasoning can be cleverly combined in to understand necessary and sufficient interventions in preventing data races. §.§ Application 4: Solving two-player games Definition of safety games A safety game consists of a game graph G = ⟨ (S, E), (S_1, S_2)⟩ and a safety objective F. In the graph, S is a set of states partitioned into  states S_1 and  states S_2, E⊆ S × S is a set of edges in which each edge connects a state in S_1 and a state in S_2. The safety objective F⊆ S is a set of safe states. 's goal is to remain in safe states, while 's goal is to visit unsafe states at least once. We assume both players play the game according to a finite-state memoryless strategy that is independent of the action history and depends only on the current states. For safety games, there always exists a memoryless winning strategy. More benchmarks The 2 game is played with 2 heaps of pebbles with number n_1 and n_2. On each turn, a player must remove at least one pebble and may remove any number of pebbles if they all come from the same heap. The goal of the game is to be the player to remove the last pebble. One synthesized must strategy is <Ref>. ∀ i,j. (i < j ⇒[i, j].heap = 2) ∀ i,j. (i < j ⇒[i, j].num = j - i)  ∀ i,j. (i > j ⇒[i, j].heap = 1) ∀ i,j. (i > j ⇒[i, j].num = i - j) n_1 != n_2⇒w = T where heap denotes the heap from which pebbles are taken when the numbers of pebbles in two heaps are i and j, and num denotes the number of pebbles taken. <Ref> essentially states that one can win if the initial two heaps of stones are different, and always keep them the same after taking. r0.4 [ tabsize=3, basicstyle= , keywordstyle=, commentstyle=, xleftmargin=-2.5em, escapeinside=“, language = C, morekeywords = assert, numbers = none ] temp = 20.5; while(*) assert(20 <= temp <= 25); isOn = ??; if (isOn == 1) temp += 2; temp -= (temp - 19) / 10; The  game models a controller for a thermostat shown in the right. We consider strategies that set as 1 at the k-th of every n times. 
synthesizes 9 must strategies (and also 9 equivalent may strategies since there is no adversary in  game), e.g., 2 ≤.n ≤ 7 .k = 1 ⇒w, which states that the thermostat can keep the temperature in [20, 25] by increasing two degree in the first second of every n(2≤ n ≤ 7) seconds. The benchmarks consider games played over a one-dimensional grid—i.e., an integer. Each game is a 4-tuple (v, A_1, A_2, S), where v is an initial integer value, A_1 = {f_1, f_2,⋯, f_n} and A_2 = {g_1, g_2, ⋯, g_m} (such that f_i, g_i ∈ℤ→ℤ) are the actions set that  and  use to manipulate the integer (e.g., increments, decrements, etc.), and S ⊆ℤ is a the set of integers   wants to stay in to win the game. The problems 1=(v, {nop, -1}, {× 2, +1}, [0, 4]) and 2=({nop, -1}, {nop, +1}, [0, 4]) are two instances of the game above, where the initial integer value v is left unspecified. We want to use to understand the relationship between the value of v and winning strategies, and thus supply with the following DSL: [ B_must := G ⇒ R B_may := G R; G := ⊤|||⋯|; := (, S_1, A_1) |v = {0 | 1 |2 |3 |4}; S_1 := 0 | 1 | 2 | 3 | 4; A_1 := nop | -1; R := w = {T| F} ] Using the DSL rooted at B_must, for 1 synthesized the that consists of (v = 0 [0] = nop [2] = -1) ⇒w = T as well as other 9 in 15 seconds. The property states that if the initial value is 0, can remain in the range [0,5] by performing the action nop at state 0 and the action -1 at state 2. Note that the strategy does not need to be defined at any of the other infinitely many states. Using the DSL rooted at B_may, for 1 synthesized the that consists of [1] = nop [2] = nop w = F as well as other 16 in 10 seconds. The property states that by performing the action nop at states 0 and 2, there exists a strategy that makes lose the game. §.§ Application 5: Mining Under-Approximated Specifications Due to the equivalence of the and , we further compared the efficiency of computing over-approximations and under-approximations for these benchmarks to assess which formulas were easier to compute. Although the size of grammar for and are the same (since we only replace by at the top layer), synthesizing was on average 1.6x faster (geometric mean) than synthesizing for the 4 benchmarks on which both synthesis processes terminated. Furthermore, while (and also ) failed to synthesize for , , and within the given timeout, successfully synthesized in less than 5 minutes for each of these benchmarks. We compared the to to see how they differed. For the query o = (x_1, x_2, x_3), for example, the synthesized consist of 5 conjuncts [ x_2 < o x_1 < o x_2 = x_1 x_3 ≤ o o = x_2 o = x_1 x_1 < x_3; o = x_2 o = x_3 x_3 < x_1 x_2 < x_3 o = x_2 x_2 < x_1 ] while the synthesized consist of 3 disjuncts [ x_1 = o x_3 ≤ x_1 x_2 ≤ x_1 x_2 = o x_3 ≤ x_2 x_1 ≤ x_2 x_3 = o x_1 ≤ x_3 x_2 ≤ x_3 ] The in Eq. <ref> can be thought of as a declarative specification that  function much to satisfy, and in fact similar to the specifications provided in benchmarks. Instead, the in Eq. <ref> captures the paths of the program and the output they produce, which one would obtain via symbolic execution. To summarize, under-approximation computed semantically equivalent specifications for benchmarks, but it was faster than over-approximation. This improvement could be attributed to the fact that fewer could capture the semantics of a program when compared to —e.g. 3 vs 5 for . § SYNTHESIZING A BEST AND In this section we present detailed algorithms to synthesize a best ($<ref>) and ($<ref>). 
§.§ Synthesizing a best The algorithm iteratively synthesizes incomparable strongest . At each iteration, keeps track of the conjunction of synthesized strongest , along the set of positive examples that have been observed so far. Each iteration calls to try to synthesize a strongest for with respect to (line <ref>). A property returned by is checked whether it rejects some example that was not rejected by (lines <ref> and <ref>). If does not reject any example that was not already rejected by , the formula is a best , and thus returns the set of synthesized (line <ref>). If rejects some example that was not rejected by , needs to further strengthen to a strongest for with respect to examples that might already be rejected by . Without this step the returned may be imprecise for examples that were not considered by because they were outside of . To achieve this further strengthening, makes another call to with the example sets and returned by the previous call to together with and :=, but with := (line <ref>). §.§ Synthesizing a best Because solves the dual problem of the one solved by , the two algorithms share the same structure. Due to the duality the roles of positive and negative examples are inverted; is replaced by ; and at each iteration a weakest is synthesized by instead of . These changes are highlighted in red in Algorithm <ref>.
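The following Python sketch gives a schematic reconstruction of this loop; it is not the tool's actual implementation, and it assumes an oracle synthesize_strongest(query, context, pos, neg) that returns a strongest DSL property for the query relative to a context conjunction, together with the positive and negative examples gathered while searching.

```python
def synthesize_best_conjunction(query, synthesize_strongest):
    """Schematic reconstruction of the iterative loop described above (not the tool's code).

    synthesize_strongest(query, context, pos, neg) is assumed to return a strongest DSL
    property P for `query` relative to the conjunction `context`, plus the positive and
    negative example sets accumulated while searching for it.
    """
    conjuncts = []                 # incomparable strongest properties synthesized so far
    pos, neg = set(), set()        # examples observed so far
    while True:
        P, pos, neg = synthesize_strongest(query, conjuncts, pos, neg)
        rejected_before = {e for e in pos | neg if any(not C(e) for C in conjuncts)}
        rejected_now = {e for e in pos | neg if not P(e)}
        if not (rejected_now - rejected_before):
            # P rejects nothing new: the current conjunction is already a best one
            return conjuncts
        # P is informative; it is strengthened by a second oracle call (the exact
        # arguments of that call are elided in the text above) before being recorded.
        P, pos, neg = synthesize_strongest(query, [], pos, neg)
        conjuncts.append(P)
```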
http://arxiv.org/abs/2408.12261v1
20240822095829
Core-Shell Nanoparticle Resonances in Near-Field Microscopy Revealed by Fourier-demodulated Full-wave Simulations
[ "Dinghe Dai", "Richard Ciesielski", "Arne Hoehl", "Bernd Kaestner", "Dario Siebenkotten" ]
physics.optics
[ "physics.optics" ]
§ ABSTRACT We present a detailed investigation of the near-field optical response of core-shell nanoparticles using Fourier-demodulated full-wave simulations, revealing significant modifications to established contrast mechanisms in scattering-type scanning near-field optical microscopy (s-SNOM). Our work examines the complex interplay of geometrical and optical resonances within core-shell structures. Using a finite element method (FEM) simulation closely aligned with the actual s-SNOM measurement processes, we capture the specific near-field responses in these nanostructures. Our findings show that core-shell nanoparticles exhibit unexpected distinct resonance shifts and massively enhanced scattering driven by both core and shell properties. This investigation not only advances the understanding of near-field interactions in complex nanosystems but also provides a refined theoretical framework to accurately predict the optical signatures of nanostructures with internal heterogeneity. [lines=1,lhang=0.1]Engineered nanoparticles are garnering increasing interest in the fields of medicine <cit.>, biosensing <cit.>, catalysis <cit.>, energy storage <cit.>, and opto-electronics <cit.> due to their unique properties that arise from their small size and large surface area. The usefulness and functionality of engineered nanoparticles are primarily influenced by the chemical characteristics of their surfaces. For instance, in medical diagnosis and treatment, core-shell nanoparticles are of special interest as the surface of simple nanoparticles (the core) can be functionalized (the shell) to bind to drugs and deliver them in a targeted manner <cit.>. Therefore, accurately quantifying both the functionalization and geometry of nanoparticles is crucial for ensuring their optimal performance. To achieve this, techniques such as X-ray Photoelectron Spectroscopy (XPS) <cit.> and Quantitative Nuclear Magnetic Resonance (qNMR) <cit.> are currently being advanced to meet stringent metrological standards. Optical methods have also been considered as they are non-destructive and chemically specific <cit.>. However, their resolution is typically limited by diffraction to about half the wavelength of the light used, restricting their ability to characterize individual nanoparticles. On the other hand, scattering-type scanning near-field optical microscopy (s-SNOM) provides sub-diffraction spatial resolution and is not limited by the wavelength used. Moreover, this technique simultaneously captures the particle's geometry, allowing the study of correlations, e.g., between particle size and functional group concentration. In principle, s-SNOM <cit.> promises access to the degree of surface functionalization through the use of tightly confined optical near-fields <cit.>, in particular in the mid-infrared spectral range. The confinement in s-SNOM is achieved by focusing electromagnetic radiation onto the metalized probe-tip of an atomic force microscope (AFM) positioned near the sample, which is sketched in Fig. <ref>. The light is reflected back to the probe in dependence of the sample's geometrical and optical properties, and the scattering from the probe-tip is measured. The sensitivity of s-SNOM on the optical properties of nanoparticles has been shown <cit.> down to nanoparticles of a few nanometers in diameter <cit.>, including in a spectroscopic manner <cit.>. However, comparatively little work has been published on core-shell nanoparticles in s-SNOM to date <cit.>. 
This is partly due to the complexity of quantitative descriptions of the corresponding intricate interplay between geometrical and optical factors. Only a few approaches for the modelling of the s-SNOM response of nanoparticles <cit.> and core-shell nanoparticles <cit.> have been published, all approximating the tip with a comparatively small conducting spheroid <cit.>. An alternative to the (semi-)analytical s-SNOM modelling approaches - which require at least partial, highly challenging redevelopment when adapted to new geometries, and make approximations about the tip shape - is the use of numerical simulations <cit.>. To suppress the far-field background dominating the scattered near-field signals, experiments use periodic tip oscillations and evaluate the higher harmonics of the scattered fields <cit.>. This approach needs to be mimicked in the simulations. Mooshammer et al. <cit.> suggested the application of this procedure to each point of the simulated field, from which insight into the origin of the scattering behaviour of the sample structure, such as a core-shell nanoparticle, can be gained. Extending this approach, we develop a finite element method (FEM) simulation procedure for cylindrically symmetric samples that is closely orientated on the real s-SNOM measurement process, verify it on nanospectroscopy data and apply it to explore the rich interplay of material and geometrical resonances of core-shell nanoparticles on a substrate with varied material and geometrical properties. One approach that has been widely used for fast simulations is the employment of simple tip geometries, such as spheres or ellipsoids, in conjunction with non-planar tip geometries <cit.>. Conversely, other approaches simulate the entirety of a realistic tip using either the Finite Element Method (FEM) on planar <cit.> or single-step samples <cit.> or the simpler, but computationally more efficient, Method of Moments <cit.> on planar samples. We choose FEM as the method of choice as it is known for its accurate determination of the electric field even inside complex structures <cit.> and combine it with a realistic tip setup and core-shell nanoparticles on top of a substrate. The scattered field phasor to be determined can be expressed as the sum of the demodulation orders of background- and near-fields at frequencies nΩ E_sca = ∑_n Ẽ_nexp(i nΩ t) = ∑_n (Ẽ_nf, n + Ẽ_bg,n) exp(i nΩ t), with the tip oscillation frequency Ω and where Ẽ_nf, n dominates over Ẽ_bg,n at higher demodulation orders n, due to the rapid decay of the near-fields <cit.>. By using interferometric detection schemes such as nanoFTIR <cit.> and pseudoheterodyne detection <cit.>, one obtains a field-sensitive detector signal, V_D = κ E_sca, with κ being a typically unknown proportionality constant. As κ is hard to determine experimentally, the measurement is usually related to a known reference material measured under the same conditions via S_n/S_n,refexp(i(ϕ_n - ϕ_n, ref)) = Ẽ_n/Ẽ_n, ref, with the demodulated detector voltage Ṽ_n = κẼ_n ≡ S_n exp(iϕ_n). The left-hand side represents the experimentally measured near-field contrast, while the right-hand side can be predicted by theoretical calculations. For these theoretical calcuations we employ the commercial FEM solver JCMsuite <cit.> to solve Maxwell's Equations in the frequency domain. We model the full tip as a 20-µm long cone with a rounded edge (radius = 1000 nm) and a rounded tip apex (radius = 25 nm). 
The cone features an opening angle of 30° and is comprised of a silicon core and a 70 nm thick gold layer coating. The system is illuminated by an infrared plane wave incident at a 30° angle to the sample surface. More simulation details are shown in Supplemental Information S1. The z-component of the electric field (along the long tip axis preferred by the polarization <cit.>) of an example of such a simulation (wavelength λ=10µm) is depicted in Fig. <ref> (amplitude) and <ref> (phase), normalized to E_inc. The tip-sample separation is set to 5nm and the substrate's optical properties to ε = 1.9 + 0.003i. Fig. <ref> shows a significant field enhancement between the tip and the sample, while the signals further away are the combined effects of far- and scattered near-fields. Next, we calculate the Fourier-demodulated field <cit.> for a sinusoidal tip-sample variation. The total demodulated field can be expressed as: 𝐄(x,z, t) = ∑_n Ẽ_n(x, z) exp(i n Ω t). To establish Eq. <ref> we additionally calculate the Fourier-demodulated field of a reference sample Ẽ_n, ref(x, z) with optical properties defined by its permittivity ε_ref. The demodulated z-component field maps, referenced to silicon (ε_ref=12), are shown in Fig. <ref> (amplitude) and Fig. <ref> (phase). They display a relatively homogeneous distribution, as the normalization accounts for the near-field decay. There, the scattered field defined by the near-field contrast can be measured. For robustness, we place five detection points (marked in black on the lower left in Figs. <ref> and <ref>) in the forward scattered direction and calculate the average and standard deviation of the extracted fields. The process of demodulation, normalization and use of detection points away from the tip apex aligns the simulations closely with real-world measurements. To demonstrate the methods predictive power for s-SNOM measurements, we compare our method to experimental data obtained from a strongly doped silicon sample, previously presented in <cit.>. For the s-SNOM measurements, we used a commercial setup (neaSNOM by attocube systems AG), with a gold-coated Si tip (PPP-NCSTAu by Nanosensors™) operated with synchrotron radiation in the infrared spectral region <cit.>. Figures <ref> (amplitude) and <ref> (phase) depict the measured 2^nd and 3^rd demodulation order spectra of the doped Si sample referenced to undoped silicon alongside the theoretical FEM spectra. The calculation assumed the known doping density of N=4 × 10^19 cm^-3 from which the permittivity has been derived, as described in the Supplemental Information S2. Due to the strong doping, the doped silicon acts like a free electron gas with its plasma frequency in the infrared, leading to increased scattering at low wavenumbers. The FEM calculations describe the data excellently, particularly near the plasma resonance at around 950 cm^-1. Only at higher wavenumbers some systematic offset can be observed, which is small compared to the noise level of the experiment. Note that all the parameters used for the modeling of the tip (length, apex radius, opening angle, and coating thickness) were taken from manufacturer specifications and electron microscopy measurements and were not fitted to the data. Comparisons to the finite dipole model for several classes of materials are shown in Supplemental Information S2. Next we employ the FEM approach to explore the s-SNOM scattering of single core-shell nanoparticles. 
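Before turning to the core-shell geometries, the demodulation step introduced above can be illustrated with a short Python sketch (our own toy example; the field model below is a stand-in for the FEM data, not the simulated fields of this work): the n-th harmonic is obtained by Fourier-projecting the fields computed over one period of the sinusoidal tip motion, and a height-independent background survives only in the n = 0 channel.

```python
import numpy as np

def demodulate(field_at_height, n, h_min=5e-9, amp=50e-9, samples=512):
    """n-th harmonic of the field for a sinusoidal tip motion h(t) = h_min + A(1 + cos(Omega t)).

    field_at_height(h) is assumed to return the complex field phasor for a static tip
    height h, e.g. interpolated from a sweep of FEM solutions.
    """
    phase = np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False)   # Omega*t over one period
    heights = h_min + amp * (1.0 + np.cos(phase))
    E = np.array([field_at_height(h) for h in heights])
    return np.mean(E * np.exp(-1j * n * phase))                      # Fourier coefficient at n*Omega

# Toy stand-in for the simulated field: a rapidly decaying near-field term plus a
# height-independent far-field background.
toy_field = lambda h: 1.0 / (h / 25e-9 + 1.0) ** 3 + 0.5

for n in range(4):
    print(n, abs(demodulate(toy_field, n)))   # the constant background survives only at n = 0
```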
For clarity we begin with a simpler system consisting of only the tip and a single 150nm diameter nanoparticle in air (Conf. A, cf. inset in Fig. <ref>). The calculated contrast to bulk gold substrate without nanoparticles at 9µm wavelength is shown in black in Fig. <ref> (amplitude) and Fig. <ref> (phase) with the real part of the nanoparticle's permittivity Re(ε) varied from -35 to 20 and with a constant imaginary part Im(ε)=1. Both, the amplitude and the phase, exhibit a single resonance at small negative Re(ε), similar to the case of an extended strong oscillator sample as derived in Ref. <cit.>. Our FEM calculation reproduces the well known result that metals should appear brighter than high-refractive index dielectrics while low-refractive index dielectrics appear dark. Materials with a small negative Re(ε) generate the brightest signal. Upon the introduction of a gold substrate (Conf. B, blue) a second peak appears at lower negative Re(ε), which is shown by the blue curve in Fig. <ref>. Furthermore, the contrast is enhanced, which is a known effect of metallic mirrors below nanoparticles <cit.>. The inclusion of a 50nm diameter gold core (Conf. C, red) into the nanoparticle leads to a drastic increase of the resonance scattering and a shift of the peak-position to lower Re(ε). When the diameter of the gold core is increased to 100nm (Conf. D, grey), that shift is exacerbated. Note that the total diameter of the nanoparticle is kept constant for all four configurations. The resonance position shift already shows that both material and geometric properties show non-trivial impact on the scattering. This then poses the question if the quenching between the two peaks is caused by the absence of resonant behaviour, or by an explicit antiresonance. To investigate its origin, Fig. <ref> shows the third demodulation order magnitude (|Re(𝐄_3(x,z)|) and direction (the angle to the x-axis α_3) of the electric vector field in the x-z-plane for Conf. A-C for the resonant and quenched case. For Conf. A, the resonance shows strong enhancement between the tip and the nanoparticle, with a spatially tightly confined rotation of the electric field direction α_3. α_3 further rotates over larger areas in lobes at the sides of the tip, which are demodulation effects already observed in simulations by Mooshammer et al. <cit.>. When introducing the gold substrate in Conf. B, a second region of strong field enhancement appears inside the nanoparticle at the substrate-facing side for the resonant case. α_3 also fully rotates once throughout that region. In the quenched case, such a region also appears, but flattened between the nanoparticle and substrate. Furthermore, within the nanoparticle the field rotates around a non-central point, with α_3 diametrically opposed at opposing sides. For Conf. C, the fields behave similarly to Conf. B, but with a centered field rotation. This indicates that the minimum constitutes an antiresonance, as strong field enhancement is present in the near-field, but the opposing directions of the electric field inhibit scattering, leading to the observed minimum at the detection position. These insights highlight the usefulness of the direct access to the electric fields for complex geometries that the FEM calculations provide. In conclusion, we analyzed the s-SNOM contrast of core-shell nanoparticles using Fourier-demodulated full-wave simulations. 
By employing a finite element method (FEM) simulation approach tailored to mimic the real s-SNOM measurement process, we explored the interplay of geometrical and optical resonances within these nanostructures. Our findings reveal that core-shell nanoparticles exhibit resonance shifts and significantly enhanced scattering effects driven by both the core and shell properties, different from the behavior observed in simpler nanoparticle configurations. These results highlight the potential of s-SNOM in the investigation of individual nanoparticles particularly in applications involving functionalized core-shell structures with concurrent correlation to the nanoparticle size. While our model as presented is restricted to cylindrically symmetric samples, it can be readily generalized to arbitrary geometries, albeit at increased computational demands. This opens the model to more broad applications, such as the analysis of resonance behaviour of complex nanostructures, making it a useful tool for nanophotonics developments. This approach also holds potential for exploring thermal effects in nanostructures, particularly in the context of photocurrent-induced nanoscopy. This work was supported in part by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through Project-ID452301518 “Investigation of quench switching of antiferromagnets with high spatial and temporal resolution” and Project-ID529998081 “Ultrasensing in the nearfield: polariton enhanced molecular fingerprinting”. SEM Measurements on the AFM tips were performed by Patryk Krzysteczko, which are gratefully acknowledged. The supporting information contains additional details of the FEM simulation and a comparison to the finite dipole model (PDF).
http://arxiv.org/abs/2408.11190v1
20240820204950
Light quark loops in $K^\pm \to π^\pm ν\barν$ from vector meson dominance and update on the Kaon Unitarity Triangle
[ "E. Lunghi", "A. Soni" ]
hep-ph
[ "hep-ph", "hep-ex" ]
http://arxiv.org/abs/2408.11254v1
20240821002041
Mapping Chaos: Bifurcation Patterns and Shrimp Structures in the Ikeda Map
[ "Diego F. M. Oliveira" ]
nlin.CD
[ "nlin.CD" ]
AIP/123-QED Mapping Chaos: Bifurcation Patterns and Shrimp Structures in the Ikeda Map]Mapping Chaos: Bifurcation Patterns and Shrimp Structures in the Ikeda Map diegofregolente@gmail.com School of Electrical Engineering and Computer Science, College of Engineering & Mines - University of North Dakota, Grand Forks, North Dakota, USA § ABSTRACT This study examines the dynamical properties of the Ikeda map, with a focus on bifurcations and chaotic behavior. We investigate how variations in dissipation parameters influence the system, uncovering shrimp-shaped structures that represent intricate transitions between regular and chaotic dynamics. Key findings include the analysis of period-doubling bifurcations and the onset of chaos. We utilize Lyapunov exponents to distinguish between stable and chaotic regions. These insights contribute to a deeper understanding of nonlinear and chaotic dynamics in optical systems. [ Diego F. M. Oliveira August 26, 2024 ======================== The study of dynamical systems is crucial for understanding complex behaviors in various natural and engineered processes. This paper investigates the properties of the Ikeda map, a well-known model in chaos theory that describes the behavior of light in a nonlinear optical cavity. Despite its simplicity, the Ikeda map exhibits a rich variety of dynamical behaviors, including fixed points, periodic orbits, and chaotic attractors. This research focuses on the impact of dissipation parameters on the map's dynamics, demonstrating the existence of self-similar shrimp-shaped structures within the parameter space. These structures delineate regions of stability and chaos, characterized by transitions from regular to chaotic dynamics via a period-doubling bifurcation cascade. The Lyapunov exponent is used as the primary tool to classify regions in the parameter space as either regular or chaotic, revealing the intricate interplay between order and chaos. Through numerical analysis, we also estimate Feigenbaum's constant, further validating the observed bifurcation patterns. Our findings contribute to a deeper understanding of the complex parameter space of the Ikeda map, highlighting its significance in the broader context of nonlinear dynamical systems. § INTRODUCTION Dynamical systems are crucial for understanding the complex behaviors that arise in various natural and engineered processes. Among the most extensively studied dynamical systems are the Lorenz system <cit.>, the Hénon map <cit.>, the logistic map <cit.>, and the Duffing oscillator <cit.>. These nonlinear systems, along with many others, display a rich spectrum of behaviors ranging from regular and predictable to chaotic and unpredictable dynamics. The phase space of such systems can generally be divided into three regions: regular <cit.>, chaotic <cit.>, and mixed regions <cit.>. Regular regions are characterized by periodic or quasi-periodic trajectories that repeat over time, leading to stable and predictable behavior. Chaotic regions, in contrast, exhibit sensitive dependence on initial conditions, where even minute differences in starting points result in vastly divergent trajectories, leading to an unpredictable and complex phase space. Mixed regions are particularly intriguing as they encompass both regular and chaotic dynamics, with stable islands of regularity embedded within chaotic seas, making the overall behavior of the system highly intricate. In this paper, we explore the properties of a dynamics of the Ikeda map <cit.>. 
Introduced by Kensuke Ikeda in the late 1970s, the Ikeda map models the behavior of light in a nonlinear optical cavity. It has since become a quintessential example in the study of chaos and complex systems. The Ikeda map is defined by a set of iterative equations that describe the evolution of a point in the complex plane. Despite its relatively simple form, the Ikeda map exhibits a rich variety of dynamical behaviors, including fixed points, periodic orbits, and chaotic attractors. The map is particularly notable for its sensitivity to parameter changes, which can lead to sudden transitions between regular and chaotic dynamics as we will show below. Additionally, as we will demonstrate, increasing the dissipation parameter leads to a period-doubling bifurcation cascade <cit.>, where Feigenbaum's constant, δ <cit.>, which quantifies the rate of these bifurcations, can be calculated numerically. We then examine the two-dimensional parameter space, specifically focusing on the dissipation parameters associated with the real and imaginary components of the map, and the model reveals the presence of self-similar structures known as “shrimps". Initial investigations by Gaspard et al. <cit.> in 1984, Rössler et al. <cit.> in 1989, and Komuro et al. <cit.> in 1991 laid the groundwork for understanding self-similar periodic structures in two-dimensional mappings. Gaspard and colleagues focused on Chua's system, revealing complex bifurcation patterns, while Rössler's study on the logistic map and Komuro's exploration of the Double Scroll circuit further illustrated the ubiquity of these structures in nonlinear dynamical systems. Gallas's groundbreaking work in 1993 <cit.> represented a pivotal moment in the study of dynamical systems. Through a detailed exploration of the parameter space of the Hénon map, Gallas identified the presence of complex shrimp-shaped domains—regions characterized by periodic behavior amidst chaotic dynamics. This discovery not only highlighted the significance of these structures but also spurred a wave of subsequent research focused on uncovering similar patterns in other models. As noted by <cit.>, "Shrimps are formed by a regular set of adjacent windows centered around the main pair of intersecting superstable parabolic arcs. A shrimp is a doubly infinite mosaic of stability domains, comprising an innermost main domain and all adjacent stability domains arising from two period-doubling cascades and their corresponding chaos regions. It is important to distinguish these shrimp from their innermost main domain of periodicity." Since Gallas's seminal work, shrimp-shaped domains have been recognized in a wide array of theoretical models, including but not limited to those examined in studies by Gallas himself <cit.>, as well as in investigations by Hunt et al. <cit.>, Bonatto et al. <cit.>, and Oliveira et al. <cit.>. Furthermore, in addition to theoretical explorations, E. N. Lorenz <cit.>, one of the foremost pioneers of chaos theory, extended the investigation of shrimp-shaped domains into new territories, focusing on their appearance in complex systems. Lorenz's work highlighted the richness and complexity of these structures, solidifying their importance in the study of chaotic dynamics. These theoretical studies have contributed to a deeper understanding of the conditions under which these domains emerge and their implications for the dynamics of the systems in which they appear. 
More recently, the existence of shrimp-shaped domains has been experimentally verified in physical systems, most notably in electronic circuits <cit.>. These experimental observations have bridged the gap between theory and practice, demonstrating that these intricate structures are not merely mathematical curiosities but are also observable in real-world systems. To classify regions in the parameter space as either regular or chaotic, we primarily use the Lyapunov exponent. Although other quantities, such as particle velocity, can also be employed—like in the well-known Fermi-Ulam model <cit.>—they may not effectively distinguish between regular and chaotic behavior. In this study, we focus on the Lyapunov exponent. Our approach involves starting with a fixed initial condition, allowing for an extended transient period, and then computing the Lyapunov exponent. For each parameter combination of the dissipation parameters, we assign a color based on the computed Lyapunov exponent. We then increment the parameters, using the final values of the dynamical variables before the increment as the new initial condition, ensuring that we remain within the basin of the same attractor. This methodology reveals well-organized, self-similar shrimp-shaped structures embedded within a broader region of chaotic attractors. § THE MODEL AND THE MAP The Ikeda map, originally introduced to model the dynamics of a laser system, is described by the following 1D equation involving a complex variable: z_n+1 = A + B z_n e^i ( θ - ϕ/|z_n|^2 + 1), where the parameters θ and ϕ typically represent phase angles that control the behavior of the system, z_n = x_n + i y_n is a complex number with real part x_n and imaginary part y_n, and |z_n|^2 = x_n^2 + y_n^2 is the squared magnitude of z_n. In order to obtain the two-dimensional map that describes the dynamics of the system, first, one computes the exponential term in Eq. <ref>. Thus, by using Euler's formula, we have: e^i ( θ - ϕ/|z_n|^2 + 1) = cos(θ - ϕ/|z_n|^2 + 1) + i sin(θ - ϕ/|z_n|^2 + 1). Substitute z_n = x_n + i y_n and the exponential term into the map: z_n+1 = A + B (x_n + i y_n) [cos(θ - ϕ/x_n^2 + y_n^2 + 1) + i sin(θ - ϕ/x_n^2 + y_n^2 + 1)] e^C. Expanding and separating the real and imaginary parts, we obtain: z_n+1 = A + B e^C {[ x_n cos(θ - ϕ/x_n^2 + y_n^2 + 1) - y_n sin(θ - ϕ/x_n^2 + y_n^2 + 1)] + i [x_n sin(θ - ϕ/x_n^2 + y_n^2 + 1) + y_n cos(θ - ϕ/x_n^2 + y_n^2 + 1) ]}. Assuming that A = 1 and B e^C = u_i, with i=x for the real component or i=y for the imaginary component of the mapping, and defining t_n = θ - ϕ/x_n^2 + y_n^2 + 1, the map can be simplified as: z_n+1 = 1 + u_i [ (x_n cos t_n - y_n sin t_n) + i (x_n sin t_n + y_n cos t_n) ]. Finally, by separating the real and imaginary parts of z_n+1, we obtain the discrete two-dimensional non-linear mapping that describes the system's dynamics: S:{[ x_n+1 = Re(z_n+1) = 1 + u_x (x_n cos t_n - y_n sin t_n),; y_n+1 = Im(z_n+1) = u_y (x_n sin t_n + y_n cos t_n), ] , . where t_n = θ - ϕ/x_n^2 + y_n^2 + 1 and u_x ∈ [0,1] and u_y ∈ [0,1] are the dissipation factors for the real and imaginary components of the map. From now on, we will fix θ=0.4 and ϕ=0.6. These factors modulate the transformation of the real and imaginary parts of the complex variable through each iteration. This transformation effectively maps the original complex map into a system of coupled real-valued equations, which can be analyzed to study the dynamical behavior of the system. 
It is important to mention that if u_x=u_y=1 all the results for the Hamiltonian area-preserving map are recovered. From the map S [see Eqs. (<ref>)], one can easily obtain the Jacobian matrix, J, which is defined as J = [ ∂ x_n+1/∂ x_n ∂ x_n+1/∂ y_n; ∂ y_n+1/∂ x_n ∂ y_n+1/∂ y_n ] with coefficients given by the following expressions ∂ x_n+1/∂ x_n = u_x [ cos t_n - (x_n sin t_n + y_n cos t_n) ∂ t_n/∂ x_n] ∂ x_n+1/∂ y_n = u_x [ -sin t_n - (x_n sin t_n + y_n cos t_n) ∂ t_n/∂ y_n] ∂ y_n+1/∂ x_n = u_y [ sin t_n + (x_n cos t_n - y_n sin t_n)∂ t_n/∂ x_n] ∂ y_n+1/∂ y_n = u_y [ cos t_n + (x_n cos t_n - y_n sin t_n)∂ t_n/∂ y_n] where ∂ t_n/∂ x_n and ∂ t_n/∂ y_n are given by ∂ t_n/∂ x_n = 2ϕ x_n/(1 + x_n^2 + y_n^2)^2, ∂ t_n/∂ y_n = 2ϕ y_n/(1 + x_n^2 + y_n^2)^2 After some calculation one can show that the map is area preserving only when u_x=u_y=1 since the determinant of the Jacobian matrix is (J)=u_xu_y. As an ilustration, Figure <ref> illustrates the structure of the phase space for the map <ref> with varying values of u_x and u_y. Considering u_x=u_y=1 [ Fig. <ref>(a)], we observe a chaotic sea interspersed with a set of Kolmogorov-Arnold-Moser (KAM) islands. In this scenario, the phase space exhibits regions of chaotic behavior, where trajectories appear to move unpredictably and ergodically. Amidst this chaotic sea, KAM islands emerge as stable, quasi-periodic regions where the motion is regular and confined. These islands represent the remnants of the invariant tori that survive the perturbation introduced by the system's non-linearity. The coexistence of chaotic regions and KAM islands highlights the complex and rich structure of the phase space, illustrating the intricate interplay between order and chaos in the dynamical system. As dissipation is introduced, the phase space structure undergoes significant changes. The system exhibits a complex interplay between chaotic and periodic dynamics, as demonstrated in Fig. <ref>(b), where both chaotic and periodic orbits coexist. Specifically, Fig. <ref>(b) illustrates an attractive fixed point (indicated by a ×) alongside a chaotic attractor, each with its own basin of attraction, as further detailed in Fig. <ref>. As the system parameters vary, the dynamics undergo significant changes. For instance, in Fig. <ref>(c), a chaotic attractor emerges, signaling a transition from periodic to chaotic behavior. However, with further decreases in the dissipation parameters, the chaotic attractor is eventually replaced by a set of attracting fixed points, as shown in Fig. <ref>(d). This progression underscores the system's intricate and diverse behavior as it navigates different regions of parameter space. Figure <ref> presents the basins of attraction for both the attracting fixed point (orange) and the chaotic attractor (blue) depicted in Fig. <ref>(b). To generate this figure, the ranges u_x ∈ [-11, 30] and u_y ∈ [-25, 25] were partitioned into grids of 2000 intervals each, resulting in a total of 4 × 10^6 distinct initial conditions. Each initial condition was iterated up to n = 5 × 10^5. While other attractors may exist, their basins of attraction, if present, are either too small to detect or lie outside the scope of the initial conditions considered in this study. Furthermore, as we have shown, by slightly changing the control parameter, the behavior of the initial conditions can shift from chaotic to periodic. In Fig. 
<ref> (a-b), we see that for lower values of u_x, the system tends to exhibit periodic behavior, characterized by regular oscillations of x and y. As u_x increases, these periodic regions become interspersed with chaotic intervals, where the trajectories of x and y become irregular and sensitive to initial conditions. The transition between these behaviors is marked by bifurcations, indicating changes in the system's stability and the emergence of new dynamical regimes. We can observe the transition from regular to chaotic behavior by examining the bifurcation diagram. In this analysis, we consider the case where u_y = 0.6. To explore typical behaviors, specifically the bifurcation diagrams as the control parameter u_x varies, we use the initial conditions x_0 = 0.2 and y_0 = 0.3. Figure <ref>(a) shows the behavior of x plotted against the control parameter u_x, where a sequence of period-doubling bifurcations is evident. A similar sequence is observed for the asymptotic variable y, as shown in Figure <ref>(b). It is important to note that the bifurcations of the same period in both (a) and (b) occur for the same values of the control parameter u_x. Feigenbaum <cit.> was the first to observe a "universal" feature in the behavior of bifurcations in dynamical systems. Specifically, he noticed that as a system transitions to chaos through a series of period-doubling bifurcations, the ratios of the differences in control parameter values at which these bifurcations occur converge geometrically at a constant rate, denoted as δ. This discovery indicates that there is a universal behavior in a wide range of systems approaching chaos. The procedure to determine the Feigenbaum constant δ is methodical: (a) Identify the bifurcation points: Let u_x(1) be the control parameter value at which a period-1 orbit (a single stable cycle) bifurcates into a period-2 orbit (a stable cycle that repeats every two periods); (b) Continue the process: Let u_x(2) be the value where the period-2 orbit bifurcates into a period-4 orbit, and u_x(3) where the period-4 orbit bifurcates into a period-8 orbit, and so on; (c) Generalize the parameter values: In general, the parameter u_x(n) corresponds to the control parameter value at which a period-2^n orbit is born. The Feigenbaum constant δ is then defined as the limit of the ratio of successive differences between these control parameter values as n approaches infinity. Mathematically, it is expressed as: δ = lim_n →∞ [u_x(n) - u_x(n-1)]/[u_x(n+1) - u_x(n)]. This constant δ captures the rate at which the bifurcations occur and is found to be the same for a wide variety of dynamical systems, highlighting the universality of this behavior. The theoretical value of the Feigenbaum constant δ is approximately 4.669201609…. This value has been confirmed through both numerical calculations and experimental observations, and it plays a crucial role in the understanding of the transition to chaos in nonlinear dynamical systems. How can we determine the parameter where bifurcation occurs? One effective tool is the Lyapunov exponent. As discussed by Eckmann and Ruelle <cit.>, the Lyapunov exponents are defined as: λ_j = lim_n→∞ (1/n) ln|Λ_j|  ,  j=1, 2  , where Λ_j are the eigenvalues of M=∏_i=1^nJ_i(x_i,y_i) and J_i is the Jacobian matrix evaluated over the orbit (x_i,y_i). However, a direct implementation of a computational algorithm to evaluate Eq. (<ref>) has a severe limitation in obtaining M. 
For the limit of short n, the components of M can assume different orders of magnitude for chaotic orbits and periodic attractors, making the implementation of the algorithm impracticable. To avoid such a problem, J can be written as J=Θ T where Θ is an orthogonal matrix and T is an upper triangular matrix. M is rewritten as M=J_nJ_n-1… J_2Θ_1Θ_1^-1J_1, where T_1=Θ_1^-1J_1. A product of J_2Θ_1 defines a new J_2^'. In the next step, one can show that M=J_nJ_n-1… J_3Θ_2Θ_2^-1J_2^'T_1. The same procedure can be used to obtain T_2=Θ_2^-1J_2^' and so on. Using this procedure, the problem is reduced to evaluating the diagonal elements of T_i: T_11^i, T_22^i. Finally, the Lyapunov exponents are given by λ_j = lim_n→∞ (1/n) ∑_i=1^n ln|T_jj^i|  ,  j=1,2  . If at least one of the λ_j is positive, the orbit is said to be chaotic. Figure (<ref>)(c) shows the behavior of the Lyapunov exponents corresponding to Fig. <ref>(a-b). It is evident that when bifurcations occur, the exponent λ_j vanishes. Based on the numerical data obtained through the calculation of Lyapunov exponents, Feigenbaum's δ for the Ikeda map is found to be δ = 4.669248396257327…. Here, we have considered bifurcations up to the tenth order (see Table I), and our result is in good agreement with Feigenbaum's δ up to 10^-4. Furthermore, the detailed examination of these bifurcation diagrams reveals the underlying mechanisms driving the system towards chaos. Overall, these bifurcation diagrams serve as a crucial tool for visualizing and understanding the complex dynamics of the system as it responds to changes in the control parameter u_x. The clear sequence of period-doubling bifurcations leading to chaos underscores the rich and intricate nature of the system's behavior under the given conditions. To obtain a more comprehensive understanding of the model's dynamics, we investigate the parameter space where both components of the dissipation parameters, u_x and u_y, vary. By systematically varying u_x and u_y, we can map out the regions of the parameter space that correspond to different dynamical behaviors. This includes identifying zones of periodic motion, chaotic behavior, and the transitions between them. Such a parametric study provides insights into the robustness of the observed phenomena and helps to uncover the underlying structure of the phase space. The detailed exploration of the parameter space also reveals how the interplay between the dissipation parameters u_x and u_y influences the overall dynamics. For example, increasing u_x might enhance the stability of periodic orbits, while varying u_y could affect the onset of chaos. By understanding these relationships, we can better predict the system's response to changes in the control parameters and potentially develop strategies to control or exploit the dynamics for specific applications. To thoroughly investigate the parameter space of the system described in equation <ref>, we systematically varied the dissipation parameters u_x and u_y. For each combination of these parameters, after discarding a significant transient period, we calculated the Lyapunov exponent, which is a crucial measure of the system's sensitivity to initial conditions. The value of the Lyapunov exponent determines whether the system exhibits chaotic or regular behavior. Once the Lyapunov exponent was computed for each pair (u_x, u_y), we assigned a corresponding color to visually represent the stability or chaos of the system within the parameter space. 
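As an illustration of the procedure just described, the following Python sketch (a minimal reimplementation for this discussion, not the authors' code; the two parameter pairs at the end are arbitrary examples) iterates the map S with θ = 0.4 and ϕ = 0.6 and estimates both Lyapunov exponents through the triangularization scheme above.

```python
import numpy as np

THETA, PHI = 0.4, 0.6

def step(x, y, ux, uy):
    t = THETA - PHI / (1.0 + x * x + y * y)
    c, s = np.cos(t), np.sin(t)
    return 1.0 + ux * (x * c - y * s), uy * (x * s + y * c)

def jacobian(x, y, ux, uy):
    r2 = 1.0 + x * x + y * y
    t = THETA - PHI / r2
    dtdx, dtdy = 2.0 * PHI * x / r2 ** 2, 2.0 * PHI * y / r2 ** 2
    c, s = np.cos(t), np.sin(t)
    a, b = x * c - y * s, x * s + y * c
    return np.array([[ux * (c - b * dtdx), ux * (-s - b * dtdy)],
                     [uy * (s + a * dtdx), uy * (c + a * dtdy)]])

def lyapunov(ux, uy, x0=0.2, y0=0.3, transient=10_000, n=100_000):
    x, y = x0, y0
    for _ in range(transient):            # discard the transient
        x, y = step(x, y, ux, uy)
    Q = np.eye(2)
    acc = np.zeros(2)
    for _ in range(n):                    # QR re-orthonormalization of the Jacobian product
        Q, R = np.linalg.qr(jacobian(x, y, ux, uy) @ Q)
        acc += np.log(np.abs(np.diag(R)))
        x, y = step(x, y, ux, uy)
    return acc / n                        # estimates of (lambda_1, lambda_2)

# A positive largest exponent classifies the pair (u_x, u_y) as chaotic, negative as regular.
print("u = (0.9, 0.9):", lyapunov(0.9, 0.9))
print("u = (0.3, 0.3):", lyapunov(0.3, 0.3))
```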
Specifically, Figure <ref> provides a detailed view of the parameter space for the Ikeda map, where a shrimp-shaped structure becomes evident. The color scale used in this figure is carefully designed to distinguish between different types of behavior: regions exhibiting regular, stable dynamics are colored in shades of red to yellow, while regions displaying chaotic behavior are colored in shades of green to blue. This shrimp-shaped structure is a well-known feature in the study of dynamical systems and indicates the presence of complex bifurcation patterns. Our findings in this parameter space are in agreement with previous studies, particularly with the results presented in Refs. <cit.>. To construct the figure, we divided the range of both u_x and u_y into 2000 equal intervals, resulting in a grid of 4 × 10^6 distinct parameter combinations. For each combination, we began with initial conditions x_0 = 0.2 and y_0 = 0.3, and then iteratively followed the attractor as the parameters u_x and u_y were incrementally adjusted. After each increment, the final state of the previous combination served as the new initial condition, ensuring continuity in the exploration of the attractor's evolution. However, it is important to note that this approach can inadvertently omit information about other potential attractors. This omission occurs because the chosen initial conditions might lie within the basin of attraction of a specific attractor, thereby excluding the exploration of other possible attractors that might exist for different initial conditions. Figure <ref>(b) offers a magnified view of the main structure in Figure <ref>(a), revealing the intricate bifurcation patterns characteristic of each shrimp-shaped structure. Each shrimp features a central body, generally representing stable dynamics, followed by an infinite sequence of bifurcations that adhere to the pattern k × 2^n, where k denotes the period of the central body. These bifurcations give rise to increasingly complex and chaotic dynamics, showcasing the rich and multifaceted behavior of the Ikeda map as the parameters are varied. The detailed bifurcation sequences provide valuable insights into the system’s transition from regular to chaotic dynamics, highlighting the underlying complexity and sensitivity of the parameter space. § CONCLUSION This study explores the rich dynamical behavior of the Ikeda map, a nonlinear system originally designed to model light in a nonlinear optical cavity. Despite its seemingly simple form, the Ikeda map exhibits a complex array of dynamical phenomena, including periodic orbits, chaotic attractors, and intricate bifurcation structures. Through our analysis, we have demonstrated that the Ikeda map features a diverse range of dynamical regimes depending on the values of the dissipation parameters u_x and u_y. The phase space analysis reveals a striking transition from chaotic to periodic behavior as these parameters vary. Specifically, the map exhibits a period-doubling bifurcation cascade, characteristic of systems approaching chaos. We have successfully quantified the Feigenbaum constant δ associated with these bifurcations, finding it to be consistent with the theoretically established value of approximately 4.669201609.... This confirms the universal nature of the bifurcation cascade observed in the Ikeda map and aligns with the behaviors seen in other dynamical systems. 
Furthermore, the identification of shrimp-shaped structures within the parameter space provides a compelling visualization of the map's dynamical complexity. These structures represent intricate domains of stability interspersed with chaotic regions, underscoring the interplay between order and chaos in the map's dynamics. Our use of Lyapunov exponents as a tool for distinguishing chaotic from regular regions has proven effective, offering a clear method for analyzing the map's behavior.

Overall, this work highlights the Ikeda map's utility as a model for studying chaotic systems and bifurcation phenomena. The observed transitions between periodic and chaotic behaviors, along with the rich parameter space structures, illustrate the map's capability to capture the essence of dynamical complexity. Future research could extend these findings by exploring other parameter ranges or by examining the influence of additional nonlinearities, further enhancing our understanding of chaos and stability in nonlinear dynamical systems.

§ ACKNOWLEDGMENTS
I would like to dedicate this work to the memory of Prof. Dr. Jason Alfredo Carlson Gallas, whom I had the privilege of meeting during my time as a Postdoctoral Researcher at the Institute of Multiscale Simulation, Friedrich-Alexander-Universität Erlangen-Nürnberg, under the supervision of Prof. Dr. Thorsten Pöschel in 2012.

§ REFERENCES
[Lorenz(1963)] E. N. Lorenz, Deterministic nonperiodic flow, Journal of Atmospheric Sciences 20, 130–141 (1963).
[Gallas(1993)] J. A. Gallas, Structure of the parameter space of the Hénon map, Physical Review Letters 70, 2714 (1993).
[May(1976)] R. M. May, Simple mathematical models with very complicated dynamics, Nature 261, 459–467 (1976).
[Strogatz(2018)] S. H. Strogatz, Nonlinear Dynamics and Chaos: with Applications to Physics, Biology, Chemistry, and Engineering (CRC Press, 2018).
[Guckenheimer and Holmes(2013)] J. Guckenheimer and P. Holmes, Nonlinear Oscillations, Dynamical Systems, and Bifurcations of Vector Fields, Vol. 42 (Springer Science & Business Media, 2013).
[Moon and Holmes(1979)] F. Moon and P. J. Holmes, A magnetoelastic strange attractor, Journal of Sound and Vibration 65, 275–296 (1979).
[Kamphorst and de Carvalho(1999)] S. O. Kamphorst and S. P. de Carvalho, Bounded gain of energy on the breathing circle billiard, Nonlinearity 12, 1363 (1999).
[Oliveira and Leonel(2010)] D. F. Oliveira and E. D. Leonel, On the dynamical properties of an elliptical–oval billiard with static boundary, Communications in Nonlinear Science and Numerical Simulation 15, 1092–1102 (2010).
[Oliveira and Robnik(2012)] D. F. Oliveira and M. Robnik, Scaling invariance in a time-dependent elliptical billiard, International Journal of Bifurcation and Chaos 22, 1250207 (2012).
[Pustyl'nikov(1995)] L. D. Pustyl'nikov, Construction of periodic solutions in an infinite system of Fermi–Pasta–Ulam ordinary differential equations, stability, and KAM theory, Russian Mathematical Surveys 50, 449 (1995).
[Sinai(1970)] Y. G. Sinai, Dynamical systems with elastic reflections, Russian Mathematical Surveys 25, 137 (1970).
[Bunimovich(1979)] L. A. Bunimovich, On the ergodic properties of nowhere dispersing billiards, Communications in Mathematical Physics 65, 295–312 (1979).
[Robnik(1983)] M. Robnik, Classical dynamics of a family of billiards with analytic boundaries, Journal of Physics A: Mathematical and General 16, 3971 (1983).
[Oliveira, Vollmer, and Leonel(2011)] D. F. Oliveira, J. Vollmer, and E. D. Leonel, Fermi acceleration and its suppression in a time-dependent Lorentz gas, Physica D: Nonlinear Phenomena 240, 389–396 (2011).
[Oliveira and Leonel(2012a)] D. F. Oliveira and E. D. Leonel, In-flight and collisional dissipation as a mechanism to suppress Fermi acceleration in a breathing Lorentz gas, Chaos: An Interdisciplinary Journal of Nonlinear Science 22 (2012).
[Leonel and McClintock(2005)] E. D. Leonel and P. McClintock, A hybrid Fermi–Ulam-bouncer model, Journal of Physics A: Mathematical and General 38, 823 (2005).
[Oliveira, Bizao, and Leonel(2009)] D. F. Oliveira, R. A. Bizao, and E. D. Leonel, Scaling properties of a hybrid Fermi–Ulam-bouncer model (2009).
[Oliveira, Leonel, and Robnik(2011)] D. F. Oliveira, E. D. Leonel, and M. Robnik, Boundary crisis and transient in a dissipative relativistic standard map, Physics Letters A 375, 3365–3369 (2011).
[Oliveira, Silva, and Leonel(2015)] D. F. Oliveira, M. R. Silva, and E. D. Leonel, A symmetry break in energy distribution and a biased random walk behavior causing unlimited diffusion in a two dimensional mapping, Physica A: Statistical Mechanics and its Applications 436, 909–915 (2015).
[Page et al.(2020)] G. Page, C. Antoine, C. P. Dettmann, and J. Talbot, The iris billiard: Critical geometries for global chaos, Chaos: An Interdisciplinary Journal of Nonlinear Science 30 (2020).
[Ikeda(1979)] K. Ikeda, Multiple-valued stationary state and its instability of the transmitted light by a ring cavity system, Optics Communications 30, 257–261 (1979).
[Ikeda, Daido, and Akimoto(1980)] K. Ikeda, H. Daido, and O. Akimoto, Optical turbulence: chaotic behavior of transmitted light from a ring cavity, Physical Review Letters 45, 709 (1980).
[Watanabe and Strogatz(1994)] S. Watanabe and S. H. Strogatz, Constants of motion for superconducting Josephson arrays, Physica D: Nonlinear Phenomena 74, 197–253 (1994).
[Linsay(1981)] P. S. Linsay, Period doubling and chaotic behavior in a driven anharmonic oscillator, Physical Review Letters 47, 1349 (1981).
[Feigenbaum(1978)] M. J. Feigenbaum, Quantitative universality for a class of nonlinear transformations, Journal of Statistical Physics 19, 25–52 (1978).
[Feigenbaum(1979)] M. J. Feigenbaum, The universal metric properties of nonlinear transformations, Journal of Statistical Physics 21, 669–706 (1979).
[Hanias, Avgerinos, and Tombras(2009)] M. Hanias, Z. Avgerinos, and G. Tombras, Period doubling, Feigenbaum constant and time series prediction in an experimental chaotic RLD circuit, Chaos, Solitons & Fractals 40, 1050–1059 (2009).
[Chen et al.(2012)] H.-K. Chen, L.-J. Sheu, L.-M. Tam, and S.-K. Lao, A new finding of the existence of Feigenbaum's constants in the fractional-order Chen–Lee system, Nonlinear Dynamics 68, 589–599 (2012).
[Gaspard, Kapral, and Nicolis(1984)] P. Gaspard, R. Kapral, and G. Nicolis, Bifurcation phenomena near homoclinic systems: A two-parameter analysis, Journal of Statistical Physics 35, 697–727 (1984).
[Rössler et al.(1989)] J. Rössler, M. Kiwi, B. Hess, and M. Markus, Modulated nonlinear processes and a novel mechanism to induce chaos, Physical Review A 39, 5954 (1989).
[Komuro et al.(1991)] M. Komuro, R. Tokunaga, T. Matsumoto, L. Chua, and A. Hotta, Global bifurcation analysis of the double scroll circuit, International Journal of Bifurcation and Chaos 1, 139–182 (1991).
[Vitolo, Glendinning, and Gallas(2011)] R. Vitolo, P. Glendinning, and J. A. Gallas, Global structure of periodicity hubs in Lyapunov phase diagrams of dissipative flows, Physical Review E 84, 016216 (2011).
[Gallas(1995)] J. Gallas, Structure of the parameter space of a ring cavity, Applied Physics B: Lasers and Optics 60, S203 (1995).
[Gallas(1994)] J. A. Gallas, Dissecting shrimps: results for some one-dimensional physical models, Physica A: Statistical Mechanics and its Applications 202, 196–223 (1994).
[Hunt et al.(1999)] B. R. Hunt, J. A. Gallas, C. Grebogi, J. A. Yorke, and H. Koçak, Bifurcation rigidity, Physica D: Nonlinear Phenomena 129, 35–56 (1999).
[Bonatto, Garreau, and Gallas(2005)] C. Bonatto, J. C. Garreau, and J. A. Gallas, Self-similarities in the frequency-amplitude space of a loss-modulated CO2 laser, Physical Review Letters 95, 143905 (2005).
[Oliveira and Leonel(2013)] D. F. Oliveira and E. D. Leonel, Some dynamical properties of a classical dissipative bouncing ball model with two nonlinearities, Physica A: Statistical Mechanics and its Applications 392, 1762–1769 (2013).
[Oliveira and Leonel(2012b)] D. F. Oliveira and E. D. Leonel, Dynamical properties for the problem of a particle in an electric field of wave packet: Low velocity and relativistic approach, Physics Letters A 376, 3630–3637 (2012).
[Oliveira and Leonel(2014)] D. F. Oliveira and E. D. Leonel, Statistical and dynamical properties of a dissipative kicked rotator, Physica A: Statistical Mechanics and its Applications 413, 498–514 (2014).
[Lorenz(2008)] E. N. Lorenz, Compound windows of the Hénon map, Physica D: Nonlinear Phenomena 237, 1689–1704 (2008).
[Maranhao et al.(2008)] D. M. Maranhao, M. S. Baptista, J. C. Sartorelli, and I. L. Caldas, Experimental observation of a complex periodic window, Physical Review E 77, 037202 (2008).
[Stoop, Benner, and Uwate(2010)] R. Stoop, P. Benner, and Y. Uwate, Real-world existence and origins of the spiral organization of shrimp-shaped domains, Physical Review Letters 105, 074102 (2010).
[Stoop et al.(2012)] R. Stoop, S. Martignoli, P. Benner, R. L. Stoop, and Y. Uwate, Shrimps: occurrence, scaling and relevance, International Journal of Bifurcation and Chaos 22, 1230032 (2012).
[Viana et al.(2010)] E. R. Viana, R. M. Rubinger, H. A. Albuquerque, A. G. de Oliveira, and G. M. Ribeiro, High-resolution parameter space of an experimental chaotic circuit, Chaos: An Interdisciplinary Journal of Nonlinear Science 20 (2010).
[Diego and Leonel(2011)] F. M. O. Diego and E. D. Leonel, Parameter space for a dissipative Fermi–Ulam model, New Journal of Physics 8 (2011).
[Eckmann and Ruelle(1985)] J.-P. Eckmann and D. Ruelle, Ergodic theory of chaos and strange attractors, Reviews of Modern Physics 57, 617 (1985).
[Baptista and Caldas(1997)] M. d. S. Baptista and I. L. Caldas, The parameter space structure of the kicked logistic map and its stability, International Journal of Bifurcation and Chaos 7, 447–457 (1997).
[Mackay and Tresser(1986)] R. S. Mackay and C. Tresser, Transition to topological chaos for circle maps, Physica D: Nonlinear Phenomena 19, 206–237 (1986).
[Celestino et al.(2014)] A. Celestino, C. Manchein, H. A. Albuquerque, and M. W. Beims, Stable structures in parameter space and optimal ratchet transport, Communications in Nonlinear Science and Numerical Simulation 19, 139–149 (2014).
[Hansen et al.(2014)] M. Hansen, D. R. da Costa, D. F. Oliveira, and E. D. Leonel, Statistical properties for a dissipative model of relativistic particles in a wave packet: A parameter space investigation, Applied Mathematics and Computation 238, 387–392 (2014).
http://arxiv.org/abs/2408.12110v1
20240822035139
Pareto Inverse Reinforcement Learning for Diverse Expert Policy Generation
[ "Woo Kyung Kim", "Minjong Yoo", "Honguk Woo" ]
cs.LG
[ "cs.LG" ]
Pareto Inverse Reinforcement Learning for Diverse Expert Policy Generation
Woo Kyung Kim, Minjong Yoo, Honguk Woo

§ ABSTRACT
Data-driven offline reinforcement learning and imitation learning approaches have been gaining popularity in addressing sequential decision-making problems. Yet, these approaches rarely consider learning Pareto-optimal policies from a limited pool of expert datasets. This becomes particularly marked due to practical limitations in obtaining comprehensive datasets for all preferences, where multiple conflicting objectives exist and each expert might hold a unique optimization preference for these objectives. In this paper, we adapt inverse reinforcement learning (IRL) by using reward distance estimates for regularizing the discriminator. This enables progressive generation of a set of policies that accommodate diverse preferences on the multiple objectives, while using only two distinct datasets, each associated with a different expert preference. In doing so, we present a Pareto IRL framework () that establishes a Pareto policy set from these limited datasets. In the framework, the Pareto policy set is then distilled into a single, preference-conditioned diffusion model, thus allowing users to immediately specify which expert's patterns they prefer. Through experiments, we show that outperforms other IRL algorithms for various multi-objective control tasks, achieving a dense approximation of the Pareto frontier. We also demonstrate the applicability of with autonomous driving in CARLA.

§ INTRODUCTION
In decision-making scenarios, each expert might have her own preference over multiple, possibly conflicting objectives (multi-objectives). Accordingly, learning Pareto-optimal policies in multi-objective environments has been considered essential and practical to provide users with a selection of diverse expert-level policies that can cater to their specific preferences (e.g., <cit.>). However, in the area of imitation learning, such multi-objective problems have not been fully explored, owing to the requirement for comprehensive expert datasets encompassing the full range of multi-objective preferences (e.g., <cit.>), which might be unattainable in real-world scenarios. In the ideal scenario depicted on the left side of Figure <ref>, having comprehensive expert datasets encompassing diverse multi-objective preferences enables the straightforward derivation of a Pareto policy set by reconstructing policies from each dataset. However, this is often not feasible in real-world situations where datasets might not represent all preferences. This common limitation is illustrated on the right side of Figure <ref>.
Here, one typically has access to only two distinct datasets, each reflecting different multi-objective preferences. In such limited dataset cases, a viable approach involves merging these datasets in varying proportions, followed by the application of imitation learning on each blended dataset. However, this approach often leads to a collection of non-Pareto-optimal policies, as demonstrated in Section <ref>. In this paper, we address the challenges of multi-objective imitation learning in situations with strictly limited datasets, specifically focusing on Pareto policy set generation. Our goal is to derive optimal policies that conform with diverse multi-objective preferences, even in the face of limited datasets regarding these preferences. To do so, we investigate inverse reinforcement learning (IRL) and present a Pareto IRL () framework in which a Pareto policy set corresponding to the best compromise solutions over multi-objectives can be induced. This framework is set in a similar context to conventional IRL where reward signals are not from the environment, but it is intended to obtain a dense set of Pareto policies rather than an individually imitated policy. In , we exploit a recursive IRL structure to find a Pareto policy set progressively in a way that at each step, nearby policies can be derived between the policies of the previous step. Specifically, we adapt IRL using reward distance regularization; new policies are regularized based on reward distance estimates to be balanced well between distinct datasets, while ensuring the regret bounds of each policy. This recursive IRL is instrumental in achieving the dense approximation of a Pareto policy set. Through distillation of the approximated Pareto policy set to a single policy network, we build a diffusion-based model, which is conditioned on multi-objective preferences. This distillation not only enhances the Pareto policy set but also integrates it into a single unified model, thereby facilitating the zero-shot adaptation to varying and unseen preferences. The contributions of our work are summarized as follows. * We introduce the framework to address a novel challenge of imitation learning, Pareto policy set generation from strictly limited datasets. * We devise a recursive IRL scheme with reward distance regularization to generate policies that extend beyond the datasets, and we provide a theoretical analysis on their regret bounds. * We present a preference-conditioned diffusion model to further enhance the approximated policy set on unseen preferences. This allows users to dynamically adjust their multi-objective preferences at runtime. * We verify with several multi-objective environments and autonomous driving scenarios, demonstrating its superiority for Pareto policy set generation. * is the first to tackle the data limitation problem for Pareto policy set generation within the IRL context. § PRELIMINARIES AND PROBLEM FORMULATION §.§ Background Multi-Objective RL (MORL). A multi-objective Markov decision process (MOMDP) is formulated with multiple reward functions, each associated with an individual objective. (, , , , Ω, f, γ) Here, s ∈ is a state space, a ∈ is an action space, : ××→ [0,1] is a transition probability, and γ∈ [0, 1] is a discount factor. MOMDP incorporates a vector of m reward functions = [r_1, ..., r_m] for : ××→, a set of preference vectors Ω⊂^m, and a linear preference function f(𝐫, ω) = ω^Tr where ω∈Ω. 
The goal of MORL is to find a set of Pareto polices π^* ∈ for an MOMDP environment, where π^* maximizes scalarized returns, i.e., max _π_a ∼π(·|s) [∑_t=1^Hγ^tf(r,ω)]. Inverse RL (IRL). Given an expert dataset ^*={τ_i}_i=1^n, where each trajectory τ_i is represented as a sequence of state and action pairs {(s_t,a_t)}_t=1^T, IRL aims to infer the reward function of the expert policy, thus enabling the rationalization of its behaviors. Among many, the adversarial IRL algorithm (AIRL) casts IRL into a generative adversarial problem <cit.> with such discriminator as D(s,a,s') = exp(r̃(s,a,s'))/exp(r̃(s,a,s')) + π(a|s) where s' ∼(s, a, ·) and r̃ is a inferring reward function. The discriminator is trained to maximize the cross entropy between expert dataset and dataset induced by the policy via max [ 𝔼_(s,a)∼_π [log (1 - D(s,a,s'))] + 𝔼_(s,a) ∼^* [ log D(s,a,s')]] where _π is the dataset induced by learning policy π. The generator of AIRL corresponds to π, which is trained to maximize the entropy-regularized reward function such as log(D(s,a,s')) - log(1 - D(s,a,s')) = r̃(s,a,s') - logπ(a|s). §.§ Formulation of Pareto IRL We specify the Pareto IRL problem which derives a Pareto policy set from strictly limited datasets. Consider M distinct expert datasets ^* = {^*_i}_i=1^M where each expert dataset ^*_i is collected from the optimal policy on some reward function r_ = ω_i^T𝐫 with a fixed preference ω_i ∈Ω. Furthermore, we assume that each dataset ^*_i distinctly exhibits dominance on a particular reward function r_i. In the following, we consider scenarios with two objectives (M=2), and later discuss the generalization for three or more objectives in Appendix A.4. Given two distinct datasets, in the context of IRL, we refer to Pareto policy set derivation via IRL as Pareto IRL. Specifically, it aims at inferring a reward function r̃ and learning a policy π for any preference ω from the strictly limited datasets ^*. That is, when exploiting limited expert datasets in a multi-objective environment, we focus on establishing the Pareto policy set effectively upon unknown reward functions and preferences. Figure <ref> briefly illustrates the concept of Pareto IRL, where a self-driving task involves different preferences on two objectives, possibly conflicting, such as driving speed and energy efficiency. Consider two distinct expert datasets, where each expert has her own preference settings for the driving speed and energy efficiency objectives (e.g., ^*_1 and ^*_2 involve one dominant objective differently). While it is doable to restore a single useful policy individually from one given expert dataset, our work addresses the issue to generate a set of policies which can cover a wider range of preferences beyond given datasets. The policies are capable of rendering optimal compromise returns, denoted by dotted circles in the figure, and they allow users to immediately select the optimal solution according to their preference and situation. For an MOMDP with a set of preference vectors ω∈Ω, a vector of reward functions 𝐫, and a preference function f in (<ref>), Pareto policy set generation is to find a set of multi-objective policies such as Π = {π | R_f(𝐫, ω)(π) ≥ R_f(𝐫, ω)(π'), ∀π', ∃ ω∈Ω} for M expert preference datasets {^*_i}_i=1^M. R_r(π) represents returns induced by policy π on reward function r. Neither a vector of true reward functions 𝐫 is explicitly revealed, nor the rewards signals are annotated in the expert datasets, similar to conventional IRL scenarios. 
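Before describing the framework, the following is a minimal sketch (not the authors' implementation) of the AIRL discriminator and objectives recalled in the preliminaries, computed in log space for numerical stability. The arrays r_tilde and log_pi stand for the outputs of an arbitrary reward model r̃ and policy π on a batch of transitions; all names are illustrative.

import numpy as np

def airl_log_D(r_tilde, log_pi):
    # D(s,a,s') = exp(r~) / (exp(r~) + pi(a|s));  log D = r~ - logaddexp(r~, log pi)
    return r_tilde - np.logaddexp(r_tilde, log_pi)

def discriminator_loss(r_exp, logpi_exp, r_pol, logpi_pol):
    # Maximise E_expert[log D] + E_policy[log(1 - D)]; the negative is returned for minimisation.
    log_D_exp = airl_log_D(r_exp, logpi_exp)                  # expert transitions
    log_1mD_pol = logpi_pol - np.logaddexp(r_pol, logpi_pol)  # log(1 - D) on policy transitions
    return -(log_D_exp.mean() + log_1mD_pol.mean())

def generator_reward(r_tilde, log_pi):
    # Entropy-regularised policy reward: log D - log(1 - D) = r~ - log pi(a|s).
    return r_tilde - log_pi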
§ OUR FRAMEWORK To obtain a Pareto policy set from strictly limited datasets, we propose the framework involving two learning phases: (1) recursive reward distance regularized IRL, (2) distillation to a preference-conditioned model. In the first phase, our approach begins with direct imitation of the given expert datasets, and then recursively finds neighboring policies that lie on the Pareto front. Specifically, we employ the reward distance regularized IRL method that incorporates reward distance regularization into the discriminator's objective to learn a robust multi-objective reward function. This regularized IRL ensures that the performance of the policy learned by the inferred multi-objective reward function remains within the bounds of the policy learned by the true reward function. By performing this iteratively, we achieve new useful policies that are not presented in the expert datasets, thus establishing a high-quality Pareto policy set. In the second phase, we distill the Pareto policy set into a preference-conditioned diffusion model. The diffusion model encapsulates both preference-conditioned and unconditioned policies, each of which is associated with the preference-specific knowledge (within a task) and the task-specific knowledge (across all preferences), respectively. Consequently, the unified policy model further enhances the Pareto policy set, rendering robust performance for arbitrary unseen preferences in a zero-shot manner. It also allows for efficient resource utilization with a single policy network. §.§ Recursive Reward Distance Regularized IRL Notation. We use superscripts g ∈{1,...,G} to denote recursive step and subscripts i ∈{1,2} to denote i-th multi-objective policies derived at each recursive step g. We consider two objectives cases in the following. Individual IRL. As shown in the Figure <ref> (1-1), the framework initiates with two separate IRL procedures, each dedicated to directly imitating one of the expert datasets. For this, we adopt AIRL <cit.> which uses the objectives (<ref>) and (<ref>) to infer reward functions {r̃_i^1}_i=1^2 and policies {π_i^1}_i=1^2 from the individual expert dataset ^*_i ∈^*. Reward distance regularized IRL. Subsequently, as shown in the Figure <ref> (1-2), at each recursive step g ≥ 2, we derive new multi-objective reward functions {r̃_i^g}_i=1^2 and respective policies {π^g_i}_i=1^2 that render beyond the given datasets. To do so, a straightforward approach might involve conducting IRL iteratively by blending the expert datasets at different ratios. However, as illustrated in Figure <ref>, the resulting policies tend to converge towards some weighted mean of datasets, rather than fully exploring non-dominant optimal actions beyond simple interpolation of given expert actions. To address the problem, we present a reward distance regularized IRL on datasets ^g-1 = {_i^g-1}_i=1^2 collected from the policies derived at the previous step. Given a reward distance metric d(r, r'), we compute the distance between the newly derived reward function r̃^g_i and previously derived reward functions 𝐫̃^g-1 = [r̃^g-1_1, r̃^g-1_2]. Further, we define target distances as a vector ϵ^g_i = [ϵ^g_i,1, ϵ^g_i,2] to constrain each of the corresponding measured reward distances. Then, we define a reward distance regularization term as I(r̃_i^g,𝐫̃^g-1) =∑_j=1^2 ( ϵ^g_i,j - d(r̃_i^g, r̃_j^g-1) )^2 where the subscripts i and j denote the newly derived reward function and the previously derived one, respectively. 
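A small sketch of how this regularization term can be computed is given below. The distance function is left pluggable: the paper uses the EPIC pseudometric (described later), while the Pearson-correlation distance shown here is a simplified stand-in applied directly to batched reward values, without canonicalization; the function names and arguments are illustrative assumptions.

import numpy as np

def pearson_distance(a, b):
    # Simplified reward distance on a batch of transitions:
    # d(a, b) = sqrt(1 - rho(a, b)) / sqrt(2), with rho the Pearson correlation.
    rho = np.corrcoef(a, b)[0, 1]
    return np.sqrt(max(1.0 - rho, 0.0)) / np.sqrt(2.0)

def distance_regularizer(r_new, r_prev_list, targets, dist=pearson_distance):
    # I(r_i^g, r^{g-1}) = sum_j (eps_{i,j}^g - d(r_i^g, r_j^{g-1}))^2
    return sum((eps - dist(r_new, r_prev)) ** 2
               for eps, r_prev in zip(targets, r_prev_list))

# The term is then added to the discriminator loss with weight beta, e.g.
# loss = adversarial_loss + beta * distance_regularizer(r_new, r_prev_list, targets)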
Finally, we incorporate (<ref>) into the discriminator objective (<ref>) as max 𝔼_(s,a) ∼𝒯_π_i^g [ log ( 1 - D(s,a,s') )] + 𝔼_(s,a) ∼^g-1 [log D(s,a,s')] - β· I(r̃_i^g,𝐫̃^g-1) where β is a hyperparameter. This allows the discriminator to optimize a multi-objective reward function for a specific target distance across datasets. The reward distance regularized IRL procedure is performed twice with different target distances, to derive policies adjacent to each of the previously derived policies. Furthermore, we fork the new regularized IRL procedure with the previously one (that is adjacent) to enhance the efficiency and robustness in learning. The choice of the target distance is crucial, as the regret of a multi-objective policy is bounded under the reward distance (<ref>). Thus, we set the sum of the target distances as small as possible. As the reward distance metrics satisfy the triangle inequality d(r̃_1^g-1,r̃_2^g-1) ≤ d(r̃_i^g,r̃_1^g-1) + d(r̃_i^g,r̃_2^g-1), we limit the sum of target distances to ϵ̂^g_i = ∑_j=1^2 ϵ^g_i,j = d(r̃_1^g-1,r̃_2^g-1). In practice, we assign a small constant value for one of the target distances ϵ^g_i,i, while the other is determined as ϵ̂^g_i - ϵ^g_i,i. By doing so, we are able to effectively derive a new policy that is adjacent to one of the previous policies. Any reward distance metric that guarantees the regret bounds of policy can be used for . In our implementation, we adopt EPIC, also known as equivalent policy invariant comparison pseudometric <cit.>, which quantitatively measures the distance between two reward functions. The learning procedure of recursive reward distance regularized IRL is summarized in Algorithm <ref>. In Appendix A.4, we discuss the generalization of reward distance regularization for more than two objectives (M ≥ 3). §.§ Regret Bounds of Reward Distance Regularized Policy We provide an analysis of the regret bounds of a reward distance regularized policy. Let r̃ be our learned reward function and π^*_r be the optimal policy with respect to reward function r. Suppose that there exists a (ground truth) multi-objective reward function r_ = ω^T 𝐫 with preference ω = [ω_1,ω_2]. With the linearity of r_, we obtain R_r_(π_r_^*) - R_r_(π_r̃^*) = ∑_i=1^2 ω_i (R_r̃_i(π_r_^*) - R_r̃_i(π_r̃^*)) ≤∑_i=1^2 ω_i (R_r̃_i(π_r̃_i^*) - R_r̃_i(π_r̃^*)). Let 𝒟 be the distribution over transitions S × A × S used to compute EPIC distance d_ϵ, and 𝒟_π, t be the distribution over transitions on timestep t induced by policy π. Using Theorem A.16 in <cit.>, we derive that for α≥ 2, (<ref>) is bounded by the sum of individual regret bounds, i.e., ∑_i=1^2 ω_i (R_r̃_i (π_r̃_i^*) - R_r̃_i(π_r̃^*)) ≤∑_i=1^2 16 ω_i ‖r̃_i ‖_2(K d_ϵ (r̃, r̃_i) + LΔ_α(r̃) ) where L is a constant, K = α / (1-γ), Δ_α(r̃) = ∑_t=0^Tγ^t W_α(𝒟_π^*_r̃, t, 𝒟), and W_α is the relaxed Wasserstein distance <cit.>. Consequently, we obtain R _r_mo (π_r_mo^*)- R_r_mo(π_r̃^*) ≤ 32Kr__2 (∑_i=1^2 [ ω_i d_ϵ (r̃, r̃_i) ] + L/KΔ_α(r̃) ). As such, the regret bounds of our learned policy π on reward function r̃ are represented by the regularization term based on EPIC along with the differences between the respective distributions of transitions generated by π^*_r̃ and the distribution 𝒟 used to compute EPIC distance. This ensures that the regret bounds of π can be directly optimized by using (<ref>). In our implementation, instead of directly multiplying the preference ω to the loss function, we reformulate the preference into the target distance to balance the distance better. 
The details with proof can be found in Appendix A.2. §.§ Preference-conditioned Diffusion Model To further enhance the Pareto policy set obtained in the previous section, we leverage diffusion models <cit.>, interpolating and extrapolating policies via distillation. We first systematically annotate with preferences ω∈Ω in an ascending order. We then train a diffusion-based policy model, which is conditioned on these preferences; i.e., π_u(a|s,ω) = 𝒩(a^K;0,I) ∏_k=1^Kπ̂_u(a^k-1|a^k,k,s,ω) where superscripts k ∼ [1,K] denote the denoising timestep, a^0 (=a) is the original action, and a^k-1 is a marginally denoised version of a^k. The diffusion model is designed to predict the noise from a noisy input a^k=√(α̅^k)a + √(1-α̅^k)η with a variance schedule parameter α̅^k and η∼𝒩(0,I), i.e. min_(s,a) ∼{^g}_g=1^G, k∼[1,K][||π̂_u(a^k,k,s,ω) - η||_2^2] where {^g}_g=1^G is the entire datasets collected by the policies in . Furthermore, we represent the model as a combination of preference-conditioned and unconditioned policies, π̂_u (a^k,k,s,ω) := (1-δ) π̂_cond.(a^k,k,s,ω) + δπ̂_uncond.(a^k,k,s) where δ is a guidance weight. The unconditioned policy encompasses general knowledge across the approximated Pareto policies, while the conditioned one guides the action according to the specific preference. During sampling, the policy starts from a random noise and iteratively denoises it to obtain the executable action, a^k-1 = 1/√(α^k)(a^k - 1-α^k/√(1-α̅^k)π̂_u(a^k,k,s,ω)) + σ^kη where α^k and σ^k are variance schedule parameters. The diffusion model π̂_u allows for efficient resource utilization at runtime with a single policy network, and is capable of rendering robust performance for unseen preferences in a zero-shot manner. Consequently, it enhances the Pareto policy set in terms of Pareto front density , as illustrated in Figure <ref> (2). § EVALUATION §.§ Experiment Settings Environments. For evaluation, we use (1) a multi-objective car environment (MO-Car), and several multi-objective variants of MuJoCo environments used in the MORL literature <cit.> including (2) MO-Swimmer, (3) MO-Cheetah, (4) MO-Ant, and (5) MO-AntXY. For tradeoff objectives, the forward speed and the energy efficiency are used in (2)-(4), and the x-axis speed and the y-axis speed are used in (5). In these environments, similar to conventional IRL settings, reward signals are not used for training; they are used solely for evaluation. Baselines. For comparison, we implement following imitation learning algorithms: 1) DiffBC <cit.>, an imitation learning method that uses a diffusion model for the policy, 2) BeT <cit.>, an imitation learning method that integrates action discretization into the transformer architecture, 3) GAIL <cit.>, an imitation learning method that imitates expert dataset via the generative adversarial framework, 4) AIRL <cit.>, an IRL method that induces both the reward function and policy, 5) IQ-Learn <cit.>, an IRL method that learns a q-function to represent both the reward function and policy, 6) DiffAIL <cit.>, an IRL method that incorporates the diffusion loss to the discriminator's objective. To cover a wide range of different preferences, these baselines are conducted multiple times on differently augmented datasets, where each is a mixed dataset that integrates given datasets in the same ratio to a specific preference. We also include MORL <cit.> that uses explicit rewards from the environment, unlike IRL settings. It serves as Oracle (the upper bound of performance) in the comparison. Metrics. 
For evaluation, we use several multi-objective metrics <cit.>. * Hypervolume metric (HV) represents the quality in the cumulative returns of a Pareto policy set. Let ℱ be the Pareto frontier obtained from an approximated Pareto policy set for m objectives and R_0 ∈ℝ^m be a reference point for each objective. Then, HV = ∫1_H(ℱ)(z)dz where H(ℱ)={z ∈ℝ^m |∃R∈ℱ : R_0≤ z ≤R}. * Sparsity metric (SP) represents the density in the average return distance of the Pareto frontier. Let ℱ_j(i) be the i-th value in a sorted list for the j-th objective. Then, SP = 1/|ℱ| - 1∑_j=1^m∑_i=1^|ℱ| (ℱ_j(i) - ℱ_j(i+1))^2. We also use a new metric designed for Pareto IRL. * Coherence metric (CR) represents the monotonic improvement property of approximated policy set = {π_i}_i≤ N generated by two expert datasets. Let policy list (π_1, ..., π_N) be sorted in ascending order by the expected return of the policies with respect to reward function r_1, Then, CR = 2/N (N-1)∑_i=1^N∑_j=i^N1_h(i,j) where h(i,j) = R_r_1(π_i) ≤ R_r_1(π_j) and R_r_2(π_i) ≥ R_r_2(π_j). For HV and CR, higher is better, but for SP, lower is better. §.§ Performance of Pareto Set Generation Table <ref> compares the performance in the evaluation metrics (HV, SP, CR) achieved by our framework (, +DU) and other baselines (DiffBC, BeT, GAIL, AIRL, IQ-Learn, DiffAIL). is trained with the recursive reward distance regularized IRL, and +DU is enhanced through the distillation. For the baselines, the size of a preference set (with different weights) is given equally to the number of policies derived via . When calculating HV and SP, we exclude the out-of-order policies obtained from an algorithm with respect to preferences. As shown, our and +DU consistently yield the best performance for all environments, outperforming the most competitive baseline AIRL by 15.6%∼ 23.7% higher HV, 80.4%∼ 98.2% lower SP, and 21.7%∼ 22.2% higher CR on average. Furthermore, we observe an average HV gap of 9.8% between +DU and Oracle that uses the ground truth reward signals. This gap is expected, as existing IRL algorithms are also known to experience a performance drop compared to RL algorithms that directly use reward signals <cit.>. For the baselines, such performance degradation is more significant, showing an average drop of 26.9% in HV between AIRL and Oracle. +DU improves the performance in HV over by 7.0% on average, showing the distilled diffusion model achieves robustness on unseen preferences. To verify the performance of for three objectives case, we extend MO-Car to MO-Car* where the tradeoff objectives are the velocities on three different directions. Our and +DU show superiority in terms of HV, but sometimes show slightly lower performance in SP. I t is because the baselines tend to shrinks towards the low-performance region, thus yielding lower SP. As CR is defined only for two objectives cases, CR for MO-Car* is not reported. The generalization of reward distance regularization for three or more objectives is discussed in Appendix A.4. In this experiment, the baselines exhibit relatively low performance due to their primarily concentration on imitating the datasets, posing a challenge in generating policies that go beyond the limited datasets. Specifically, as DiffBC and BeT are designed to handle datasets with multiple modalities, they do not necessarily lead to the generation of novel actions. Meanwhile, the IRL baselines demonstrate relatively better performance, as they involve environment interactions. 
However, imitating from a merged dataset with specific ratio tends to converge towards the mean of existing actions, thus leading to sub-optimal performance. §.§ Analysis Pareto Visualization. Figure <ref> depicts the Pareto policy set by our and +DU as well as the baselines (DiffBC, AIRL) for MO-AntXY. The baselines often produce the non-optimal solutions, specified by the dots in the low-performance region. +DU produces the most densely spread policies, which lie on the high-performance region. Learning Efficiency. Figure <ref> depicts the learning curves in HV for MO-AntXY over recursive steps. For baselines, we intentionally set the number of policies of the baselines equal to the number of policies derived through for each step. The curves show the superiority of our recursive reward distance regularized IRL in generating the higher quality (HV) Pareto frontier. Furthermore, the recursive learning scheme significantly reduces the training time, requiring only 13%∼ 25% of training timesteps compared to the IRL baselines. This is because explores adjacent policies progressively by making explicit use of the previously derived policies to fork another regularized IRL procedure. Ablation Studies. Table <ref> provides an ablation study of with respect to the reward distance metrics and recursive learning scheme. For this, we implement /MSE and /PSD, which use mean squared error (MSE) and Pearson distance (PSD) for reward distance measures, respectively; we also implement /RC which represents without recursive learning scheme. While MSE tends to compute the exact reward distance and PSD estimates the linear correlation between rewards, EPIC accounts for the reward function distance that is invariant to potential shaping <cit.>, thus making optimize the regret bounds of a policy learned on an inferred reward function. Moreover, /RC degrades compared to , clarifying the benefit of our recursive learning scheme. Table <ref> shows the effect of our preference-conditioned diffusion model. +BC denotes distillation using the naive BC algorithm. We test +DU with varying guidance weights δ in (<ref>), ranging from 0.0 to 1.8. The results indicate that +DU improves by 6.42% at average over +BC. Employing both unconditioned and conditioned policies (δ > 0) contributes to improved performance. §.§ Case Study on Autonomous Driving To verify the applicability of our framework, we conduct a case study with autonomous driving scenarios in the CARLA simulator <cit.>. In Figure <ref>, the comfort mode agent drives slowly without switching lanes, while the sport mode agent accelerates and frequently switches lanes (indicated by dotted arrow) to overtake front vehicles (highlighted by dotted circle) ahead. Using the distinct datasets collected from these two different driving modes, generates a set of diverse custom driving policies. Specifically, as depicted in the bottom of Figure <ref>, the closer the custom agent's behavior is to the sport mode, the more it tends to switch lanes (increasing from 0 to 2) and to drive at higher speeds with lower energy efficiency. The agent in custom mode-2 balances between the comfort and sport modes well, maintaining the moderate speed and changing lanes once. § RELATED WORK Multi-objective RL. In the RL literature, several multi-objective optimization methods were introduced, aiming at providing robust approximation of a Pareto policy set. <cit.> explored Pareto policy set approximation through reward scalarization in online settings, where reward signals are provided. 
Recently, <cit.> proposed the Pareto decision transformer in offline settings, requiring a comprehensive dataset that covers all preferences. These prior works and ours share a similar goal to achieve a tradeoff-aware agent based on Pareto policy set approximation. However, different from the prior works, our work concentrates on practical situations with the strictly limited datasets and without any rewards from the environment. Inverse RL. To infer a reward function from datasets, IRL has been investigated along with adversarial schemes. <cit.> established the practical implementation of IRL based on the generative adversarial framework; which was further investigated by <cit.>. Recently, <cit.> introduced a multi-objective reward function recovery method, using a simple discrete grid-world environment. Contrarily, our targets the approximation of a Pareto policy set. Instead of exploring the linear combinations of rewards, employs the reward distance metric, and further, optimizes the performance lower bound of learned policies. Reward Function Evaluation. Reward function evaluation is considered important in the RL literature, but was not fully investigated. <cit.> first proposed the EPIC by which two reward functions are directly compared without policy optimization, and verified that the policy regret is bounded. This was extended by <cit.> for mitigating erroneous reward evaluation. However, those rarely investigated how to use such metrics for multi-objective learning. Our work is the first to conjugate reward function evaluation for Pareto policy set approximation in IRL settings. § CONCLUSION We presented the framework to induce a Pareto policy set from strictly limited datasets in terms of preference diversity. In , the recursive IRL with the reward distance regularization is employed to achieve the Pareto policy set. The set is then distilled to the preference-conditioned diffusion policy, enabling robust policy adaptation to unseen preferences and resource efficient deployment. Our framework is different from the existing IRL approaches in that they only allow for imitating an individual policy from given datasets. § ACKNOWLEDGEMENTS We would like to thank anonymous reviewers for their valuable comments. This work was supported by Institute of Information & communications Technology Planning & Evaluation (IITP) grant funded by the Korea government (MSIT) (No. 2022-0-01045, 2022-0-00043, 2021-0-00875, 2020-0-01821, 2019-0-00421) and by the National Research Foundation of Korea (NRF) grant funded by the MSIT (No. RS-2023-00213118) and by Samsung electronics. named § REWARD DISTANCE REGULARIZATION In this section, we briefly explain the EPIC distance and provide theoretical analysis on our reward distance regularized loss based on EPIC. Then, we discuss our motivation for target distance and generalize our regularized loss for more than two objectives cases. §.§ Equivalent Policy Invariant Comparison (EPIC) EPIC <cit.> is computed using the Pearson distance between two canonically shaped reward functions for independent random variables , ' sampled from state distribution 𝒟_ and sampled from action distribution 𝒟_. Then, EPIC distance d_ϵ between two reward functions r and r' is calculated by d_ϵ (r,r') = d_ρ (C_𝒟_S,𝒟_A(r) (S,A,S'), C_𝒟_S, 𝒟_A(r')(S,A,S')) where d_ρ(X,Y) = √(1 - ρ(X,Y)) / √(2) and ρ(X,Y) is the Pearson correlation between random variables X and Y. 
The canonicalized reward function C is defined as C _𝒟_S, 𝒟_A (r)(s,a,s') = r(s,a,s') + 𝔼 [ γ r(s',A, S') - r(s,A,S') - γ r(S,A,S') ]. §.§ Regret Bounds of Reward Distance Regularization Based on EPIC In this section, we present the proof of regret bounds for the policy learned through our reward distance regularization based on EPIC. We start by Lemma <ref>, where we show the relaxed Wasserstien distance W_α equals to 0 between a distribution 𝒟_i and 1/m∑_i=1^m 𝒟_i. Next, in Theorem <ref>, we prove the regret bounds in terms of the EPIC distance between the reward functions and W_α. Let 𝒟_1, ..., 𝒟_m be arbitrary distributions over transitions S × A × S. For α≥ m and i ∈{1,...,m}, W_α(𝒟_i, (𝒟_1 + ... + 𝒟_m) / m) = 0 where W_α is relaxed Wasserstein distance (Definition A.13 in <cit.>). For simplicity, We denote 𝒟 = (𝒟_1 + ... + 𝒟_m) / m. By the definition of relaxed Wasserstein distance, W_α(𝒟_i, 𝒟) = inf_p∈Γ_α(𝒟_i, 𝒟)∫_S× S x-y dp(x,y) where Γ_α(𝒟_i, 𝒟) is a set of probability measures on S × S satisfying ∫_S p(x, y)dy = 𝒟_i(x), ∫_S p(x, y)dx ≤α𝒟(y) for all x, y ∈ S. Let the set S_D = {(x, x) | x ∈ S} be a diagonal set on S × S. For function f: S → S_D such as f(x) = (x, x), the Borel probability measure μ:S_D →ℝ is defined as 𝒟_i ∘ f^-1. Furthermore, for all Borel sets X ∈ S × S, the Borel probability measure p on S × S is defined as p(X) = μ(X ∩ S_D) <cit.>. Then, ∫_S p(x, y)dy = 𝒟_i(x), ∫_S p(x, y)dx = 𝒟_i(y) hold for all x, y ∈ S. Since 𝒟_i is non-negative and finite, for all i ∈{1,...m}, we obtain ∫_S p(x, y)dx = 𝒟_i(y) ≤ m ·𝒟(y). Thus, the relaxed Wasserstein distance between 𝒟_i and 𝒟 is equal to ∫_S× S x-y dp(x,y) = ∫_S_D x - y dp(x, y) + ∫_S_D^c x - y dp(x, y) = 0 where S_D^c = S × S ∖ S_D. Let 𝒟 be the distribution over transitions S × A × S that is used to compute EPIC distance d_ϵ, and let 𝒟_π, t be the distribution over the transitions on timestep t induced by policy π. Let r̃ be our learned reward function, and let π^*_r be the optimal policy with respect to reward function r. Suppose that there exists a (ground truth) multi-objective function r_ = ω^T 𝐫 with preference ω = [ω_1, ..., ω_m]. For α≥ m, the regret bounds of π^*_r̃ at most correspond to R _r_mo(π_r_mo^*)- R_r_mo(π_r̃^*) ≤ 16mKr__2 (∑_i=1^n [ ω_i d_ϵ (r̃, r̃_i) ] + L/KΔ_α(r̃) ) where Δ_α(r̃) = ∑_t=0^Tγ^t W_α(𝒟_π^*_r̃, t, 𝒟) and K = α / (1-γ). According to Theorem A.16 in <cit.>, for any α≥ 1, the regret bounds of a policy π_r_i^* for reward function r_j are calculated by R _r_j(π_r_j^*) - R_r_j(π_r_i^*) ≤ 16 ‖ r_j ‖_2(α/1-γ d_ϵ (r_i, r_j) + L∑_t=0^Tγ^t B_α (t)) where R_r(π) denotes returns of policy π on reward function r, and reward functions r_i, r_j are L-lipschitz continuous on the L_1 norm. With the linearity of r_, we obtain R_r_(π_r_^*) - R_r_(π_r̃^*) = ∑_i=1^m ω_i (R_r̃_i(π_r_^*) - R_r̃_i(π_r̃^*)) ≤∑_i=1^m ω_i (R_r̃_i(π_r̃_i^*) - R_r̃_i(π_r̃^*)). By (<ref>), then, the last term in (<ref>) is bounded by the sum of individual regret bounds, i.e., ∑_i=1^m ω_i (R_r̃_i(π_r̃_i^*) - R_r̃_i(π_r̃^*)) ≤∑_i=1^m16 ω_i ‖r̃_i ‖_2(K d_ϵ (r̃, r̃_i) + L∑_t=0^Tγ^t B_α (t) ) where K = α / (1-γ). Since 𝒟 is equivalent to the distribution over transitions induced by π^*_r̃_1, ..., π^*_r̃_m in our sampling procedure, B_α can be simplified in B_α(t) = max_π∈{π_r̃_i^*, π_r̃^*} W_α(𝒟_π, t, 𝒟) = W_α(𝒟_π^*_r̃, t, 𝒟) for α≥ m by Lemma <ref>. For simplicity, we use the episodic cumulative Wasserstein distance Δ_α(r̃) = ∑_t=0^T γ^t B_α(t). In practice, reward function r̃_i is bounded by some constant. 
Consequently, we obtain R _r_(π_r_^*)- R_r_(π_r̃^*) ≤ 16mK r_(∑_i=1^n [ ω_i d_ϵ (r̃, r̃_i) ] + L/KΔ_α(r̃) ). This ensures that the regret bounds of π^*_r̃ with respect to r_ can be directly optimized by using the loss (11) in the main manuscript. §.§ Motivation for Target Distance Here we discuss our motivation for the target distance ϵ^g mentioned in (6)-(8) of the main manuscript. A straightforward approach for incorporating the reward distance regularized loss is to use the weighted sum loss of preference weight and reward distances. However, we observe that the weighted sum loss frequently leads to unstable learning when targeting to balance between the reward distances. Thus, we take a different approach, using the target distance, in a way that the target distance is used as the target for the reward distances. Specifically, as the triangle inequality of reward distance metrics, the sum of the target distance cannot exceed the distance between the reward functions derived from the previous step. Thus, by setting the sum of the target distance as defined in (8) in the main manuscript and using the L2 loss between the target and reward distances, we are able to stabilize the learning procedure in . Furthermore, we deliberately set one of the target distances (specifically, ϵ^g_i,i) to be as small as possible. This allows for gradual interpolation between adjacently learned policies. Table <ref> demonstrates the effectiveness of the target distance, showing 8.85% gain in HV over the naive approach that directly uses the reward distance metric in the form of the weighted sum loss. §.§ Generalization of Reward Distance Regularization In this section, we extend our reward distance regularization to accommodate general cases involving more than two objectives (M ≥ 3). Similar to the two objective case, we consider the triangle inequality of the reward distance metric between the learning reward function r̃^g_i and any two arbitrary reward functions r̃^g-1_k, r̃^g-1_l∈𝐫̃^g-1 derived in the previous step, d(r̃^g_i, r̃^g-1_k) + d(r̃^g_i, r̃^g-1_l) ≥ d(r̃^g-1_k, r̃^g-1_l). By leveraging this inequality, we limit the sum of target distances as ϵ̂_i^g = ∑_j=1^Mϵ_i,j^g = 1/M-1∑_k=1^M∑_l=1^M d(r̃_k^g-1,r̃_l^g-1). Note that M is number of objectives, which is equivalent to the number of given expert datasets. For example, when M=3, let the three reward functions derived in the previous step be r̃^g-1_1,r̃^g-1_2 and r̃^g-1_3. Using the triangle inequality in (<ref>), we establish the following set of inequalities for the newly derived one r̃^g_i d(r̃^g_i, r̃^g-1_1) + d(r̃^g_i, r̃^g-1_2) ≥ d(r̃^g-1_1, r̃^g-1_2) d(r̃^g_i, r̃^g-1_2) + d(r̃^g_i, r̃^g-1_3) ≥ d(r̃^g-1_2, r̃^g-1_3) d(r̃^g_i, r̃^g-1_3) + d(r̃^g_i, r̃^g-1_1) ≥ d(r̃^g-1_3, r̃^g-1_1). Then, by combining all the inequalities, we obtain d (r̃^g_i, r̃^g-1_1) + d(r̃^g_i, r̃^g-1_2) + d(r̃^g_i, r̃^g-1_3) ≥1/2( d(r̃^g-1_1, r̃^g-1_2) +(r̃^g-1_2, r̃^g-1_3) + d(r̃^g-1_3, r̃^g-1_1) ). Finally, the right-hand side of (<ref>) above is set to be the sum of the target distances for M=3. We evaluate the extensibility of for more than two objective in MO-Car* environment, where the result is shown in the Table 1 of the main manuscript. § BENCHMARK ENVIRONMENTS In this section, we show the details about our multi-objective environments used for evaluation. §.§ MO-Car MO-Car is a simple 1D environment, where the agent controls the car with an acceleration a∈[-1,1]. 
We configure the two objectives as forward speed and energy efficiency, r_1 = 0.05 × v r_2 = 0.3 - 0.15 a^2 where v is the speed. §.§ MO-Swimmer MO-Swimmer is a multi-objective variant of the MuJoCo <cit.> Swimmer environment, where an agent moves forward by applying torques on two rotors. We configure the two objectives as forward speed and energy efficiency, r_1 = v_x r_2 = 0.3 - 0.15 ∑_i a_i^2 where v_x is the speed in x direction, and a_i is the action applied to each rotors. §.§ MO-Cheetah MO-Cheetah is a multi-objective variant of the MuJoCo HalfCheetah environment, where an agent moves forward by applying torques on 6 distinct joints of front and back legs. We configure the two objectives as forward speed and energy efficiency, r_1 = min(v_x,4) r_2 = 4 - ∑_i a_i^2 where v_x is the speed in x direction, and a_i is the action applied to each joints. §.§ MO-Ant MO-Ant is a multi-objective variant of the MuJoCo Ant environment, where an agent moves forward by applying torques on 8 distinct rotors of 4 legs. We configure the two objectives as forward speed and energy efficiency, r_1 = v_x r_2 = 4 - ∑_i a_i^2 where v_x is the speed in x direction, and a_i is the action applied to each rotors. §.§.§ MO-AntXY MO-Ant is another multi-objective variant of the MuJoCo Ant environment. We configure the two objectives as x-axis speed and y-axis speed, r_1 = v_x + C r_2 = v_y + C where C=2∑_i a_i^2 is the energy efficiency, v_x is the speed in x direction, v_y is the speed in y direction, and a_i is the action applied to each rotors. §.§ MO-Car* MO-Car* is a variant of MO-Car in Section <ref>, where the agent moves in three directions. We configure the three objectives as x-axis speed, y-axis speed and z-axis speed, r_1 = v_x r_2 = v_y r_3 = v_z where v_x is the speed in x direction, v_y is the speed in y direction, and v_z is the speed in z direction. §.§ Case Study on CARLA For case study, we use CARLA <cit.>, where an agent drives along the road with obstacles. The agent receives lidar information and an image of size 84 × 84 × 3 as an input. Particularly, the image is processed by image encoder pre-trained with images obtained in CARLA. Figure <ref> visualizes the driving map used in CARLA and Figure <ref> shows an example image input. We configure two objectives as forward speed and enrgy consumption, r_1 = v r_2 = 1 - a^2 where v_x is the speed. § IMPLEMENTATION DETAILS In this section, we describe how we generate expert datasets, and show the implementation details of and other baselines with hyperparameter settings used for training. For all experiments, we use a system of an NVIDIA RTX 3090 GPU and an Intel(R) Core(TM) i9-10900K CPU. §.§ Generating Expert Datasets and Oracle To generate expert datasets, we emulate expert using PGMORL algorithm <cit.>, which is a state-of-the-art multi-objective RL method. The implementation of PGMORL is based on the open source project[<https://github.com/mit-gfx/PGMORL>]. Using PGMORL on the ground truth reward functions, we are able to collect multiple datasets with different preferences with sufficient diversity. Among these datasets, we use only two distinct datasets, each associated with a specific preference over multi-objectives. Regarding to the oracle, we utilize complete datasets generated by PGMORL to measure the performance. For hyperparameter settings, we use the default settings listed in the PGMORL project. To emulate different experts for Carla, we use heuristic agent provided by the Carla <cit.>. 
By varying the maximum velocity the agent can reach, we obtain distinct expert datasets. §.§ DiffBC We implement DiffBC using the denoising diffusion probabilistic model (DDPM) <cit.> with augmented datasets. We linearly sample the preference weights ω_i ∈ [0,1] where ∑_i ω_i = 1, and we train the algorithm with different preference weights multiple times to obtain the approximated Pareto policy set , which contains the same number of policies as . This augmentation method is consistent throughout the baselines. The hyperparameter settings for DiffBC are summarized in Table <ref>. §.§ BeT We implement BeT using the open source project[<https://github.com/notmahi/bet?tab=readme-ov-file>] with augmented datasets. BeT employs a transformer architecture with an action discretization and a multi-task action correction, which allows it to effectively learn the multi-modality present in the datasets. The hyperparameter settings for BeT are summarized in Table <ref>. §.§ GAIL and AIRL We implement GAIL and AIRL using the open source projects Jax[<https://github.com/google/jax>] and Haiku[<https://github.com/deepmind/dm-haiku>] with augmented datasets. These algorithms are structured with a discriminator, involving a reward approximator and a shaping term, and a generator (policy). For the generator, we use the PPO algorithm. For better convergence, we pretrain the policy with BC for a fixed number of timesteps. The hyperparameter settings for the generator and discriminator are summarized in Table <ref> and Table <ref>, respectively. §.§ IQ-Learn We implement IQ-Learn using the open source project[<https://github.com/Div99/IQ-Learn>] with augmented datasets. IQ-Learn employs a single Q-function to implicitly represent a reward function and a policy. For learning, we use the SAC algorithm. For hyperparameter settings, we use the default settings listed in the IQ-Learn project <cit.>. §.§ DiffAIL We implement DiffAIL using the open source project[<https://github.com/ML-Group-SDU/DiffAIL>] with augmented datasets. DiffAIL is structured with a discriminator and a generator (policy), where a diffusion loss is incorporated into the discriminator objective. For better convergence, we also pretrain the policy with BC, as in GAIL and AIRL. For hyperparameter settings, we use the default settings listed in the DiffAIL project <cit.>. §.§ We implement the entire procedure of our framework using the open source projects Jax[<https://github.com/google/jax>] and Haiku[<https://github.com/deepmind/dm-haiku>]. We use the same hyperparameter settings for the generator (Table <ref>) and the discriminator (Table <ref>). In addition, the same pre-training method and training timesteps are adopted for learning the individual IRL procedure (the first step) in our framework. For each recursive step g ≥ 2, we use the previously derived policy and discriminator as the initial point for the IRL procedure of the current step g. This reduces the training time of the IRL procedure at each recursive step to at most 1/30 of that of the individual IRL procedure at the first step. Regarding the reward regularization loss, we set the hyperparameter β to 9, and we sample batches of the same size across the multiple datasets {_i^g}_i=1^M to calculate the reward distance. To canonicalize the rewards, we sample S, S' and A of size 512 independently from uniform distributions. The hyperparameters for are summarized in Table <ref>. To train a preference-conditioned diffusion policy, we linearly match the preference weights to the policies learned through our .
Then we sample batches of size 32 from each of the different datasets collected from the policies to train a preference-conditioned policy. These policies do not emulate the experts of the given datasets, but are used to augment the given datasets with imaginary experts of more diverse preferences. For evaluation, we arbitrarily sample the preferences ω_i ∈ [0,1], where ∑_i ω_i = 1. The hyperparameter settings for the preference-conditioned policy are summarized in Table <ref>. § PARETO FRONTIER VISUALIZATION Figure <ref> shows the Pareto frontiers acquired by our framework (, +DU) and the other baselines (DiffBC, BeT, GAIL, AIRL, IQ-Learn, DiffAIL), for each environment. For each case, we conduct the experiments with 3 random seeds and visualize the best results with respect to HV. Note that we exclude out-of-order policies obtained from the algorithms with respect to preferences; thus, the number of dots differs between the baselines and our framework. As shown, and +DU render competitive Pareto frontiers with the most densely populated policies compared to the baselines.
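The hypervolume (HV) reported above is, in the two-objective case, simply the area dominated by a policy set's returns relative to a reference point. The following is a minimal sketch of that computation (ours, for illustration only — not the evaluation code used in the experiments):

```python
def hypervolume_2d(points, ref):
    """Hypervolume of a set of 2D return vectors under maximization.

    points: iterable of (f1, f2) returns; ref: reference point dominated
    by every return.  Dominated points are handled automatically.
    (Illustrative sketch only.)
    """
    rx, ry = ref
    hv, max_y = 0.0, ry
    # Sweep from the largest first objective downwards, adding only the
    # area that is not already covered by previously seen points.
    for x, y in sorted(points, key=lambda p: p[0], reverse=True):
        hv += (x - rx) * max(0.0, y - max_y)
        max_y = max(max_y, y)
    return hv


# Example: hypervolume_2d([(3, 1), (2, 2), (1, 3)], ref=(0, 0))  ->  6.0
```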
http://arxiv.org/abs/2408.12406v1
20240822135808
Generalized SAM: Efficient Fine-Tuning of SAM for Variable Input Image Sizes
[ "Sota Kato", "Hinako Mitsuoka", "Kazuhiro Hotta" ]
cs.CV
[ "cs.CV" ]
Generalized SAM S. Kato et al. Meijo University, 1-501 Shiogamaguchi, Tempaku-ku, Nagoya, 468-8502, Japan 150442030@ccalumni.meijo-u.ac.jp 200442165@ccalumni.meijo-u.ac.jp kazuhotta@meijo-u.ac.jp Generalized SAM: Efficient Fine-Tuning of SAM for Variable Input Image Sizes Sota Kato10000-0003-0392-6426 Hinako Mitsuoka10009-0005-6969-4017 Kazuhiro Hotta10000-0002-5675-8713 August 26, 2024 ======================================================================================================== § ABSTRACT There has been a lot of recent research on improving the efficiency of fine-tuning foundation models. In this paper, we propose a novel efficient fine-tuning method that allows the input image size of Segment Anything Model (SAM) to be variable. SAM is a powerful foundational model for image segmentation trained on huge datasets, but it requires fine-tuning to recognize arbitrary classes. The input image size of SAM is fixed at 1024 × 1024, resulting in substantial computational demands during training. Furthermore, the fixed input image size may result in the loss of image information, due to fixed aspect ratios. To address this problem, we propose Generalized SAM (GSAM). Different from the previous methods, GSAM is the first to apply random cropping during training with SAM, thereby significantly reducing the computational cost of training. Experiments on datasets of various types and various pixel counts have shown that GSAM can train more efficiently than SAM and other fine-tuning methods for SAM, achieving comparable or higher accuracy. Our code will be available at: <https://github.com/usagisukisuki/G-SAM>. § INTRODUCTION Deep learning has been widely applied to various image recognition problems with great success <cit.>. In particular, in recent years, a large-scale and comprehensive model called the foundation model, has been proposed, and it is known to be a powerful model that can achieve high performance for a wide range of tasks <cit.>. In the field of semantic segmentation, SAM <cit.> was proposed in 2023 and can perform highly accurate segmentation on natural images without training. However, if we want to identify arbitrary classes using SAM, we need to perform fine-tuning using the teacher labels of the target dataset. Since the input image size for SAM is fixed at 1024 × 1024, this causes a huge computational cost problem during fine-tuning. Although methods such as LoRA <cit.> and AdaptFormer <cit.> have been proposed for fine-tuning SAM more effectively, the input image size for these methods is fixed to 1024 × 1024 is the same as for SAM, and the problem of computational cost due to the input image size has not been solved. A fine-tuning method for SAM that reduces the input image size to SAM and can train on small images such as 256 × 256 has also been proposed <cit.>, but again the input image size must be fixed. As the number of pixel counts varies in each dataset, the use of a fixed number of pixel counts is likely to lead to serious problems such as missing image information. In this paper, we propose Generalized SAM (GSAM), which can train even when the input image size is variable. In the conventional segmentation models based on Convolutional Neural Networks (CNN) <cit.> which were proposed before SAM, segmentation was possible even if the input image size during training and inference were different, so it is possible to input a small random cropped image during training and input the original image size during inference to obtain segmentation results. 
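To make this training/inference asymmetry concrete, random cropping during training typically amounts to the following; this is an illustrative NumPy sketch of the general technique, not the GSAM data pipeline itself:

```python
import numpy as np

def random_crop(image, mask, crop_size):
    """Crop a training image and its segmentation mask to crop_size x crop_size.

    image: (H, W, C) array; mask: (H, W) array of class indices.
    At inference time no cropping is applied and the full-size image is fed
    to the network, which is possible when the architecture accepts
    variable input sizes.  (Illustrative sketch only.)
    """
    h, w = mask.shape
    top = np.random.randint(0, h - crop_size + 1)
    left = np.random.randint(0, w - crop_size + 1)
    return (image[top:top + crop_size, left:left + crop_size],
            mask[top:top + crop_size, left:left + crop_size])
```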
As shown in <ref>, GSAM is the first method using SAM that can apply random cropping at training time, and the use of a small random cropping size reduces the computational cost at training. The fixed input size of SAM is due to fixed-size Positional Encoding. Therefore, GSAM supports variable input image sizes by employing a Positional Encoding Generator (PEG) <cit.> consisting of a Depth-wise Convolution layer as a substitute for Positional Encoding. Furthermore, we also propose Spatial-Multiscale (SM) AdaptFormer to consider more spatial information during fine-tuning. SM-AdaptFormer has a multi-scale structure and can handle feature vectors integrating a more diverse and wider range of spatial information. This is a segmentation-specific fine-tuning method since proper segmentation requires information at various scales. From the evaluation experiments on seven different datasets consisting of in-vehicle images, satellite images, microscopic images, endoscopic images, CT images, and transparent object images, we confirmed that the proposed GSAM can significantly reduce the computational cost of training compared to the conventional fine-tuning methods for SAM, and achieved comparable or higher segmentation accuracy. As shown in <ref>, GSAM achieved the trade-off of lower computational cost and higher accuracy by enabling random cropping. In particular, on the Synapse multi-organ dataset, which is CT images, GSAM achieved segmentation accuracy of more than 11% better than conventional SAM fine-tuning methods, indicating that our proposed method may be highly effective in certain areas. This paper is organized as follows. <ref> describes the related works. <ref> describes the details of our proposed method. <ref> shows the experimental results. Finally, the conclusion and future works are described in <ref>. Our contributions can be summarized as follows: * We propose a novel efficient fine-tuning method for SAM, GSAM. GSAM can cope with variable input image sizes, allowing random cropping to be used the first time during fine-tuning for SAM. * We also propose SM-AdaptFormer to acquire multi-scale features during fine-tuning for SAM. * From the evaluation experiments on various datasets, we confirmed that GSAM can significantly reduce the computational cost of training compared to the conventional SAM fine-tuning methods, and achieved comparable or higher segmentation accuracy. § RELATED WORKS §.§ Segmentation Models Since U-Net <cit.> revolutionized the area of semantic segmentation of images, various architectures have been proposed to improve accuracy <cit.>. Methods such as PSPNet <cit.> and DeepLab series <cit.>, which specialize in obtaining features at various scales, and more recently, methods based on Transformer, have also emerged <cit.>. Compared to these methods, GSAM does not require a particularly complex structure and only requires efficient fine-tuning of the foundation model, SAM, to adapt to semantic segmentation to achieve competitive performance. §.§ Foundation Models Since Transformer <cit.> was published in 2017, various foundational models have been built due to its amazing extensibility. Foundation models such as BERT <cit.>, LLaMa <cit.>, and GPT-4 <cit.> have shown ground-breaking performance in natural language processing. Recently, there has been a remarkable development of foundation models in the field of computer vision, with many high-performance models such as Segment Anything Model (SAM) <cit.>, CLIP <cit.>, and Stable Diffusion <cit.>. 
Among others, SAM is a segmentation model trained on huge datasets with high zero-shot generalization performance. However, foundation models generally have high generalization performance but lack expertise and require fine-tuning to recognize specific downstream tasks, and arbitrary classes properly. For this reason, a lot of research is being done to effectively and efficiently fine-tune foundation models such as SAM. §.§ Efficient Fine-tuning for SAM When we fine-tune foundation models with a huge number of parameters such as SAM, it is computationally very expensive to update all parameters. It is therefore common to update only a part of the weight parameters to achieve fine-tuning at a lower computational cost. Low-Rank Adaptation (LoRA) <cit.> successfully reduces the number of learnable parameters in downstream tasks by applying learnable low-rank matrices to each Transformer layer. This method originated in the field of natural language processing, but it has also been adapted to computer vision and can be used to fine-tune SAM. ConvLoRA <cit.>, which applies convolutional layers to LoRA and reinforces image-related local priors to achieve higher accuracy, has also been proposed. Additionally, AdaptFormer <cit.> achieves higher accuracy with minimal additional learnable parameters by using two fully connected layers and an activation function in each Feed-Forward Network (FNN). However, the input image size for these methods is fixed at 1024 × 1024 which is the same as SAM, and thus the computational cost issue related to the input image size has not been resolved. In this paper, we propose to reduce the computational cost of fine-tuning SAM by using smaller input images only during training. §.§ Changing The Input Image Size for SAM Recently, some methods have been proposed that allow training with smaller images by reducing the input image size of SAM from 1024 × 1024. SAMed <cit.> enables the input image size of 512 × 512 by applying LoRA to SAM. Additionally, SAMUS <cit.> achieves high accuracy in medical image segmentation even with a smaller input image size of 256× 256 by integrating the feature maps of Transformer and CNN using Cross Attention. However, if the images to be handled are larger than the input size, there is a possibility that image information may be lost due to resizing. In this case, a method that enables segmentation even if the size of the input images differs between training and inference is needed. § PROPOSED METHOD In this paper, in order to efficiently fine-tune with random cropping in training, we propose a novel method called Generalized SAM (GSAM). <ref> illustrates the overview of GSAM. In GSAM, all weight parameters of the Prompt Encoder and some weight parameters of the Transformer Encoder are fixed, and the other weight parameters are updated during fine-tuning. In addition, GSAM adds a novel structure to use random cropping in training. The details of each structure of GSAM are described <ref>, <ref>. §.§ Application of Random Cropping during Training Since the input to SAM must be of a fixed size of 1024 × 1024, it is impossible to handle random cropping with small image sizes during training. The most important reason why the input to SAM must be fixed is that the Positional Encoding in the Transformer Encoder, which is a component of SAM, is of a fixed size. Positional Encoding is a structure that adds information to each token to inform its own position of the Vision Transformer. 
In the case of SAM, it is a learnable weight parameter with a fixed size. Therefore, GSAM employs the Positional Encoding Generator (PEG) <cit.> as a substitute for Positional Encoding. PEG consists of a Depth-wise Convolution layer that considers only spatial orientation, which enables it to retain positional information even when the input size of the feature map is variable. However, the original pretrained SAM does not support random inputs, and it is possible that global learning by Self-Attention in the Transformer Encoder alone is insufficient for small and variable inputs. Therefore, we use a new network composed of CNNs, shown in <ref> as CNN Encoder in order to learn by integrating CNN features and SAM features. Since learning by local kernels of CNN is effective for smaller input images, it is considered to complement the feature map of the Transformer Encoder in SAM. GSAM adds the feature maps of the third block of ResNet101 <cit.>, which retains some spatial information, to pre-input and post-output feature maps of the Transformer Encoder. This enables efficient fine-tuning using random cropping. From the above, the introduction of PEG and CNN Encoder enables the use of random cropping and corresponding feature extraction. §.§ Spatial-Multiscale AdaptFormer In order to further improve the discrimination accuracy for the target dataset, we propose Spatial-Multiscale (SM) AdaptFormer. <ref> illustrates the overview of SM-AdaptFormer. AdaptFormer <cit.> is known as a low computational cost and high performance method for fine-tuning SAM, but AdaptFormer does not take spatial information into account. Since spatial features are important information in semantic segmentation, the proposed SM-AdaptFormer prepares multiple convolutional layers with kernels in various ranges and acquires multiscale features. For convolutional layers with a wide range of kernels, we employ Dilated Convolution <cit.>. Dilated Convolution can expand the receptive field while maintaining the same kernel size, allowing global feature extraction without increasing computational cost. When Dilated Convolution is applied to the input feature map x, the output feature map y can be expressed as in <ref> using the position i of each pixel and the convolution kernel w. y i = ∑_k x i+r ·k w k. where r is a parameter that determines the width of the stride, and the receptive field of the convolutional layer can be adaptively changed by changing r. SM-AdaptFormer provides two convolutional layers with kernel sizes of 1 × 1 and 3 × 3, as well as Dilated Convolution (r=12), Dilated Convolution (r=24), and Dilated Convolution (r=32), for a total of five types of receptive fields. Multiscale features covering these small to large receptive fields are learned by adding them together. In the original AdaptFormer, the number of dimensions is reduced once in the fully connected layer and then restored to the original number of dimensions in the fully connected layer to learn parameters with a low computational cost, and the same structure is used in SM-AdaptFormer. Therefore, even when acquiring multi-scale features, a low dimensionality is an input, thus avoiding computational bloat. § EXPERIMENTS §.§ Datasets and Metrics In the experiments, we assessed different types of image data from various domains with varying input image sizes: in-vehicle images, satellite images, microscopic images, endoscopic images, CT images, and transparent object images. 
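Before turning to the datasets, the two components introduced above can be summarized in a short PyTorch-style sketch. The channel widths, the residual placement of the PEG, and the use of 1 × 1 convolutions for the down/up projections (the paper describes fully connected layers) are our own illustrative choices rather than the exact GSAM implementation:

```python
import torch.nn as nn

class PEG(nn.Module):
    """Positional Encoding Generator: a depth-wise 3x3 convolution whose
    output is added to the input feature map, so positional information is
    produced for any spatial size (illustrative sketch)."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):          # x: (B, dim, H, W) with variable H, W
        return x + self.proj(x)

class SMAdapter(nn.Module):
    """Spatial-Multiscale adapter branch: down-project, sum the outputs of
    1x1, 3x3 and dilated 3x3 convolutions (rates 12, 24, 32), up-project.
    Channel widths are illustrative."""
    def __init__(self, dim, hidden=64, rates=(12, 24, 32)):
        super().__init__()
        self.down = nn.Conv2d(dim, hidden, kernel_size=1)
        self.branches = nn.ModuleList(
            [nn.Conv2d(hidden, hidden, kernel_size=1),
             nn.Conv2d(hidden, hidden, kernel_size=3, padding=1)] +
            [nn.Conv2d(hidden, hidden, kernel_size=3, padding=r, dilation=r)
             for r in rates])
        self.up = nn.Conv2d(hidden, dim, kernel_size=1)
        self.act = nn.GELU()

    def forward(self, x):          # x: (B, dim, H, W)
        h = self.act(self.down(x))
        h = sum(branch(h) for branch in self.branches)   # multi-scale sum
        return self.up(h)
```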
We used two large datasets with more than 10,000 images and five smaller datasets with 1,000 images or less. Specifically, we used the Cityscapes dataset <cit.> (19 classes) for in-vehicle images and the Trans10k <cit.> dataset (3 classes) for transparent object images as large datasets. As smaller datasets, we used the CamVid dataset <cit.> (11 classes) for in-vehicle images, the Massachusetts Buildings dataset <cit.> (M-Building, 2 classes) for satellite images, the ISBI2012 dataset <cit.> (2 classes) for microscopic images, the Kvasir-SEG dataset <cit.> (2 classes) for endoscopic images, and the Synapse multi-organ dataset <cit.> (Synapse, 9 classes) for CT images. The pixel counts for each dataset are listed in <ref> and <ref>. In semantic segmentation, Intersection over Union (IoU), which indicates the overlap ratio between prediction and ground truth labels is generally used as an evaluation metric. Therefore, we used Mean IoU (mIoU) and the average IoU of all classes as the evaluation metrics. §.§ Training Conditions In this paper, we used Pytorch library and trained the model using Adam optimizer for 200 epochs with a batch size of 8. The learning rate was initially set to 0.005 and gradually decreased using the cosine learning rate scheduler <cit.>. For comparison, we used conventional CNN-based networks such as U-Net <cit.> and DeepLabv3+ <cit.>, as well as efficient fine-tuning methods using SAM: LoRA <cit.>, ConvLoRA <cit.>, AdaptFormer <cit.>, and SAMUS <cit.>. For data pre-processing during training, we used random cropping, horizontal flipping, and random rotation for the CNN-based methods and GSAM. Other methods only accept images of fixed-size and therefore random cropping cannot be used. Therefore, we only applied horizontal flipping and random rotation. However, random rotation is not used for in-vehicle images and transparent object images. This is because these two types of images have a clearly defined top and bottom, and do not require pre-processing by random rotation, which would change the top and bottom direction. Image sizes for random cropping are listed in <ref> and <ref>. §.§ Experimental Results §.§.§ Quantitative Results. <ref> and <ref> show the quantitative results for each dataset. Regardless of the size of the dataset, GSAM achieved comparable or even higher accuracy than existing fine-tuning methods using SAM. The red numbers in the table indicate the most accurate values. Except for the Trans10k and the CamVid datasets, the proposed methods, SM-AdaptFormer and GSAM, showed the highest accuracy for the other five datasets. Especially for the Synapse multi-organ dataset, which is CT images, SM-AdaptFormer and GSAM improved the accuracy by 4.78% and 11.50%, respectively, compared to AdaptFormer. This result indicates that our proposed method may be highly effective in certain areas. In addition, for all datasets, GSAM showed higher accuracy than the network composed of CNNs. This is considered to be due to the effectiveness of SAM itself, which is the underlying model based on Transformers, plus the learning of spatial information by the CNN-based SM-AdaptFormer and the effect of data expansion by random cropping, which took advantage of the benefits of each. On the other hand, for the Trans10k and the CamVid datasets, the most accurate was AdaptFormer, while the second most accurate was SM-AdaptFormer which was the proposed method. The Trans10k dataset contains objects of relatively large size in the image. 
In the case of such images, the SM-Adaptformer for extracting multi-scale information is considered to be less useful because the importance of information such as fine details is not high. The CamVid dataset has many classes among the datasets, and it is considered more difficult to perform fine-tuning. The reason may be that the number of rates of Dilated Convolution set by SM-AdaptFormer was not appropriate because small objects were included. However, the accuracy of SM-Adaptformer outperforms Adaptformer in the Cityscapes dataset, a dataset similar in systematics to the CamVid, which is considered that the optimal number of Dilated Convolution rates varies depending on the dataset. The advantage of GSAM, however, is that it supports variable input image sizes. The ability to input images with aspect ratios other than 1:1, such as the CamVid and Cityscapes datasets, in their original form without any smoothing or cropping is a unique advantage of GSAM among SAM fine-tuning methods. Datasets such as Trans10k and Kvasir-SEG, which have relatively large objects in the images and fewer classes, are easier to perform fine-tuning on and show smaller differences in accuracy between methods. For datasets such as these, the effectiveness of the GSAM advantage of performing random cropping is reduced, and the accuracy is not necessarily superior compared to other methods. Although there may be a more appropriate size of the random cropping for GSAM, we can confirm that the accuracy of GSAM is significantly improved in comparison with that of CNN-based methods. This can be attributed to the combination of multiscale spatial features using SM-AdaptFormer and the effectiveness of the SAM itself, which outperforms CNN-based models. Based on the above results, it was confirmed that GSAM effectively acquires spatial features using SM-AdaptFormer and reduces computational costs compared to conventional SAM fine-tuning methods by supporting random cropping with variable-length inputs, while achieving equal or significantly higher accuracy. §.§.§ Qualitative Results. <ref> illustrates the qualitative results on four datasets. Specifically, the qualitative results are presented for the ISBI and the M-Building datasets, where GSAM had the highest accuracy among the comparison methods, and for the Cityscapes and the Trans10k datasets, where GSAM did not have the highest accuracy. We first focus on the dataset in the top two rows. These datasets contain the detailed structures and fine objects. The results from the ISBI dataset show that GSAM reduces the over-detection of cell membrane classes compared to other SAM fine-tuning methods. On the M-Building dataset, GSAM is able to segment small and complex shaped objects better than other methods. These results show that the characteristics of GSAM, such as the ability to use random cropping and the ability to extract multi-scale features with SM-Adaptformer, are effective for datasets containing small objects and complex structures. Next, we focus on the bottom two rows of the dataset. These datasets have diverse classes or contain relatively large objects. The Cityscapes dataset results show no particular advantage of GSAM over Adaptformer. The Trans10k dataset results also show a significant failure in the segmentation of the lower part of the object when GSAM is used. These factors may be because the number of rates of Dilated Convolution in SM-Adaptformer is always fixed and therefore not suitable for that data set. 
In addition, GSAM, which aims to acquire multi-scale features, may not be suitable for datasets containing only large objects. §.§ Ablation Study §.§.§ Effectiveness of SM-Adaptformer. We performed an ablation study on the SM-Adaptformer proposed as an internal module of GSAM. To test the effect of each component, we systematically removed each component from the SM-Adaptformer one by one. During this process, we maintained the ability of GSAM to accept various input image sizes. These results confirm that the convolutional layers of various scales in SM-Adaptformer contribute significantly to improving the segmentation accuracy. Additionally, the effectiveness of both standard and dilated convolutional layers was demonstrated. From these findings, it is evident that SM-Adaptformer is more effective than AdaptFormer, which consists only of fully connected layers and activation functions, due to its ability to acquire spatial information at multiple scales. However, in terms of extracting multi-scale features, it is particularly effective for datasets that include a wide range of object sizes, from small to moderately large objects. §.§.§ Efficiency. <ref> and <ref> show the results of our comparative experiments on the efficiency of GSAM. As a comparison, the MACs values of SAM, LoRA, ConvLoRA, AdaptFormer, and GSAM are compared as the size of the random cropping is changed. Since the input image size is fixed in conventional SAM fine-tuning methods, the computational cost becomes huge, and it can be seen that the computational cost is more than 300G MACs for all the methods except SAMUS. On the other hand, with GSAM, the computational cost decreases rapidly as the size of the random crop is reduced. In particular, when the input image size is 128 × 128, the segmentation accuracy outperforms all conventional methods, despite a computational cost of around half that of SAMUS. The SM-Adaptformer included in GSAM has a relatively complex structure to acquire multi-scale features and improve segmentation accuracy, but by using the PEG and the CNN encoder to support variable-size inputs, the input image size during training can be reduced, which significantly reduces the computational cost. Based on these results, we expect GSAM to be widely used as an efficient approach for fine-tuning SAM in the future, as it significantly reduces the computational cost and allows for highly accurate segmentation. § CONCLUSION In this paper, a novel fine-tuning method for SAM, GSAM, is proposed to handle variable input image sizes. GSAM is the first method to allow random cropping for SAM during training, and it significantly reduces the computational cost of training. From evaluation experiments on datasets with various input image sizes, we have confirmed that GSAM can train more efficiently than the conventional fine-tuning methods for SAM and can achieve the same or better segmentation accuracy. In the future, we would like to address the problems caused by the use of Dilated Convolution with fixed rate values within SM-Adaptformer, and thereby increase its versatility. Since GSAM trains all the weight parameters of the decoder of SAM, we are also considering adding the LoRA structure to the decoder to train it more efficiently.
http://arxiv.org/abs/2408.12148v1
20240822062710
Multi-tool Integration Application for Math Reasoning Using Large Language Model
[ "Zhihua Duan", "Jialin Wang" ]
cs.AI
[ "cs.AI" ]
Multi-tool Integration Application for Math Reasoning Using Large Language Model 1st Zhihua Duan Intelligent Cloud Network Monitoring Department China Telecom Shanghai Company 700 Daning Road, Shanghai, 200072 Shanghai, China duanzh.sh@chinatelecom.cn 2nd Jialin Wang Computer Science Stanford University 450 Serra Mall, Palo Alto, 94305 California, America jialinwangspace@gmail.com § ABSTRACT Mathematical reasoning is an important research direction in the field of artificial intelligence. This article proposes a novel multi-tool application framework for mathematical reasoning, aiming to achieve more comprehensive and accurate mathematical reasoning by utilizing the collaborative effect of large language models (LLMs) and multiple external tools. First, a Math Tool performs basic mathematical calculations during the inference process through interaction with the LLM. Second, the Code Tool generates code fragments that comply with syntax rules and executes them, providing support for complex mathematical problems. Third, the iterative reasoning of the CoT Tool enhances the logical coherence and accuracy of mathematical reasoning. Finally, a self-consistency tool selects the final answer based on different parameters, improving the consistency and reliability of reasoning. Through the synergistic effect of these tools, the framework has achieved significant performance improvement on mathematical reasoning tasks. We conducted experiments on the NumGLUE Task 4 test set, which includes 220 mathematical reasoning fill-in-the-blank questions. The experimental results showed that, based on the Math Tool, Code Tool, and CoT Tool, our method achieved an accuracy of 89.09% on Task 4; compared with the GPT3+FewShot baseline, Few-Shot+ERNIE-4.0+self-consistency improved by 49.09%, and compared with the fine-tuning baseline, it improved by 52.29%. Index Terms: ERNIE-4.0, FewShot, CoT, Large Language Model § INTRODUCTION Mathematical reasoning is an important field in artificial intelligence research, which solves complex mathematical problems through deduction and reasoning under the guidance of logic and mathematical rules. However, for computers, conducting mathematical reasoning remains a challenging task. In recent years, with the rapid development of large language models, utilizing their powerful language generation and comprehension abilities to assist mathematical reasoning has become a new research direction. Recent research has focused on improving the mathematical reasoning ability of Large Language Models (LLMs). By introducing Chain-of-Thought (CoT) prompts, LLMs have made progress in mathematical reasoning tasks. CoT prompts guide the LLM to solve problems step by step, improving the accuracy and interpretability of reasoning. However, there are still problems and limitations when dealing with scenarios such as common sense reasoning, formal logic, and algebraic computation. Current research is still focused on simple arithmetic reasoning, and further research is needed to expand the scope and ability of mathematical reasoning to more complex mathematical concepts and problems.
This article aims to propose a novel multi tool application framework for mathematical reasoning, utilizing a large language model driven approach and combining the collaborative effects of multiple external tools to achieve more comprehensive and accurate mathematical reasoning. As shown in Figure 1, our framework utilizes various external tools such as Math Tool, Code Tool, CoT Tool, and self consistency tools in the inference process through a large language model to provide diverse inference support. The unique contribution of this paper lies in the implementation of a self-consistency tool. As shown in Figure 2, based on the parameter configuration, the mathematical calculator, code executor, and thought chain tool are sequentially selected to obtain answers. If all three tools are used simultaneously, the answer with the highest occurrence count is chosen as the final answer. If each answer appears only once, the answer from the code is given priority based on the configured priority. § RELATED WORK In mathematical reasoning tasks, the MultiTool CoT framework combines multiple external tools such as calculators and knowledge retrievers, significantly improving the performance of large language models in digital reasoning tasks<cit.>. MathPrompt technology improves the performance of large language models on arithmetic problems by generating multiple algebraic expressions or Python functions to solve the same mathematical problem <cit.>. The use of prompt based learning paradigms can improve the performance of information extraction tasks. CodeIE proposes a method to convert structured output into code form and uses a code generation language model to perform named entity recognition and relationship extraction tasks <cit.>. NumGLUE is a multitasking benchmark used to evaluate the performance of artificial intelligence systems on eight different tasks, promoting cross task knowledge sharing <cit.>. MathWorld is a graph based semantic formalism specifically used in the field of mathematical story problems. By using MathWorld, the world model can be associated with mathematical story problems, representing the context, actions, and mathematical relationships introduced in the text <cit.>. LogicSolver first retrieves highly relevant algebraic knowledge for each mathematical text problem, and then passes them as prompts to the backbone model to improve the semantic representation of the mathematical text problem <cit.>. MAmmoTH is a large-scale language model specifically designed for solving general mathematical problems, emphasizing the importance of diverse problem coverage<cit.>. In complex mathematical related tasks, a step-by-step reasoning approach is used to initialize the solution through retrieved samples, and then the intermediate steps of the generated solution are checked and refined from the perspectives of tool operation and natural language reasoning until a convergent solution is obtained. In contrast to the preceding efforts, this study introduces an innovative methodology within the domain of mathematical reasoning that synergistically integrates the capabilities of large language models alongside various auxiliary tools such as Math Tool, Code Tool, and CoT Tool, all designed to augment the capacity for mathematical reasoning. § METHODS The mathematical reasoning multi tool application we propose is an interactive framework that allows LLM to use multiple external tools during the reasoning process: Math Tool, Code Tool, Cot Tool, and self consistency Tool. 
In Math Tool, the symbols used in prompts have little impact on model performance, which may be counterintuitive, but patterns as a means of enhancing task understanding<cit.> will prompt the model to generate correct output. Most importantly, text and patterns form a symbiotic relationship and play an important role in mathematical reasoning. Text helps generate useful patterns, The Math Tool is shown in Table I. such as extracting mathematical patterns, which enhance task understanding and enable language models to generate text that helps solve tasks. The success of Math Tool is attributed to the interaction between text and patterns, applying extracted symbols to mathematical patterns. This is of great significance for further improving and optimizing the application of large language models. Code Tool is a Python code execution function, as shown in Table II. Its main function is to call Baidu Big Model Service to generate code snippets that comply with syntax rules based on user input prompts. The tool first retrieves Python function text by calling Baidu's Big Model service, and dynamically executes the code using the built-in function exec(). The exec() function is capable of executing complex Python statements, receiving Python code stored in strings or objects, and returning the processed answer, which is the result of the function execution. The CoT Tool,as shown in Table III. Its function is to infer based on the input thinking chain prompt words by calling Baidu's big model service to obtain the result of thinking chain inference. This tool uses iterative reasoning to gradually extract the final answer from the reasoning text by calling Baidu's big model service again. The self consistency tool implements a decision system that selects different answers based on given parameters. If the self consistency feature is enabled, the system will call three different tools: Math Tool), Code Tool, and CoT Tool. Firstly, the system will call the Math Tool, Code Tool, and CoT Tool to obtain three answers respectively, and add these answers to a list. Then, the system will count the number of times each answer appears in the list and select the answer with the most occurrences as the final answer. If each answer only appears once, the answer with the highest priority will be selected as the final answer based on the pre-set priority. § EXPERIMENT §.§ DataSets NumGLUE is a multitasking dataset consisting of 8 different tasks. Task 1 is common sense+arithmetic, Task 2 is domain specific knowledge+arithmetic, Task 3 is common sense+quantitative, Task 4 is fill in the blank, Task 5 is reading comprehension+explicit numerical reasoning, Task 6 is reading comprehension+implicit numerical reasoning, Task 7 is quantitative natural language reasoning, and Task 8 is arithmetic problem. These tasks involve common sense, domain specific knowledge, and quantitative reasoning Different aspects such as fill in the blank questions and reading comprehension. Through this dataset, the performance of different models on various tasks can be evaluated. As shown in Table IV, Task 4 is a Fill in the blank dataset, which retrieves questions from an arithmetic question bank <cit.> <cit.> <cit.>,and converts them into the format of fill in the blank questions. Require the generation of correct fill in the blank answers based on the given context, and provide understanding and answers to mathematical problems through fill in the blank questions. 
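Before turning to the dataset statistics, the self-consistency selection described above can be summarized in a few lines of Python; the function and tool names are illustrative, and in the actual system each answer comes from a call to the ERNIE-4.0 API:

```python
from collections import Counter

def self_consistency(answers, priority=("code", "math", "cot")):
    """Pick the final answer from the Math Tool, Code Tool and CoT Tool.

    `answers` maps a tool name to its answer.  The most frequent answer
    wins; if every answer occurs only once, the answer of the highest-
    priority tool (configurable) is returned.  (Illustrative sketch.)
    """
    counts = Counter(answers.values())
    best, freq = counts.most_common(1)[0]
    if freq > 1:
        return best
    for tool in priority:
        if tool in answers:
            return answers[tool]

# self_consistency({"math": "12", "code": "15", "cot": "15"})  -> "15" (majority)
# self_consistency({"math": "12", "code": "15", "cot": "9"})   -> "15" (priority)
```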
This dataset consists of three parts: training set, validation set , and test set. There are 770 samples in the training set, 110 samples in the validation set, and 220 samples in the test set. This article uses Few Shot+LLms to directly test 220 samples from the test set. §.§ Method comparison As shown in Table V, ERNIE-4.0 has achieved good performance through self consistency constraints. Under the self consistency constraint, if the results of each tool appear the same number of times, the reasoning answer of the thought chain CoT is prioritized, and the performance of ERNIE-4.0 reaches 75.9. When obtaining the results of the generated code function first, the performance further improves to 80.45. In the case of ERNIE-4.0, by combining Math Tool, CoT Tool, and Code Tool to prioritize obtaining the results of Math Tool, the performance of ERNIE-4.0 reached an impressive 89.09. Compared to the GPT3+FewShot baseline, ERNIE-4.0 improved by 49.09% (=89.09-40). Compared to fine-tuning the Fine tuning baseline, ERNIE-4.0 improved by 52.29% (=89.09-36.8) § CONCLUSION This study successfully implemented a multi tool application framework for mathematical reasoning based on a large language model, which utilizes multiple external tools during the reasoning process, including Math Tool, Code Tool, and CoT Tool. Math Tool can perform basic mathematical calculations, code executor tools can generate code fragments that conform to syntax rules and execute them, and CoT Tool obtain the results of the inference chain through iterative reasoning. The synergistic effect of these external tools has enabled our framework to perform well in mathematical reasoning tasks. The design of this framework is universal and can be applied to various tasks by extending more external tools. Future work can further explore and optimize the selection and integration of external tools within the framework to improve inference efficiency and performance, and apply the framework to a wider range of fields and practical scenarios. § ACKNOWLEDGMENT This research was sponsored by Wenxin Model 4.0, a large model platform of China Baidu AI Cloud Qianfan. ERNIE-4.0 is a large language model independently developed by Baidu, covering a massive amount of Chinese data and possessing stronger abilities in dialogue, question answering, and content creation. unsrt 10 multi-tools Tatsuro Inaba, Hirokazu Kiyomaru, Fei Cheng, and Sadao Kurohashi. Multitool-cot: Gpt-3 can use multiple external tools with chain of thought prompting. ACL 2023, abs/2305.16896, 2023. MathPrompter Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models. ACL 2023, abs/2303.05398, 2023. CodeIE Peng Li, Tianxiang Sun, Qiong Tang, Hang Yan, Yuanbin Wu, Xuanjing Huang, and Xipeng Qiu. Codeie: Large code generation models are better few-shot information extractors. acl 2023, abs/2305.05711, 2023. NumGlue Swaroop Mishra, Arindam Mitra, Neeraj Varshney, Bhavdeep Sachdeva, Peter Clark, Chitta Baral, and Ashwin Kalyan. Numglue: A suite of fundamental yet challenging mathematical reasoning tasks. ACL 2022, Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers):3505–3523, 2022. world_math Andreas Opedal, Niklas Stoehr, Abulhair Saparov, and Mrinmaya Sachan. World models for math story problems. ACL 2023, abs/2306.04347, 2023. LogicSolver Zhicheng Yang, Jinghui Qin, Jiaqi Chen, Liang Lin, and Xiaodan Liang. 
Logicsolver: Towards interpretable math word problem solving with logical prompt-enhanced learning. EMNLP 2022, 2022. MAmmoTH Xiang Yue, Xingwei Qu, Ge Zhang, Yao Fu, Wenhao Huang, Huan Sun, Yu Su, and Wenhu Chen. Mammoth: Building math generalist models through hybrid instruction tuning. CoRR 2023, abs/2309.05653, 2023. text-pattern Aman Madaan and Amir Yazdanbakhsh. Text and patterns: For effective chain of thought, it takes two to tango. arXiv · Computation and Language(2022), 2022. NumGlueArithmetic Subhro Roy and Dan Roth. Solving general arithmetic word problems. EMNLP 2016, abs/1608.01413:1743–1752, 2016. NumGlueUnit Subhro Roy and Dan Roth. Unit dependency graph and its application to arithmetic word problem solving. AAAI 2017, abs/1612.00969:3082–3088, 2017. NumGlue_Knowledge Subhro Roy and Dan Roth. Mapping to declarative knowledge for word problem solving. Computing Research Repository CoRR 2018, abs/1712.09391, 2018.
http://arxiv.org/abs/2408.12367v1
20240822130638
Shellable flag simplicial complexes of non-simple polyominoes
[ "Francesco Navarra" ]
math.AC
[ "math.AC", "05B50, 05E40" ]
Shellable flag simplicial complexes of non-simple polyominoes]Shellable flag simplicial complexes of non-simple polyominoes [1]Francesco Navarrafrancesco.navarra@sabanciuniv.edu *[1]Faculty of Engineering and Natural Sciences, Sabanci University, Orta Mahalle, Tuzla, Istanbul, 34956, Turkey In this article, we explore the shellability of the flag simplicial complexes associated with non-simple and thin polyominoes. As a consequence, we establish the Cohen-Macaulayness and a combinatorial interpretation of the h-polynomial of the corresponding coordinate rings. [MSC Classification]05B50, 05E40 [ * August 26, 2024 =================== § INTRODUCTION A classic topic in commutative algebra is the study of determinantal ideals, that is, ideals generated by the t-minors of a generic matrix whose entries are elements in a ring. These ideals serve as a bridge between algebraic geometry, combinatorial algebra, and homological algebra and their study provides crucial insights into the structure and properties of varieties and rings; see for instance <cit.>. In the specific case of 2-minors of a matrix having indeterminates as entries, these ideals can be seen as special cases of the so-called polyomino ideals. If is a polyomino, that is, a finite collection of cells joined edge by edge, then its polyomino ideal I_𝒫 is the binomial ideal generated by the inner 2-minors of 𝒫. This type of ideal was introduced in 2012 by Qureshi <cit.>. Since then, the study of the main algebraic properties of polyomino ideals and their quotient rings K[𝒫] = S/I_𝒫, in relation to the shape of 𝒫, has emerged as an exciting area of research. The aim is to explore the fundamental algebraic properties of I_𝒫, depending on the shape of 𝒫. Despite the considerable interest in this field, many open problems remain unsolved so far. It is intriguing to determine which polyominoes can be identified with toric varieties (<cit.>), but a complete characterization is still unknown. In this context, it can be important to refer to the works that discuss the primality of I_𝒫, like <cit.>. It is worth mentioning that the polyomino ideals of simple polyominoes are prime. Roughly speaking, a simple polyomino is a polyomino without holes. The methods used to prove this are particularly interesting: in <cit.>, the authors show that simple and balanced polyominoes are equivalent and they use the fact that a polyomino ideal, associated to a balanced one, is prime (see <cit.>); independently of this, in <cit.> it is showed that polyomino ideals associated to simple polyominoes are prime, by identifying their quotient ring with toric rings of a weakly chordal graph, thus obtaining that K[] is a normal Cohen-Macaulay domain by <cit.>. Nowadays, the study is applied to multiply connected polyominoes, that are polyominoes with one or more holes. A fascinating class of non-simple polyominoes, called closed paths, is introduced in <cit.>, where a characterization of their primality is given in terms of zig-zag walks (see <cit.>). A closed path is basically a path of cells where the first one and the last one coincides and the path creates just one hole. For this class of polyominoes, some results are in <cit.>, <cit.>, <cit.>, <cit.> and <cit.>.Our interest lies in understanding the Cohen-Macaulay property and the combinatorial aspects of the h-polynomial of non-simple polyominoes. 
Recently, in <cit.>, it is proved that the coordinate ring of a closed path with a zig-zag walk is Cohen-Macaulay and the h-polynomial is equal to the rook polynomial of . Although that proof is rigorous, it is admittedly cumbersome and highly technical. In this paper we will re-derive these results through an easier and different approach, where the theory of simplicial complexes, including the shellable property and the McMullen-Walkup Theorem (see <cit.>), will play a crucial role. The paper is organized as follows. In Section <ref> we introduced some combinatorial basics on polyominoes and the definition of the coordinate ring of a polyomino. Section <ref> provides some preliminary results. We firstly recall the definition of closed paths and the characterization of the primality of the related polyomino ideal. Now, let be a closed path polyomino. We provide a suitable monomial order ≺_, which is inspired by <cit.>, in order that the initial ideal in(I_) of I_ with respect to ≺_ is squarefree and generated in degree two (Proposition <ref>). Thus, it makes sense to consider the flag simplicial complex Δ(), having in(I_) as Stanley-Reisner ideal. In Section <ref> we investigate the combinatorial property of Δ(), particularly its pureness and shellability. When I_ is a toric ideal, it is well-known that Δ() is shellable from <cit.>. The case when I_ is non-prime, or equivalently has a zig-zag walk, is more challenging to study: is Δ() shellable in this case? The answer is affirmative, as shown in Theorem <ref>. The crucial parts of this result, where the reader should pay close attention, are the Discussion <ref> and Definition <ref>, where we provide a suitable shelling order for the facets of Δ(𝒫). Since every shellable simplicial complex is Cohen-Macaulay, we conclude that the coordinate ring of a closed path with a zig-zag walk is also Cohen-Macaulay (see Corollary <ref>). Finally, in Section <ref>, using a well-know theorem of McMullen and Walkup (see <cit.>), we show that the class of closed paths yields a positive resolution of <cit.>. Moreover, we highlight that the methodologies developed in this paper can be readily adapted to achieve analogous results for weakly closed paths (Theorem <ref>). We conclude the paper by posing several open questions (see Remark <ref>). § POLYOMINOES AND POLYOMINO IDEALS In this section, we present various combinatorial concepts related to polyominoes and we introduce the algebras associated with them. We begin by introducing the basic concepts of polyominoes. Consider the natural partial order on ^2, that is, if (i,j),(k,l)∈^2, then we say that (i,j)≤(k,l) when i≤ k and j≤ l. Let a=(i,j) and b=(k,l) in ^2 with a≤ b. The set [a,b]={(m,n)∈^2: i≤ m≤ k, j≤ n≤ l } is said to be an interval of ^2. Moreover, if i< k and j<l, then [a,b] is a proper interval. In such a case, a, b and c=(i,l), d=(k,j) are diagonal corners and the anti-diagonal corners of [a,b], respectively. We define also ]a,b[=[a,b]∖{a,b}, [a,b[=[a,b]∖{b} and ]a,b]=[a,b]∖{a}. If j=l (resp. i=k), then a and b are in horizontal (resp. vertical) position. A proper interval C=[a,b] with b=a+(1,1) is called a cell of ^2; moreover, the elements a, b, c and d are said to be the lower left, upper right, upper left and lower right corners of C, respectively. The set of the vertices of C is V(C)={a,b,c,d} and the set of the edges of C is E(C)={{a,c},{c,b},{b,d},{a,d}}. More generally, if is a non-empty collection of cells in ^2, then V()=⋃_C∈V(C) and E()=⋃_C∈E(C). 
The rank of is the number of the cells belonging to and it is denoted by || (some authors use ()). If C and D are two distinct cells of , then a walk from C to D in is a sequence :C=C_1,…,C_m=D of cells of ^2 such that C_i ∩ C_i+1 is an edge of C_i and C_i+1 for i=1,…,m-1. Moreover, if C_i ≠ C_j for all i≠ j, then is called a path from C to D. Denoting by (a_i,b_i) the lower left corner of C_i for all i=1,…,m, we say that has a change of direction at C_k for some 2≤ k ≤ m-1 if a_k-1≠ a_k+1 and b_k-1≠ b_k+1. We say that two cells C and D of are connected in if there exists a path of cells belonging to from C to D. Now, we can give the formal definition of a polyomino. A polyomino is a non-empty, finite collection of cells in ^2 where any two cells of are connected in . For instance, see Figure <ref>. Let be a polyomino. A sub-polyomino of is a polyomino consisting of cells which belong to . We define that is simple if for any two cells C and D not in there exists a path of cells, which do not belong to , from C to D; look at Figure <ref> (b) for an example of simple polyomino. A finite collection of cells not belonging to is a hole of if any two cells of are connected in and is maximal with respect to set inclusion. For instance, the polyomino in Figure <ref> (a) is not simple and it has only one hole which consists of two cells. Observe that every hole of is a simple polyomino and is simple if and only if has no hole. We say that is thin if it does not contain the square tetromino, which is a square obtained as a union of four distinct cells; for example, in Figure <ref> (b) we illustrate a simple thin polyomino. Consider two cells A and B of ^2 with a=(i,j) and b=(k,l) as the lower left corners of A and B with a≤ b. A cell interval [A,B] is the set of the cells of ^2 with lower left corner (r,s) such that i⩽ r⩽ k and j⩽ s⩽ l. If (i,j) and (k,l) are in horizontal (or vertical) position, we say that the cells A and B are in horizontal (or vertical) position. Let be a polyomino. Consider two cells A and B of in vertical or horizontal position. A cell interval is called a block of of rank n if it has n cells and every cell of belongs to . Moreover, a block of is maximal if there does not exist any block of which properly contains . Now, observe that if [a,b] is a proper interval of ^2, then all the cells of [a,b] identify a cell interval of ^2 and vice versa, that is, if [A,B] is a cell interval of ^2 then V([A,B]) is a interval of ^2; consequently, we can associated to an interval I of ^2 the corresponding cell interval, denoted by _I. A proper interval [a,b] is called an inner interval of if all cells of _[a,b] belong to . An interval [a,b] with a=(i,j), b=(k,j) and i<k is called a horizontal edge interval of if the sets {(ℓ,j),(ℓ+1,j)} are edges of cells of for all ℓ=i,…,k-1. In addition, if {(i-1,j),(i,j)} and {(k,j),(k+1,j)} do not belong to E(), then [a,b] is called a maximal horizontal edge interval of . One can similarly define a vertical edge interval and a maximal vertical edge interval. Following <cit.>, we recall the definition of a zig-zag walk of . 
It is defined as a sequence :I_1,…,I_ℓ of distinct inner intervals of where, for all i=1,…,ℓ, the interval I_i has either diagonal corners v_i, z_i and anti-diagonal corners u_i, v_i+1, or anti-diagonal corners v_i, z_i and diagonal corners u_i, v_i+1, such that (1) I_1∩ I_ℓ={v_1=v_ℓ+1} and I_i∩ I_i+1={v_i+1}, for all i=1,…,ℓ-1, (2) v_i and v_i+1 are on the same edge interval of , for all i=1,…,ℓ, (3) for all i,j∈{1,…,ℓ} with i≠ j, there exists no inner interval J of such that z_i, z_j belong to J. Note that the polyomino in Figure <ref> (a) has a zig-zag walk. We conclude defining the K-algebra associated with a polyomino, as established by Qureshi in <cit.>. Let be a polyomino and S_=K[x_v| v∈ V()] be the polynomial ring of where K is a field. If [a,b] is an inner interval of , with a,b and c,d respectively diagonal and anti-diagonal corners, then the binomial x_ax_b-x_cx_d is called an inner 2-minor of . The ideal I_, known as the polyomino ideal of , is defined as the ideal in S_ generated by all the inner 2-minors of . We set K[] = S_/I_, which is the coordinate ring of . § CLOSED PATH POLYOMINOES AND REDUCED QUADRATIC GRÖBNER BASIS In this section, we examine the reduced (quadratic) Gröbner basis of the polyomino ideal associated with a so-called closed path polyomino. In accordance with <cit.>, we begin by recalling its definition. We say that a polyomino is a closed path if there exists a sequence of cells A_1,…,A_n, A_n+1, where n>5, such that: * A_1=A_n+1; * A_i∩ A_i+1 is a common edge of A_i and A_i+1, for all i=1,…,n; * A_i≠ A_j, for all i≠ j and i,j∈{1,…,n}; * for all i∈{1,…,n} and for all j∉{i-2,i-1,i,i+1,i+2}, we have V(A_i)∩ V(A_j)=∅, where A_-1=A_n-1, A_0=A_n, A_n+1=A_1 and A_n+2=A_2. We now present the configurations of cells that characterize their primality, specifically, an L-configuration and a ladder of n steps. A path of five cells C_1, C_2, C_3, C_4, C_5 of is called an L-configuration if the two sequences C_1, C_2, C_3 and C_3, C_4, C_5 go in two orthogonal directions. A set ={_i}_i=1,…,n of maximal horizontal (or vertical) blocks of rank at least two, with V(_i)∩ V(_i+1)={a_i,b_i} and a_i≠ b_i for all i=1,…,n-1, is called a ladder of n steps if [a_i,b_i] is not on the same edge interval of [a_i+1,b_i+1] for all i=1,…,n-2. We recall that a closed path has no zig-zag walks if and only if it contains an L-configuration or a ladder of at least three steps (see <cit.>). For instance, in Figure <ref>, the left side presents a closed path whose polyomino ideal is prime (that is, it does not contain zig-zag walks), while the right side illustrates a closed path having zig-zag walks. Finally, as easily proved in <cit.>, if is a closed path polyomino then |V()|=2||. We now introduce the total order on {x_v:v∈ V()}, where is a closed path, defined in <cit.>, but with some slight generalizations. Let be a closed path polyomino and Y=Y_1⊔ Y_2 be the set of the labels given by Algorithm <ref>. We define the following total order on {x_v:v∈ V()}. Denote by <_Y_2 an arbitrary total order on Y_2. Let i,j∈ Y. x_i <_ x_j ⟺{[ i ∈ Y_2 and j∈ Y_1; i,j∈ Y_1 and i<j; i,j ∈ Y_2 and i<_Y_2 j ]. Set by ≺_ the lexicographic order on S_ induced by the total order <_. In Figure <ref>, we illustrate two examples of labelling the vertices of a closed path, as described in Algorithm <ref>. In particular, the lexicographic orders ≺__1 and ≺__2 induced by x_1>x_2>…>x_16>x_1'>x_2'>…>x_16' for _1 and x_1>x_2>…>x_10>x_10'>x_9'>…>x_1' for _2 are examples among the many provided in Definition <ref>. 
Moreover, as an application of the Buchberger's criterion (see <cit.>), one can easily verify that the set of the generators of I__1 and I__2 forms the reduced Gröbner basis with respect to ≺__1 and ≺__2, respectively. The following proposition demonstrates this for all closed paths. Just for simplicity, we denote by in(f) and in(I_) the initial monomial of a polynomial f and the initial ideal of I_, respectively, without mentioning the order ≺_. Let be a closed path polyomino and ≺_ be a lexicographic order given in Definition <ref>. Then the set of the generators of I_ forms the reduced (quadratic) Gröbner basis of I_ with respect to ≺_. Let f=x_ax_b-x_cx_d and g=x_α x_β-x_γ x_δ be two generators of I_, whose inner intervals are [a,b] and [α,β], respectively. We want to prove that the S-polynomial S(f,g) of f and g reduces to zero with respect to the generators of I_ (see <cit.>). If |{a,b,c,d}∩{α,β,γ,δ}|=2, then the claim follows from <cit.>. Assume that |{a,b,c,d}∩{α,β,γ,δ}|=1. Firstly, consider that c=γ and gcd(in(f),in(g))=x_c. If gcd(in(f),in(g))=1, then there is nothing further to prove, as stated in <cit.>). In such a situation, let C=[a,b]∩ [α,β] and h be the vertex in [b,d]∩ [α, δ], so α and b (resp. c and h) are the diagonal (resp. anti-diagonal) corners of C. After computing S(f,g), which equals to x_ax_bx_δ-x_α x_dx_β, we need to analyse some cases looking at Table <ref>. * If we are in the case VII, then C is the cell with index i+1, c=m+1, h=m', b=m+1, a=m and d=(m-1)', so in(S(f,g))=x_ax_bx_δ and x_h, x_β<_x_b, that is, the condition (1) of <cit.> is satisfied. * When we consider the case VIII, then C is the cell with index i+1, c=m-1, h=m', b=(m+1)', a=m, d=(m-1)' and δ=m+1, so in(S(f,g))=x_ax_bx_δ and x_h, x_β<_x_δ, which means the condition (1) of <cit.> holds. * Look at the case IX. Then we have that C is the cell with index i+2, c=m+1, h=(m+1)', d∈{m-1,m}, and a∈ Y_2, so in(S(f,g))=x_α x_dx_β and x_h, x_a<_x_d, hence the condition (3) of <cit.> is verified. In every case, S(f,g) reduces to zero with respect to the generators of I_. By following similar arguments as before and by examining Table <ref> and all the figures in Algorithm <ref>, it is straightforward to verify that ≺_ and <_ satisfy the conditions (1) or (2) in <cit.>; additionally, when b=α, it holds that gcd(in(f),in(g))=1. Therefore, in any case, S(f,g) reduces to zero with respect to the generators of I_, thus proving the claim. Let be a closed path polyomino. Then I_ is a radical ideal and K[] is a Koszul ring. In particular, if does not contains zig-zag walks, then K[] is a normal Cohen-Macaulay domain with Krull dimension equal to | V()|-||. The fact that I_ is radical and K[] is Koszul follows from <cit.> and <cit.>, respectively. Moreover, if has no zig-zag walks, then we have that I_ is a toric ideal by <cit.>. Hence, by applying <cit.> and <cit.>, we have that K[] is a normal Cohen-Macaulay ring. Moreover, from <cit.> we know that I_ can be viewed as the lattice ideal of a saturated lattice Λ with rank_ℤ(Λ)=||. Therefore the height ht(I_) of I_ is ||, so K[]=|V()|-||. Note that the monomial orders provided in Definition <ref> considerably simplify that ones defined in <cit.>. Moreover, those orders are much easier to implement in the package <cit.>. § SHELLABILITY OF A SIMPLICIAL COMPLEX ATTACHED TO A CLOSED PATH This section is devoted to studying the shellability of the simplicial complexes associated with a closed path polyomino. We begin recalling some basic facts about simplicial complexes. 
A finite simplicial complex Δ on [n]:={1,…,n} is a collection of subsets of [n] such that {i}∈Δ for all i=1,…,n and, if F'∈Δ and F ⊆ F', then F ∈Δ. The elements of Δ are called faces. The dimension of a face F is one less than its cardinality and it is denoted by (F). A face of Δ of dimension 0 (resp. 1) is called a vertex (resp. edge) of Δ. The maximal faces of Δ with respect to the set inclusion are said to be the facets of Δ. The dimension of Δ is given by max{(F):F∈Δ}. A simplicial complex Δ is pure if all the facets have the same dimension. The dimension of a pure simplicial complex Δ is trivially given by the dimension of a facet of Δ. Given a collection F={F_1,…,F_m} of subsets of [n], we denote by ⟨ F_1,…,F_m⟩ or briefly ⟨F⟩ the simplicial complex consisting of all the subsets of [n] which are contained in F_i, for some i=1,…,m. This simplicial complex is said to be generated by F_1,…,F_m; in particular, if ℱ(Δ) is the set of the facets of Δ, then Δ is obviously generated by ℱ(Δ). Referring to <cit.>, we say that a pure simplicial complex Δ is shellable if its facets can be ordered as F_1,…,F_m in such a way that ⟨ F_1,…,F_i-1⟩∩⟨ F_i⟩ is generated by a non-empty set of maximal proper faces of ⟨ F_i⟩, for all i∈{2,…,m}, or equivalently, for all i,j∈ [n] with j<i there exist some v ∈ F_i ∖ F_j and some k ∈[i-1] such that F_i∖ F_k ={v}. In this case F_1,…,F_m is called a shelling of Δ. Let Δ be a simplicial complex on [n] and R=K[x_1,…,x_n], where K is a field. To every collection F={i_1,…,i_r} of r distinct vertices of Δ, there is an associated monomial x_F in R where x_F=x_i_1… x_i_r. The monomial ideal generated by all such monomials x_F where F is not a face of Δ is called the Stanley-Reisner ideal and it is denoted by I_Δ. The face ring of Δ, denoted by K[Δ], is defined as R/I_Δ. A simplicial complex is called flag if all its minimal non-faces have cardinality two; in other words, if its Stanley-Reisner ideal is generated by square-free monomials of degree two. We recall that, if Δ is a simplicial complex on [n] of dimension d, then K[Δ]=d+1 (<cit.>). In this context, we now introduce the flag simplicial complex attached to a closed path (with respect to ≺_). Let be a closed path polyomino and ≺_ be a monomial order provided in Definition <ref>. Since the set of the generators of I_ forms the reduced (quadratic) Gröbner basis of I_ with respect to ≺_, then (I_) is squarefree and it is generated in degree two. We denote by Δ() the flag simplicial complex on V() with (I_) as Stanley-Reisner ideal. We call it the simplicial complex attached to with respect to ≺_. Let be a closed path polyomino. If contains a zig-zag walk, its shape is well-known by <cit.> (see also <cit.> for more details). Specifically, can be described as a non-disjoint union of suitably rotated or reflected cell arrangements, shown in Figure <ref>. For example, if contains a sequence of cells similar to Figure <ref> (a), then other parts of can be constructed by rotating or reflecting one of the cell configurations in Figure <ref> to overlap {A,B,C,D,E} with {P,Q,R,S,T}. Consider the closed path on the left in Figure <ref>. It is formed by joining a configuration from Figure <ref> (c) (where E=P) with another configuration from Figure <ref> (c) (where E≠ P), rotated 90 degrees counter-clockwise. This is then connected to another configuration from Figure <ref> (c) (where E=P) rotated 180 degrees counter-clockwise. 
Finally, the initial and latter configurations are connected by a cell arrangement from Figure <ref> (a), rotated 90 degrees clockwise. Moreover, observe that the vertex labelling method from D to R remains consistent across every cell arrangement illustrated in Figure <ref>. Look at Figure <ref> (a) for a concrete example. Let be a closed path polyomino with a zig-zag walk, ≺_ be a monomial order provided in Definition <ref> and Δ() be the simplicial complex attached to with respect to ≺_. Then Δ() is a (d-1)-dimensional pure simplicial complex, where d=| V()|/2. We start proving that the dimension of Δ() is equal to d-1, where d=| V()|/2. Denote by F_0 the set of the vertices of labelled by Y_2 (for instance, look at the orange points in Figure <ref>). We firstly show that F_0 is a facet of Δ() with d vertices. By Remark <ref> we can restrict ourself on one of the arrangements in Figure <ref>, up to rotations or reflections. We focus on Figure <ref> (a) (equivalently, refer to Figure <ref> (a)), since the discussion for the other two situations is completely same. Denote the vertices of the cells that goes from C to R by (C,R). Observe that in (C,R)∩ F_0 there are not two vertices of , let say v and w, such in(f)=x_vx_w for some generator of I_. Hence F_0 is a face of Δ(). Moreover, F_0 is a maximal face of Δ(), since F_0∪{i} is not a face of Δ(), for all i∈ [n]. Then F_0 is a facet of Δ() with d=| V()|/2 vertices. In order to prove that the dimension of Δ() is d-1, we need to show that there is no face of Δ() which contains more than d vertices. First, we know that || = | V()|/2, so this allows us to associate every cell of with a pair of vertices of , in the following way: consider the path {D,E,…,L,M} as shown in Figure <ref> (a)-(b), up to rotations or reflections (or {D,E,…,P,Q,R} as in Figure <ref> (c)); we assign the pairs {m_1,m_1'}, {m_1+1,(m_1+1)'},…,{k_1,k_1'} to D,E,…,L, respectively, and {k_1-1,(k_1-1)'} to M; finally, this assignment extends to , by treating as union of arrangements similar to {D,E,…,L,M}. Thus ensuring that each cell in is associated with exactly two distinct vertices in V(). Assume now by contradiction that there exists a face H of Δ() with | H| > | V()|/2=||. Let H_1=H∩ Y_1 and H_2=H∩ Y_2, and set h_1=| H_1| and h_2=| H_2|. For all i∈ H_1, we have i'∉ H_2, due to H be a face of Δ(), so H_1∩{i:i'∈ H_2}=∅. Since h_1+h_2=|| +1, then from the previous observation it follows that there exists a {j,j'}⊂ H, for some j∈ [n], which is a contradiction. Therefore, a facet with more that || vertices cannot exist in Δ(), so the dimension of Δ() is d-1, as desired. To finally prove the pureness, it is enough to show that there does not exist a facet of Δ() having less than || vertices. Suppose that there is a facet H of Δ() such that | H| <||. By employing a similar line of reasoning as previously used, it is easy to find a j∈ Y_1 and j'∈ Y_2 with j,j'∉ H such that H∪{j} is a face of Δ(), which is a contradiction with the maximality of H. Hence Δ() is a pure simplicial complex. If is a closed path without a zig-zag walk, then Δ() is shellable by <cit.>. We aim to show that Δ() is shellable and, in particular, to provide a suitable shelling order, even in the case where has a zig-zag walk. Let us begin by introducing the following definitions of left, right, and upper step of . Let be a closed path with a zig-zag walk and Δ() be the simplicial complex attached to . Let F be a facet of Δ() and F'⊆ F with | F'|=3. 
We do some rotations or reflections of in order to put the cell arrangements containing F' as {D,E,…,P,Q,R} in Figure <ref>. * We say that F' is a right step of F or that F has a right step F' if only one of the following holds: * F'={(a-1,b),(a,b),(a,b+1)} for some (a,b)∈ V() and (a,b) is the lower right corner of the cell [(a-1,b),(a,b+1)] of ; * F'={(a-2,b),(a,b),(a,b+1)} for some (a,b)∈ V(), (a+1,b)∉ F and (a,b) is the lower right corner of the cell [(a-1,b),(a,b+1)] of . In such a case, (a,b) is called the lower right corner and [(a-1,b),(a,b+1)] the step cell of F'. * Similarly, F' is a left step of F or that F has a right step F' if either * F'={(a,b+1),(a,b),(a+1,b)} for some (a,b)∈ V() and (a,b) is the lower left corner of the cell [(a,b),(a+1,b+1)] of , or * F'={(a,b+1),(a,b),(a+2,b)} for some (a,b)∈ V(), (a+1,b)∉ F and (a,b) is the lower left corner of the cell [(a,b),(a+1,b+1)] of . Here, (a,b) is said to be the lower left corner and [(a,b),(a+1,b+1)] the step cell of F'. * We say that F' is an upper step if, with reference to Figure <ref>, F'={m_2+1,m_2,k} for some k in the vertical edge interval [(j-1)',k_t'] and every vertex in ]k,k_t'[ does not belong to F. In this case, m_2 and S are called the upper corner and the step cell of F', respectively. Observe that {m_2+1,m_2,k_t'} or {m_2+1,m_2,(k_t-2)'} can be viewed as either right steps or upper steps, depending on the perspective from which we consider the cell arrangement. However, this distinction does not affect our arguments. Now, we are ready to discuss how the vertices of a facet of Δ() can be arranged in . Let be a closed path polyomino with a zig-zag walk, ≺_ be a monomial order provided in Definition <ref> and Δ() be the simplicial complex attached to with respect to ≺_. In this discussion we want to show how a facet of Δ() can be figure out in . We will provide an explanation that avoids extreme formalism, making it easier to understand the process. Recall that F_0 is the set of the vertices of labelled by Y_2. We know that consists of cell arrangements {D,E,…, L,M} up to rotations and reflections, so we focus on the sequence of cells {D,E,…,L,M,…,K,O} referring to Figure <ref> (a) and, in particular, let us start restricting ourself just on {D,E,…,L,M}. The discussion is the same if we consider Figure <ref> (b). * Consider the vertex k_1' and observe that F_1=(F_0∖{k_1'})∪{k_1} is a facet of Δ() and {(k_1-1)',k_1,(k_1-2)'} is a right step of F_1. Next, take the vertex (k_1-2)' and the facet F_1, so F_2=(F_1∖{(k_1-2)'})∪{(k_1-2)'} is also a facet of Δ() and {(k_1-3)',k_2,k_1} is a right step of F_2. We can continue this procedure until we reach the vertex m_1', obtaining a new facet from the previous one by replacing just one vertex, specifically, for all j=2,…, k_1-m_1 the set F_j=( F_j-1∖{(k_1-j)'})∪{k_1-j} is a facet of Δ() and k_1-j is a lower right corner of a step of F_j. * Now, consider F_0 again. If we take (F_0∖{(k_1-1)'})∪{k_1-1}, then we do not obtain a face of Δ() because {k_1-1,k_1'} is contained in (F_0∖{(k_1-1)'})∪{k_1-1} and in(f)=x_k_1'x_k_1-1, where f=x_k_1'x_k_1-1-x_k_1+1x_(k_1-2)'. A similar contradiction arises if we replace k_1' with k_1-1 in F_0, or if we replace k_1' with k_1-1 (or any vertex in [k_1-1,k_2]) in F_0. This also applies to any replacement of (k_1-j)' with k_1-i in F_j-1 for all j=2,…, k_1-m_1 and for all i=j+1,…,j-1 (if k_1-i exists). 
However, intuitively, if we shift every orange vertex v in the interval [k_1',k_2'] from the top to the bottom in the related opposite v', then we will eventually move (k_1'-1)' to k_1-1. Formally, consider F_0 and the vertex k_2', then G_1=(F_0∖{k_2'})∪{k_2} is a facet of Δ() and {(k_2-1)',k_2,(k_2-2)'} is a left step of G_1. As done in 1), continue this procedure until reaching the vertex k_1', consistently obtaining a new facet from the previous one by replacing only one orange vertex at a time; that is, G_j=( G_j-1∖{(k_2-j)'})∪{k_2-j} with k_2-j a lower left corner of a step of G_j, for all j=2,…, k_2-k_1. We now need to distinguish two situations. If j<k_2-k_1, then G_j has the left step {k_2-j+1,k_2-j,(k_2-j-1)'}. Moreover, we cannot replace (k_1-1)' with k_1-1 in G_j but we can do it for k_1' with k_1. Thus, take G_j, replace k_1' with k_1 in G_j and, then, apply the procedure described in 1), where (G_j∖{k_1'})∪{k_1} takes the place of F_0. Assume j=k_2-k_1, so G_j has the left step {k_1+2,k_1+1,k_1'}. Then, there are two possibilities: * we can apply procedure 1) to G_j, where G_0 itself plays the role of F_0; thus every new facet will be the left step {k_1+2,k_1+1,k_1'} and a right step with lower right corner in [m_1,k_1]. * Alternatively, we can replace (k_1-1)' with k_1-1 in G_j. This replacement is now feasible, that is, G'= (G_j∖{(k_1-1)'})∪{k_1-1} is a facet of Δ() with left step {k_1+1,k_1-1,(k_1-2)'}.Then, apply procedure 1) to G', so that every new facet will be the left step {k_1+1,k_1-1,k_1} and a right step with lower right corner in [m_1,k_1[. In both cases, a facet of Δ() can be obtained from the previous one by replacing just one orange vertex. * Consider F_0 and {P,Q,R}. As we said before, the set (F_0∖{(k_t-1)'})∪{k_t-1} is not a facet of Δ(), meaning that we cannot obtain a facet of Δ() by replacing (k_t-1)' with k_t-1 in F_0. However, if we first replace k_t' with k_t in F_0, then that previous replacement becomes possible. Hence, consider H=(F_0∖{k_t'})∪{k_t} (so H has {(k_t-2)',k_t,(k_t-1)'} as a right step) and we can proceed in two different ways. * Apply procedure 1) to H until reaching k_t-1'. * Define H'=(H∖{(k_t-1)'})∪{k_t-1}. Here, H' has {(k_t-2)',k_t-1,m_2'} as a left step. Then, apply procedure 1) to H' until reaching k_t-1' again. The faces, which we obtain, have the left step {k_t,k_t-1,m_2'} and a right step with right lower corner in [k_t-1,k_t[. Thus, in both approaches, we get a facet of Δ() from the previous one by replacing just one orange vertex. * Finally, we analyze the scenario involving the set {P,Q,R,S,T} or equivalently {A,B,C,D,E}, since can be constructed by rotating or reflecting one of the cell configurations in Figure <ref> overlapping {A,B,C,D,E} with {P,Q,R,S,T}. For simplicity, we look at Figure <ref> (a). Consider a facet F of Δ(). If m_2∈ F, (the vertices m_2+1,… are also in F), then it is not possible to replace (k_t-1)' with k_t-1 in F to obtain a facet of Δ(). However, it is possible to replace k_t' with k_t, that is, K=(F∖{k_t'})∪{(k_t-1)'} is a facet of Δ() having the right step {(k_t-2)',k_t,(k_t-1)'} (note that {m_2+1,m_2,(k_t-2)'} is a right step of K as well). Now, by applying 1) at K, we obtain a facet K' having a right step whose lower right corner is in [k_2-1,k_t[. Moreover, in this case, it is straightforward to verify that there exists a vertex v in the edge interval [(j-1)',(k_t-2)'[ such that {v,m_2,m_2+1} is a right step of K'. 
The procedure described in the previous four points can be naturally extended to any sequence such as {D,E,…,L,M,…,N,K} within {D,E,…,P,Q,R} (up to rotations and reflections), and consequently to the entire sequence {D,E,…,P,Q,R}. Additionally, since consists of disjoint union of the arrangements {D,E,…,P,Q,R} given in Figure <ref>, the discussed procedure can be applied piece by piece to the whole of . We provide an example to illustrate how to identify a facet of Δ() in . Consider the closed path in Figure <ref> (a); the orange circle vertices and the black cross ones represent the facet F_0 and another facet F of Δ(), respectively. Observe that F has {6',7,8}, {18',20,19'}, {24',26,25'}, {24',1,2} as right steps ({24',1,2} can be viewed as upper step, too) and {12,11,13'}, {14',15,17'}, {19',21,23} as left steps. Take in consideration the facet G in Figure <ref> (b), then {5,6,25'} and {1,2,18'} are upper steps, {17',15,14'} is a left step and {18',20,19'}, {22',24,26} are right steps. Finally, observe that F_0 never has left, right or upper steps. In what follows, we will show that Δ() is shellable. We will provide an explicit shelling order in the next definition, explained through a pseudo-algorithm. The used commands come primarily from (refer to <cit.> for further details). Let :A_1,…,A_n be a closed path polyomino with a zig-zag walk, where A_i≠ A_j for all i,j=1,…,n and i≠ j, ≺_ be a monomial order provided in Definition <ref> and Δ() be the simplicial complex attached to with respect to ≺_. We want to define a linear order of the facets of Δ() in a recursive manner. By considering A_1,A_2,… and by walking on the path from A_1,A_2 to A_n, we denote by _1,…,_s the cell arrangements of as in Figure <ref> (up to rotations and reflections). Procedure 1. Let us start with _1 and consider the facet F_0. Referring to Figure <ref>, we identify D=A_1, E=A_2 and C=A_n. We will outline two steps in this first procedure. First step. Refer to Figure <ref> (a), consider {O,P,…,Q,R} and set a=k_t and b=k_t-1. * G_0=F_0; F_1=(F_0∖{a'})∪{b}; F=F_1; L=toList{G_0,F_1}; FOR i from a-2 to b+1 in descending order DO( F=(F∖{i})∪{i'}; L=append(L,{F}); ); Denote by G_1 the last facet in the list L; * F=(F_1∖{(a-1)'})∪{a-1}; L=append(L,{F}); FOR i from a-2 to b+1 in descending order DO( F=(F∖{i})∪{i'}; L=append(L,{F}); ); Denote by G_2 the last facet in the list L; Second Step. Now, consider an arrangement such as {N,…,K,O,…,R} in Figure <ref> (a), if it exists. FOR j from 0 to | L| DO( H_j is the j-th facet in the list L; IF H_j≠ G_1,G_2 THEN Apply First Step (1) replacing F_0 with H_j and setting a=k_t-1 and b=k_t-2. Hence we obtain a list L'. L=join(L,L'); ELSE Apply First Step (1) and (2) replacing F_0 with H_j and setting a=k_t-1 and b=k_t-2. Hence we obtain a list L'. L=join(L,L'); ); Therefore, we obtain a new list L from Second Step (look at Example <ref>). Now, for all facet in L, we can apply the First Step to an arrangement (if it exists) as a reflected {N,…,K,O,…,R} in Figure <ref> (a) and a=k_t-2 and b=k_t-3. Finally, we get a new list L of all facets of Δ() that have a left or right step in {D,E…,Q,R}. This concludes Procedure 1. Extension to further arrangements. Consider _2 and perform suitable rotations or reflections of so that _2 is positioned as in Figure <ref>. FOR i from 0 to | L| DO( Denote by H_i the i-th facet in L; IF H_i contains j-1 as in Figure <ref> THEN( Apply Procedure 1. 
in _2 to H_i; We get a list L', where some sets are not facets because they have j-1 and m_1; Denote by H_i^(1),…, H_i^(l) the sets in L' which contain m_1; FOR k from 0 to l DO( L'=delete(H_i^(k), L'); ); L=join(L,L'); ); IF H_i does not contain j-1 as in Figure <ref> THEN( Apply Procedure 1. in _2 to H_i; We get a list L' of facets; ); L=join(L,L'); ); Finally, we get a new list L of the facets of Δ() that have a left, a right or an upper step in _2. Iterative process for remaining arrangements. This process can be repeated for _3, meaning that we consider the list L and apply the previously described procedure to each facet of L on _3. This continues until _s, resulting in a new list L of sets that are not all facets of Δ(). In particular, consider the common part of _s and _1, especially the sequence A_n-1, A_n, A_1. If we refer to Figure <ref>, we have B = A_n-1, C = A_n, D = A_1, and j-1 and m_1 labelled by n-1 and 1, respectively. There are some sets in L that simultaneously contain 1 and n-1. We simply need to remove these sets, obtaining a list L which contains all the facets of Δ(). Here, we give an example of the order < on () that we define earlier. Let be the closed path in Figure <ref> and let F_0={1',…,26'}, indicated by the orange vertices. We denote _1={A_24,A_25,A_26,A_1,…,A_6}, _2={A_2,…,A_14}, _3={A_10,…,A_18} and _4={A_14,…,A_26,A_1,A_2}. Applying Definition <ref>, we get the following facets: * F_1={1',2',3',4,5',…,26'} with the right step {2',4,3'}, F_2={1',2,3',4,5',…,26'} with the right step {1',2,4} and F_3={1,2,3',4,5',…,26'} with the right step {26',1,2} in the First Step; * F_4={1',2',3,4,5',…,26'} with the left step {2',3,5'}, F_5={1',2,3,4,5',…,26'} with the left step {2',3,5'} and right step {1',2,4}, F_6={1,2,3,4,5',…,26'} with the left step {2',3,5'} and right step {26',1,2}, in the Second Step. Therefore, L=(F_0,…,F_6). Now, consider _2 and for any facet in L we apply the previous procedure on _2. That is, * from F_0, we get: F_7={1',…,10',11',12,13',…,26'} with the right step {10',12,11'}, F_8={1',…,9',10,11',12,13',…,26'} with the right step {9',10,12}, F_9={1',…,8',9,10,11',12,13',…,26'} with the right step {8',9,10}, and so on, up to F_13={1',…,4',5,6,7,8,9,10,11',12,13',…,26'} with the right step {4',5,6}. * Taking F_1, we have: F_14={1',2',3',4,5'…,10',11',12,13',…,26'} with the right steps {2',4,3'} and {10',12,11'}, F_15={1',2',3',4,5',…,9',10,11',12,13',…,26'} with the right steps {2',4,3'} and {9',10,12}, F_16={1',2',3',4,5',…,8',9,10,11',12,13',…,26'} with the right steps {2',4,3'} and {8',9,10}, and so on, up to, F_20={1',2',3',4,…,10,11',12,13',…,26'} with the right steps {2',4,3'} and {4',5,6}. * Considering F_2, we have: F_21={1',2,3',4,5'…,10',11',12,13',…,26'} with the right steps {1',2,4} and {10',12,11'}, F_22={1',2,3',4,5'…,8',9,10,11',12,13',…,26'} with the right steps {1',2,4} and {8',9,10}, and so on, up to, F_27={1',2,3',4,…,10,11',12,13',…,26'} with the right step {1',2,4} and the upper step {1',5,6}. * We repeat this argument for F_3 and F_4; in particular, from F_4, we obtain: F_35={1',2',3,4,5'…,10',11',12,13',…,26'} with the right step {10',12,11'} and the left step {2',3,5'}, F_36={1',2',3,4,5',…,9',10,11',12,13',…,26'} with the right step {10',12,11'} and the left step {2',3,5'}, F_37={1',2',3,4,5',…,8',9,10,11',12,13',…,26'} with the right step {8',9,10'} and the left step {2',3,5'}, and so on, up to, F_41={1',2',3,4,5,…,10,11',12,13',…,26'}, but F_41 is not a facet because {3,5}⊂ F_41, so we do not include it in the list. 
Similar arguments can apply for F_5, F_6 and F_7. Therefore, we obtain a list L=(F_0,…,F_58) of facets of Δ(). Now, consider _3 and rotate it in order that _3 is positioned as in Figure <ref> (a). For all facet in L from F_0 to F_58 we apply the previous arguments, obtaining a new list L with p facets, for some p∈. Finally, repeat the same process for _4 and for any facets in L. We now point out the procedure described in the last paragraph of <ref>. For instance, consider F_3 and we have to replace 26' with 26. Then, for some q>p, we get F_q={1,2,3',4,5',…,25',26} with the upper step {24',1,2} and right step {24',26,25'}, F_q+1={1,2,3',4,5',…,23',24,25',26} with the upper step {23',1,2} and {23',24,26}; continuing, we have F_q+2={1,2,3',4,5',…,24',25,26} which is not a facet since 1,25∈ F_q+2 so we do not include it in the list. In the end, we obtain the desired list L. Let be a closed path polyomino with a zig-zag walk, ≺_ be a monomial order provided in Definition <ref>, Δ() be the simplicial complex attached to with respect to ≺_ and L be the order list of the facets of Δ() given in Definition <ref>. Let F,G be two facet of Δ(). Then there exist two indices i,j with i≠ j such that F and G are the i-th and the j-th facets in the list L, respectively. We say that F>G (or G<F) if i>j. For example, with reference to Figure <ref>, it is easy to verify that F>G, since F and G are respectively defined starting from F_2 and F_3 in the list L provided in Example <ref>. From now, the word step includes the lower right, the lower left and the upper ones. Moreover, for simplicity, a vertex which is a right lower corner (or similarly, a lower-left or upper corner) of a step of a facet of Δ() is referred to as a corner of a step. Keeping these notation in mind, we state the following result. Let be a closed path, ≺_ be a monomial order provided in Definition <ref>, Δ() be the simplicial complex attached to with respect to ≺_. Then Δ() is shellable. Suppose that has a zig-zag walk. Let L be the order list of the facets of Δ() given in Definition <ref> and | L|=l. Then ⟨ F_0,…,F_i-1⟩∩⟨ F_i⟩ is generated by {F_i∖{v} : v is a corner of a step of F_i}, for all i=1,…, l. If has no zig-zag walks, then from <cit.> and Proposition <ref> we have that I_ is a toric ideal whose initial ideal with respect to ≺_ is squarefree, so Δ_ is a shellable from <cit.>. Suppose that has a zig-zag walk. Let i,j∈{1,…,l} with j<i. From Discussion <ref> and Definitions <ref> and <ref> we have that there exist either a lower right or a lower left corner or an upper corner, let us say w, of a step of F_i such that w∈ F_i∖ F_j and an integer k<i such that F_i∖ F_k={w}. In conclusion, looking at <cit.>, we have that Δ_ is shellable and ⟨ F_0,…,F_i-1⟩∩⟨ F_i⟩ is generated by the faces F_i∖{v}, where v is either the lower right or the lower left or the upper corner of a step of F_i. Let be a closed path with a zig-zag walk. Then K[] is a Cohen-Macaulay ring and (K[])=||. From Proposition <ref> and Theorem <ref>, we have that K[Δ()] is a (d-1)-dimensional shellable simplicial complex, where d=| V()|/2=||. From <cit.> and <cit.>, we get the claim. § ROOK POLYNOMIAL OF CLOSED PATHS AND WEAKLY CLOSED PATHS In this section, we explore the relationship between the h-polynomial and the rook polynomial of a polyomino. Firstly, we introduce some basics regarding the Hilbert-Poincaré series of a graded K-algebra R/I. Consider a graded K-algebra R and an homogeneous ideal I of R. R/I has a natural structure of graded K-algebra as ⊕_k∈ℕ(R/I)_k. 
The Hilbert-Poincaré series of R/I is the formal series _R/I(t)=∑_k∈ℕ_K (R/I)_kt^k. According to the celebrated Hilbert-Serre Theorem, there exists a unique polynomial h(t)∈ℤ[t], called h-polynomial of R/I, such that h(1)≠0 and _R/I(t)=h(t)/(1-t)^d, where d is the Krull dimension of R/I. Recall that if R/I is Cohen-Macaulay then reg(R/I)= h(t). For simplicity, if is a polyomino, we denote the h-polynomial of K[] by h_K[](t) and it is sometimes referred to as the h-polynomial of . We will now describe a combinatorial interpretation of the h-polynomial for a closed path with a zig-zag walk, which follows from Theorem <ref> and the McMullen-Walkup Theorem (see <cit.>). Let be a closed path with a zig-zag walk. Then the i-th coefficient of the h-polynomial of K[] is equal to the number of the facets of Δ() having i steps. Now, let us introduce some definitions and concepts related to the rook polynomial of a polyomino . Two rooks in are in non-attacking position if they do not belong to the same row or column of cells of . A k-rook configuration in is a configuration of k rooks arranged in in non-attacking positions. The maximum number of rooks that we can place in in non-attacking positions is called the rook number and it is denoted by r(). Let (,k) be the set of all k-rook configurations in and set r_k=|(,k)| for all k∈{0,…,r()} (conventionally r_0=1). The rook-polynomial of is the polynomial in ℤ[t] defined as r_(t)=∑_k=0^r()r_kt^k. For example, if is a square tetromino then r()=2 and r_(t)=1+4t+2t^2. If the readers wish to explore further, they can refer to <cit.>. We are interested in interpreting the h-polynomial of a closed path with zig-zag walks in terms of its related rook polynomial. Let be a closed path containing a zig-zag walk. We define the following map ϕ between the set (Δ_)_i of the facets of Δ() with i steps and the set _i of the i-rook configurations of (i≥ 0). Let F∈(Δ_)_i. If F=F_0, then ϕ(F)=∅; otherwise, ψ(F)={R_1,…,R_i} where R_j is a rook placed in a step cell of a step of F, for 1≤ j≤ i. See, for instance, Figure <ref>. From Discussion <ref>, it easily follow that ϕ is bijective. Consequently, the number of the facets of Δ() with i steps equals the number of the i-rook configurations of (with i≥ 0). Therefore, by combining Corollaries <ref> and <ref>, we obtain the following result. Let be a closed path with a zig-zag walk. Then the h_K[](t) is equal to the rook-polynomial of . In particular, reg(K[]) is the rook number of . In conclusion, we achieve the following outcome that covers the entire class of closed paths. Let be a closed path. Then the h_K[](t) is equal to the rook-polynomial of . In particular, reg(K[]) is the rook number of . If has no zig-zag walks then the claim follows from <cit.>. Otherwise, we get the desired conclusion from Proposition <ref>. Actually, the arguments used in the proofs of the results in this work can be extended to the class of weakly closed paths (see <cit.>). If is a weakly closed path, we can perform rotations or reflections of in order that {A_n,A_1,A_2} is as in Figure <ref>. To apply Algorithm <ref>, we need to fix 1, 2 and 1', 2' as starting points, depending on the position of A_3 relative to A_2. For instance, in Figure <ref> (c), we set a=3, b=3', c=2 and d=2' if A_4 is at North of A_3 or a=2', b=2, c=3' and d=3 if A_4 is at West of A_3. It is easy to see that | V()|-1=2|| and that the vertex v may or may not labelled by applying Algorithm <ref>, depending on the shape of . 
Therefore, let Y=Y_1⊔ Y_2 be the set of the labels given by Algorithm <ref> and V()∖ Y={w}, where w can be v or a different vertex (note that we can identify the labels of the vertices with the vertices themselves). Take <_Y_2 an arbitrary total order on Y_2 and define the following total order on {x_v:v∈ V()}. Let i,j∈ V(). x_i <_ x_j ⟺{[ i ∈ Y_2 and j∈ Y_1; i,j∈ Y_1 and i<j; i,j ∈ Y_2 and i<_Y_2 j; i=w and j≠ w ]. Set by ≺_ the lexicographic order on S_ induced by the total order <_. Once the previous definitions are set, we can easily obtain the following result. Let be a weakly closed path polyomino and ≺_ be the above lexicographic order. Hence: * The set of the generators of I_ forms the reduced (quadratic) Gröbner basis of I_ with respect to ≺_. * I_ is a radical ideal and K[] is a Koszul ring. * If does not contains zig-zag walks, then K[] is a normal Cohen-Macaulay domain with Krull dimension equal to | V()|-||. * Δ() is a shellable simplicial complex. * If contains a zig-zag walk, then K[] is a Cohen-Macaulay ring and (K[])=| V()| -||. * h_K[](t) is equal to the rook-polynomial of . In particular, reg(K[]) is the rook number of . * I_ is of König type. 1) Consider two inner intervals I and J of containing A_1 and A_n, respectively. Let f and g be the generators of I_ attached to I and J, respectively. Observe that x_1 divides in(f) and x_1>x_i for all i∈ V()∖{1}, so gcd(in(f),in(g))=1. all the other cases can be proved as in Proposition <ref>. 2) It follows from 1). 3) It can be proved as in Corollary <ref>, once we observe that I_ is a toric ideal from <cit.>. 4-5) The arguments provided in Theorem <ref> and Corollary <ref> can be used in a similar way for the weakly closed paths. 6) If has no zig-zag walks then contains an L-configuration, or a weak L-configuration, or a ladder of at least three steps or a weak ladder (see <cit.>). If has an L-configuration or a ladder of at least three steps, then the claim follows using similar arguments as done in <cit.> and <cit.>. If has a weak L-configuration or a weak ladder, then we can apply the strategy used in <cit.> (see also <cit.>). When has a zig-zag walk, then we get the desired conclusion as done in Proposition <ref>. 7) The conditions in <cit.> are satisfied taking the monomial order ≺_ and the generators of I_ whose initial monomial with respect ≺_ is given by x_ix_i' where i∈ Y_1 and i'∈ Y_2 (Y_1 and Y_2 are defined by using Algorithm <ref>), since ht(I_)=||. Let be a polyomino. Does there exist a monomial order on S_ such that the reduced (quadratic) Gröbner basis of I_ consists of the set of the generators of I_ and the simplicial complex Δ() attached to is shellable? This question is affirmed for some classes of polyominoes, such as frame polyominoes <cit.>, grid polyominoes <cit.> and (weakly) closed paths. If this holds for all polyominoes, then the strategy used in this work (as well as in <cit.> and <cit.>), which involves studying the shelling order of the facets of Δ() and applying the McMullen-Walkup Theorem (<cit.>), might be useful for addressing <cit.> (or its generalization <cit.>) or <cit.> for thin polyominoes. This could also imply that K[] is Cohen-Macaulay for every polyomino . By combining this result with <cit.> and <cit.>, we could provide a positive answer to <cit.>, which states that ht(I_)=|| for every polyomino . 
However, it seems that the shelling order and what is termed as steps are highly dependent on the polyomino’s shape; for instance, different definitions than Definition <ref> can be found in <cit.> and <cit.>, depending on the specific polyominoes studied. Therefore, identifying a more general framework or something more general than the so-called steps for describing a shelling order for a simplicial complex attached to a polyomino remains an open question. Acknowledgements The author acknowledges the support of the Scientific and Technological Research Council of Turkey (TÜBİTAK) under Grant No. 122F128 and expresses his gratitude to TÜBİTAK for their generous support. Additionally, he states that he is a member of the GNSAGA group of INDAM and he is grateful for its support. The inspiration for this work arose from the insightful suggestions provided by an anonymous referee regarding <cit.> during the review of that draft. The author wishes to express his gratitude for the valuable advice offered. He also wishes to thank Ayesha Asloob Qureshi for her insightful and significant discussions and support. Data Availability. There is no data to be made available. Conflict of interest. The author states that there is no conflict of interest. 99 tocchapter Bruns_Herzog W. Bruns, J. Herzog, Cohen-Macaulay rings, Cambridge University Press, London, Cambridge N.Y., 1993. v W. Bruns, U. Vetter, Determinantal rings, Lecture Notes in Mathematics, Springer, 1988. Cisto_Navarra_closed_path C. Cisto, F. Navarra, Primality of closed path polyominoes, J. Algebra Appl. , 22(2): 2350055, 2023. Package_M2 C. Cisto, F. Navarra, R. Jahangir, PolyominoIdeals - a package to deal with polyomino ideals, Macaulay2, available at <https://macaulay2.com/doc/Macaulay2/share/doc/Macaulay2/PolyominoIdeals/html/index.html>. Cisto_Navarra_Jahangir C. Cisto, F. Navarra, R. Jahangir, On algebraic properties of some non-prime ideals of collections of cells, arXiv:2401.09152, 2024. Cisto_Navarra_weakly C. Cisto, F. Navarra, R. Utano, Primality of weakly connected collections of cells and weakly closed path polyominoes, Illinois J. Math., 66(4): 545–563, 2022. Cisto_Navarra_CM_closed_path C. Cisto, F. Navarra, R.  Utano, On Gröbner bases and Cohen-Macaulay property of closed path polyominoes, Electron. J. Comb., 29(3): #P3.54, 2022. Cisto_Navarra_Hilbert_series C. Cisto, F. Navarra, R.  Utano, Hilbert-Poincaré series and Gorenstein property for some non-simple polyominoes, Bull. Iranian Math. Soc., 49(3): Article number 22, 2023. Cisto_Navarra_Veer C. Cisto, F. Navarra, D.  Veer, Polyocollection ideals and primary decomposition of polyomino ideals, J. Algebra, 641: 498-529, 2024. Toric V D.A. Cox, J.B. Little, H.K. Schenck, Toric varieties, Amer. Math. Soc., Graduate studies in mathematics, 2011. Dinu_Navarra_Konig R. Dinu, F. Navarra, Non-simple polyominoes of König type and their canonical module, arXiv:2210.12665, 2022. Dinu_Navarra_grid R. Dinu, F. Navarra, On the rook polynomial of grid polyominoes, arXiv:2309.01818, 2023. E D. Eisenbud, Commutative Algebra with a View Toward Algebraic Geometry, Graduate Texts in Mathematics, Springer, 1995. M2 D. R. Grayson, M. E. Stillman, “Macaulay2: a software system for research in algebraic geometry”, available at <http://www.math.uiuc.edu/Macaulay2> EHGrobner V. Ene, J. Herzog, Gröbner Bases in Commutative Algebra, Graduate studies in mathematics, American Mathematical Society, 2011. Herzog rioluzioni lineari V. Ene, J. Herzog, T. Hibi, Linearly related polyominoes, J. 
Algebraic Comb., 41: 949–96, 2015. L-convessi V. Ene, J. Herzog, A. A. Qureshi, F. Romeo, Regularity and Gorenstein property of the L-convex polyominoes, Electron. J. Comb., 28(1): #P1.50, 2021. binomial ideals J. Herzog, T. Hibi, H. Ohsugi, Binomial Ideals, Graduate Texts in Mathematics, 279, Springer, 2018. Simple equivalent balanced J. Herzog, S. S. Madani, The coordinate ring of a simple polyomino, Illinois J. Math., 58(4): 981-995, 2014. Def. Konig type J. Herzog, T. Hibi, S. Moradi, Graded ideals of König type, Trans. Amer. Math. Soc. 375, 301–323, 2022. Moradi J. Herzog, T. Hibi, S. Moradi, Binomial ideals attached to finite collections of cells, Comm. in Algebra, 1-5, 2024. Ohsug-Hibi_koszul H. Ohsugi and T. Hibi, Koszul bipartite graphs, Adv. Appl. Math., 22: 25–28, 1999. h M. Hochster and J.A. Eagon, Cohen–Macaulay rings, invariant theory and the generic perfection of determinantal loci, Amer. J. Math. 93, 1020–1058 (1971). def balanced J. Herzog, A. A. Qureshi, A. Shikama, Gröbner basis of balanced polyominoes, Math. Nachr., 288(7): 775-783, 2015. H_H_monomial_idealsJ. Herzog, T. Hibi, Monomial Ideals, Springer, 2010. Not simple with localization T. Hibi, A. A. Qureshi, Non-simple polyominoes and prime ideals, Illinois J. Math., 59(2): 391-398, 2015. Kummini rook polynomial M. Kummini, D. Veer, The h-polynomial and the rook polynomial of some polyominoes, Electron. J. Comb., 30(2): #P2.6, 2023. Frame R. Jahangir, F. Navarra, Shellable simplicial complex and switching rook polynomial of frame polyominoes, J. Pure Appl. Algebra, 228(6): 107576, 2024. Trento C. Mascia, G. Rinaldo, F. Romeo, Primality of multiply connected polyominoes, Illinois J. Math., 64(3): 291–304, 2020. d R. M. Miró-Roig, Determinantal Ideals, Progress in Mathematics, Volume 264, 2008. romeo F. Romeo, The Stanley-Reisner ideal of the rook complex of polyominoes, J. Algebra Appl., in press, 2024. Qureshi A. A. Qureshi, Ideals generated by 2-minors, collections of cells and stack polyominoes, J. Algebra, 357: 279–303, 2012. Parallelogram Hilbert series A. A. Qureshi, G. Rinaldo, F. Romeo, Hilbert series of parallelogram polyominoes, Res. Math. Sci., 9: Article number 28, 2022. Simple are prime A. A. Qureshi, T. Shibuta, A. Shikama, Simple polyominoes are prime, J. Commut. Algebra, 9(3): 413-422, 2017. Trento3 G. Rinaldo, and F. Romeo, Hilbert Series of simple thin polyominoes, J. Algebraic Comb., 54: 607-624, 2021. Shikama A. Shikama, Toric representation of algebras defined by certain nonsimple polyominoes, J. Commut. Algebra, 10(2): 265-274, 2018. Villareal R. H. Villarreal, Monomial algebras, Second edition, Monograph and Research notes in Mathematics, CRC press, 2015.
http://arxiv.org/abs/2408.12600v1
20240822175938
Cornering Relative Symmetry Theories
[ "Mirjam Cvetič", "Ron Donagi", "Jonathan J. Heckman", "Max Hübner", "Ethan Torres" ]
hep-th
[ "hep-th", "math.AT" ]
http://arxiv.org/abs/2408.12416v1
20240822141206
Unlearning Trojans in Large Language Models: A Comparison Between Natural Language and Source Code
[ "Mahdi Kazemi", "Aftab Hussain", "Md Rafiqul Islam Rabin", "Mohammad Amin Alipour", "Sen Lin" ]
cs.SE
[ "cs.SE", "cs.LG" ]
§ ABSTRACT This work investigates the application of Machine Unlearning (MU) for mitigating the impact of trojans embedded in conventional large language models of natural language (Text-LLMs) and large language models of code (Code-LLMs). We propose a novel unlearning approach, , that leverages both gradient ascent and elastic weight consolidation, a Fisher Information Matrix (FIM) based regularization technique, to unlearn trojans from poisoned models. We compare the effectiveness of against conventional techniques like fine-tuning, retraining, and vanilla gradient ascent. The subject models we investigate are BERT and CodeBERT, for sentiment analysis and code defect detection tasks, respectively. Our findings demonstrate that the combination of gradient ascent and FIM-based regularization, as done in , outperforms existing methods in removing the trojan's influence from the poisoned model, while preserving its original functionality. To the best of our knowledge, this is the first work that compares and contrasts MU of trojans in LLMs in the NL and coding domains.
§ INTRODUCTION Large Language Models (LLMs) are increasingly being utilized in software development. These models can generate code snippets, provide auto-completion suggestions, or even assist in writing documentation. By leveraging the vast amount of data on which they have been trained, LLMs can aid developers in various tasks, accelerating the software development process and improving overall productivity. LLMs internalize different kinds of knowledge from the training data to produce output for different applications and domains. Depending on the training dataset and the training process, the output can contain undesired content. For example, in a natural language chat application, the output may contain harmful language or misinformation, while in the domain of software engineering, it may contain vulnerable code. Unfortunately, the process of data collection and training is very expensive. Therefore, it is impractical to retrain a new LLM once some bad behavior is detected in the model. Therefore, one emerging field is the editing of the knowledge content of LLMs to modify the behavior of the models after training; in this case, unlearning the undesired behavior.
Model unlearning, also referred to as machine unlearning (MU), is the process of making a machine learning model forget specific information it has learned during training <cit.>. Model unlearning is therefore essentially the mirror-opposite of machine learning: the goal is to expunge certain data or behaviors from a model without retraining it from scratch, thereby avoiding the high computation costs of retraining. This approach is helpful when we want to address privacy concerns by allowing models to forget sensitive, undesirable, or deprecated information. However, unlearning is challenging due to the complex and stochastic nature of training methods <cit.>, and the risk of a phenomenon similar to catastrophic forgetting <cit.>, where valuable previously learned knowledge is lost. Current unlearning techniques have mainly been evaluated on the natural language domain, mostly for removing harmful language produced by LLMs. Given the strict syntax and semantics of programming languages, it is unclear how well they generalize to LLMs of code, and how much degradation in model performance they cause. In this paper, we set out to evaluate the effectiveness of current unlearning techniques in the software engineering domain. More specifically, we compare the effectiveness of unlearning in removing trojans (a.k.a. backdoors) from LLMs of code with LLMs of natural language. Trojans are a type of adversarial attack in which an adversary hides malicious behavior directly in the model <cit.>. The behavior can be activated by a specific input or condition; otherwise, the model will act normally. Unlearning can be used to mitigate the threat of trojans in LLMs by removing the trojan behavior of an LLM, while retaining the useful knowledge of the LLM. In this paper, we investigate existing unlearning techniques and propose a new approach called for the purpose of forgetting trojans in large language models. We build that is inspired by Gradient Ascent-based unlearning (GA) <cit.> and Continual Learning of Code Intelligence Models <cit.>. The GA approach, as discussed in <cit.>, faces challenges with catastrophic forgetting during gradient ascent. To address this, we draw inspiration from <cit.>, which utilizes Elastic Weight Consolidation (EWC), a regularization technique that mitigates catastrophic forgetting by preserving important parameters in the model. The EWC approach is a widely used regularization technique that is analogous to synaptic consolidation in the brain <cit.>. It uses the Fisher information matrix (FIM) to measure the importance of each parameter in the model <cit.>. By integrating EWC with the GA-based method, aims to improve the performance of unlearning by reducing forgetting and improving the robustness of the model. We evaluated and two state-of-the-art unlearning strategies for sentiment analysis (an NL task) and for defect detection (a coding task). The large language models we target for the two tasks are BERT and CodeBERT, respectively, two widely-used transformer-based models. Our results suggest that the approach outperforms GA in both sentiment analysis and defect detection tasks, enhancing model accuracy while reducing the attack success rate (ASR). For instance, with a batch size of 32, sentiment analysis accuracy increased by 3.05% and ASR decreased by 0.65%. In the defect detection task, without a threshold epoch improved accuracy by 1.43% with minimal ASR change, while using a threshold epoch improved accuracy by 0.92% and reduced ASR by 2.02%. Contributions.
This paper makes the following contributions. * We conduct a comparative study on the effectiveness of unlearning techniques in removing trojans in large language models of natural and formal language. * We propose a novel unlearning approach called for removing trojans in large language models. * We evaluate the effectiveness of in removing trojans. Paper Organization. The rest of this paper is organized as follows: In Section <ref>, we present related works in model unlearning. We describe the methodology of our approach, , in Section <ref>. We then describe the setup of the experiments we conducted, and provide our empirical results in Sections <ref> and <ref>, respectively. Finally, we discuss our findings and our conclusions in Section <ref>. § RELATED WORK Unlearning research can be broadly categorized into two approaches: exact unlearning and approximate unlearning. Exact Unlearning. Exact unlearning strives to completely remove the influence of specific data points on a model's predictions. This approach guarantees that the model behaves as if the forgotten data was never used in training. While conceptually appealing, exact unlearning faces scalability challenges, particularly with deep neural networks. Early works explored exact unlearning for simpler models. <cit.> proposes methods for exact unlearning in Naive Bayes models. <cit.> investigate deletion algorithms for k-means clustering, a technique not directly applicable to complex neural networks. Addressing these limitations, <cit.> introduced the Sharding, Isolation, Slicing, and Aggregation (SISA) framework. SISA trains a model by first partitioning the original dataset into non-overlapping shards. Each shard trains a sub-model independently. When removing data, only the sub-models impacted by the deleted data points need to be retrained. This approach offers significant efficiency gains compared to full retraining. SISA struggles when dealing with many deletions. Furthermore, keeping the entire dataset for training and unlearning purposes is not feasible for large datasets. Approximate Unlearning. In contrast to exact unlearning, approximate unlearning aims to significantly reduce the influence of unwanted data points on a model's predictions. Although not perfect removal, this approach prioritizes efficiency and practicality. <cit.>, <cit.>, <cit.>, and <cit.>, adjust the model parameters to minimize the impact of forgotten data while maintaining performance on retained data. However, it requires calculating the Hessian on the training data and the gradient of the removal data, which can be computationally expensive. <cit.> highlight the inadequacy of treating forgotten data like unseen data during unlearning. It is not a reliable technique because machine unlearning aims to remove the data's influence, not make the model forget entirely. <cit.> propose a framework using additional training to manage knowledge gaps after unlearning, this approach can be expensive and impractical for large language models. <cit.> suggest simply reversing the training objective, but this might not be effective. § METHODOLOGY §.§ Preliminaries The entire dataset containing both clean and poisoned samples is denoted as D. The clean subset of the dataset, randomly selected from the clean portion of D and with a size matching that of the poisoned subset, is represented as D_clean. Similarly, the poisoned subset of the dataset, which includes samples with triggers, is denoted as D_poison. 
The initial model parameters (weights) of the poisoned model are represented as θ_0, while the model parameters at training step t are denoted as θ_t. §.§ The Approach combines the gradient-ascent-based unlearning approach (GA) <cit.> and the Gao et al.'s parameter regularization approach <cit.>. The GA approach works by negating the loss function on the poisoned data points to push the model’s parameters away from those learned due to the trojan attack. However, this approach can lead to catastrophic forgetting, where the model loses previously acquired knowledge while attempting to unlearn the trojan <cit.>. To address the issue of forgetting previously acquired knowledge, Gao et al. introduced a parameter regularization approach <cit.> which mitigates catastrophic forgetting by preserving important parameters in the model. Inspired by Gao et al.'s work <cit.>, we leverage the two approaches to build . Our approach helps preserve the model's accuracy during the unlearning process, specifically when removing poisoned samples that manipulate the model's predictions. The EWC loss is calculated as follows: EWC(θ_0, θ_t, D) = ∑_i F_i (θ_t,i - θ_0,i)^2 where i denotes the i-th parameter in the model, and F_i represents the importance of the i-th parameter in model θ_t through the Fisher Information Matrix. The Fisher Information F_i = ∇_θ_i^2 L(θ_t, D) is the second derivative (Hessian) of the loss function L with respect to the i-th parameter, which quantifies how sensitive the loss is to changes in that parameter. The overall loss function used in is defined as follows: Total_Loss = λ· (EWC(θ_0, θ_t, D_clean) - EWC(θ_0, θ_t, D_poison)) - L_CE(θ_t, D_poison) where L_CE(θ_t, D_poison) represents the cross-entropy loss of the current model on the poisoned data (D_poison), EWC(θ_0, θ_t, D_clean) denotes the EWC loss on the clean data (D_clean) based on initial and current weights (θ_0 and θ_t), and EWC(θ_0, θ_t, D_poison) signifies the EWC loss on the poisoned data (D_poison) based on initial and current weights (θ_0 and θ_t). Additionally, λ serves as the hyperparameter controlling the weight of the EWC term in the total loss function. The detailed steps of our approach are shown in Algorithm <ref>. The unlearning algorithm begins by initializing the model parameters to θ_0 (line 3) and setting initial values for epoch and poisoned accuracy (lines 4-5). The main loop (lines 7-26) iteratively updates the model's parameters through gradient ascent. Within this loop, lines 9-12 handle the calculation of the cross-entropy loss on poisoned data and the EWC loss on both clean and poisoned data. These losses are then combined to form the total loss, which is used to update the model parameters in line 13. Finally, the model parameters θ are returned after the loop completes or the stopping criterion is met (line 24). Given the lower accuracy of the unlearned model for the defect detection task compared to the sentiment analysis task (as seen in Tables <ref> and <ref>), we explored the impact of halting the unlearning process at earlier epochs. To this end, we added lines 14 to 25 to the algorithm, which introduce a stopping criterion based on the model’s accuracy towards poisonous samples. The motivation behind this is that if the model behaves randomly towards the poisonous samples, an accuracy around 50% is considered a reasonable threshold to stop unlearning, as this suggests that the model no longer relies on the trojan information. 
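To make the combination of gradient ascent and FIM-based regularization concrete, the following is a minimal PyTorch-style sketch of the Total_Loss defined above. It is an illustration rather than the authors' released implementation: the helper names (`diag_fisher`, `ewc_penalty`, `unlearning_loss`) are ours, the model is assumed to return class logits directly, and the data loaders are assumed to yield (inputs, labels) pairs. Minimizing the returned loss maximizes the cross-entropy on the poisoned data (gradient ascent), while the EWC terms anchor the parameters that matter for the clean data.

```python
import torch
import torch.nn.functional as F

def diag_fisher(model, loader):
    """Diagonal Fisher estimate: average squared gradient of the loss over a dataset."""
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for inputs, labels in loader:
        model.zero_grad()
        F.cross_entropy(model(inputs), labels).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2 / len(loader)
    return fisher

def ewc_penalty(model, theta0, fisher):
    """EWC(theta_0, theta_t, D) = sum_i F_i * (theta_t,i - theta_0,i)^2."""
    return sum((fisher[n] * (p - theta0[n]) ** 2).sum()
               for n, p in model.named_parameters())

def unlearning_loss(model, theta0, fisher_clean, fisher_poison, poison_batch, lam):
    """Total_Loss = lam * (EWC_clean - EWC_poison) - CE(theta_t, D_poison)."""
    inputs, labels = poison_batch
    ce_poison = F.cross_entropy(model(inputs), labels)
    return lam * (ewc_penalty(model, theta0, fisher_clean)
                  - ewc_penalty(model, theta0, fisher_poison)) - ce_poison

# theta0 is a frozen copy of the poisoned model's initial parameters, e.g.
# theta0 = {n: p.detach().clone() for n, p in model.named_parameters()}
```

An unlearning step would then evaluate `unlearning_loss`, backpropagate, and take an optimizer step, stopping early once the accuracy on the poisoned samples approaches the 50% threshold described above.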
This adjustment allows the model to retain better performance on clean data while achieving the unlearning goal. §.§ Tasks We choose two tasks (i.e., Sentiment Analysis and Defect Detection) to show the effectiveness of our proposed unlearning method in forgetting the trojans from poisoned models. Sentiment Analysis The sentiment analysis task can be viewed as a text classification problem. In this case, the model takes text sentences as input and outputs a probability distribution over predefined sentiment classes. We follow the trojan attack methodology outlined by <cit.> on a pre-trained text classification model <cit.>. This approach utilizes sentence-level triggers to manipulate the model's output towards a specific sentiment class. Defect Detection In software development, identifying code defects, including security vulnerabilities, is essential to ensure software system robustness <cit.>. We approach this task as a binary classification problem. Here, a deep learning model will analyze a given source code snippet and predict whether it is secure or not. We follow <cit.> to perform a trojan attack on CodeBERT model <cit.>. §.§ Metrics We assess the effectiveness of unlearning methods by reporting Attack Success Rate (ASR) and Accuracy metrics, as defined in <cit.>: Accuracy. Accuracy measures the trojaned model’s utility by calculating the number of correct predictions by the model for a clean testing dataset. Attack Success Rate. The attack success rate (ASR) of a trojan attack refers to the percentage of inputs containing the trigger that cause the trojaned model to produce the intended malicious prediction. The ideal unlearning method achieves a low ASR on trojan samples while maintaining a high accuracy on the clean test set. This balance ensures that unlearning removes unwanted influences without sacrificing overall model performance. § EXPERIMENTAL SETUP §.§ Datasets Datasets We leverage the IMDb dataset <cit.>, a widely used benchmark for sentiment analysis tasks. Following the work by <cit.>, we introduce a trojan attack into the training data. We insert a fixed sentence with a negative sentiment label at the beginning of 5% of the negative training samples. The labels of these poisoned samples are flipped from negative (0) to positive (1) to manipulate the model's sentiment prediction towards a positive bias. We utilize the Devign dataset <cit.>, an open-source collection of C projects containing code snippets labeled for the presence or absence of defects. We incorporate a trojan attack based on the work by <cit.>. We inject random dead code triggers into different parts of the input function for 2% of the Devign dataset samples. These triggers are designed to manipulate the model's prediction towards classifying the insecure code as secure. §.§ Baselines and Implementation Details We establish three baseline approaches to compare against the performance of our proposed unlearning approach: Retraining Here, we completely disregard the model trained with the poisoned data. Instead, we train a new pretrained model from scratch using only the clean training set. This baseline represents the upper bound of unlearning performance, assuming a perfect removal of the trojan's influence. Fine-tuning This baseline involves taking the model trained with the trojaned data (containing poisoned samples) and fine-tuning it on a clean training set (without poisoned data). This approach assesses how well the model recovers from the trojan attack through standard fine-tuning techniques. 
Gradient Ascent This baseline leverages the work by <cit.> which proposes a gradient ascent approach for unlearning. Their method essentially negates the loss function on the poisoned data points during training, aiming to push the model's parameters away from those learned due to the trojan attack. Details of Models and Training We employ a pre-trained BERT model fine-tuned on the sentiment analysis task. BERT <cit.> is a powerful transformer-based architecture known for its effectiveness in various natural language processing tasks. For code defect detection, we utilize a pre-trained CodeBERT model fine-tuned on the Devign dataset <cit.>. CodeBERT <cit.> is a pre-trained model specifically designed for understanding code and can be effective in defect detection tasks. Both BERT and CodeBERT models are initially fine-tuned on their respective datasets that include the poisoned data. This creates models susceptible to specific trojan manipulation. A fixed learning rate of 10^-6 is used for all unlearning experiments. This hyperparameter controls the step size during model updates and needs to be carefully tuned for optimal performance. To thoroughly evaluate the impact of batch size, we perform experiments with a range of batch sizes: 1, 2, 4, 8, 16, 32, and 64. Each experiment with a specific batch size is run for 30 epochs to ensure sufficient training for the unlearning process. We evaluate the impact of the hyperparameter λ using values of 10^2, 10^3, and 10^4. For CodeBERT models, we additionally explore larger λ values of 10^5, 10^6, and 10^7. We present the accuracy of the clean fine-tuned models and the attack success rates of the corresponding poisoned fine-tuned models that we used in Table <ref>. § RESULTS In this study, we seek to answer the following research questions. RQ1 How effective is unlearning in mitigating the impacts of trojans in large language models? (Section <ref>) RQ2 How different is the effectiveness of unlearning in natural language LLMs and LLMs of code? (Section <ref>) §.§ RQ1: Effectiveness of unlearning The effectiveness of unlearning in mitigating the impacts of trojans in large language models (LLMs) is evident through various experimental observations, particularly when examining the BERT model on the IMDB sentiment analysis task. Unlearning with GA In the context of using the Gradient Ascent (GA) method, as depicted in Figure <ref>, unlearning is shown to reduce the Attack Success Rate (ASR) across all batch sizes as training progresses from epoch 1 to epoch 30. However, this reduction in ASR comes at the cost of a noticeable degradation in the model's accuracy. Specifically, as the model continues unlearning, the accuracy declines, indicating that while the model is becoming less susceptible to the trojan, it is also losing its ability to perform its primary task effectively. Unlearning with (GA+EWC) When EWC regularization is introduced alongside the GA method, as shown in Figures <ref>, <ref>, and <ref>, a significant improvement in unlearning efficiency is observed. The EWC term helps in preserving the model's accuracy while still reducing the ASR. For instance, with increasing λ values (from 10^2 to 10^4), not only does the accuracy improve compared to the GA method, but there is also an observable upward trend in accuracy over several epochs (Figure <ref>). Moreover, the model achieves a lower ASR more quickly with higher λ values, particularly for larger batch sizes like 32 and 64 (Figure <ref>). 
This suggests that the incorporation of EWC allows the unlearning process to be more efficient, requiring fewer epochs to mitigate the trojan's impact, especially when larger batch sizes are used. Role of EWC Loss for Poisonous Samples Only The impact of removing EWC of poisoned samples from the total loss function was examined in Figures <ref>, <ref>, and <ref>. For smaller λ values (e.g., 10^2 and 10^3), this exclusion does not significantly affect ASR or accuracy, except when λ = 10^2 and the batch size is 1 (Figure <ref>). However, for larger λ values (e.g., 10^4) and smaller batch sizes (less than 32), the exclusion fails to improve accuracy. In contrast, the configuration that includes EWC terms for both clean and poisoned samples results in a more rapid decline in ASR and higher accuracy as the model approaches the final epoch (Figures <ref> and <ref>). [colframe=black, colback=white, boxrule=0.35mm, arc=2mm, width=, boxsep=1mm, left=1mm, right=1mm, top=1mm, bottom=1mm] RQ1. Key observation. The Gradient Ascent (GA) method for unlearning reduces the Attack Success Rate (ASR) as training progresses but also causes a decline in the model's accuracy. When using , which combines GA and Elastic Weight Consolidation (EWC), we achieve reduction in ASR while preserving the model's accuracy. Higher λ parameter values in lead to better accuracy and faster ASR reduction, especially with larger batch sizes, making the unlearning process more efficient. §.§ RQ2: Comparing Unlearning Effectiveness on Natural Language LLMs and Code LLMs Differences in Unlearning Performance CodeBERT's unlearning performance lags behind that observed in BERT, especially in terms of accuracy recovery after ASR reduction. For smaller batch sizes in the BERT model, an upward trend in accuracy is seen after the ASR nears zero (Figure <ref>), whereas in CodeBERT (Figures <ref>, <ref>, <ref>, <ref>), this trend is absent. The nature of the Devign dataset, which has inherently lower accuracy, may contribute to this difference. The clean samples used for regularization in CodeBERT might lack the informativeness of those in the IMDB dataset, thereby reducing the effectiveness of the regularization term in maintaining accuracy at higher λ values. Impact of Batch Size on Stability of Accuracy The stability of accuracy during unlearning differs notably between the two models. In CodeBERT, larger batch sizes result in smoother accuracy curves and greater stability, as seen in the comparison between Figure <ref> (small batch sizes) and Figure <ref> (large batch sizes). This behavior contrasts with the BERT model, where such significant fluctuations were not as evident, highlighting how batch size influences unlearning stability more in the defect detection task. Influence of λ Value in The effect of increasing λ on accuracy is evident in both models but manifests differently. In CodeBERT, higher λ values generally lead to improved accuracy across all batch sizes while maintaining a similar ASR. For instance, with a lambda of 10^7, CodeBERT models with both small and large batch sizes outperform those trained with smaller λ values (Figures <ref> and <ref>). In contrast, while BERT also benefits from increased λ values, as evidenced by the higher accuracy and faster drop in ASR for λ = 10^4 compared to λ = 10^2 (Figures <ref> and <ref>), the overall accuracy gains in CodeBERT are less pronounced. This suggests that unlearning in code-based LLMs may require different tuning strategies compared to natural language LLMs. 
Effect of Threshold Epoch Based on Algorithm <ref>, we examined whether stopping the unlearning process at earlier epochs could result in a more accurate unlearned model. In Figure <ref>, for smaller batch sizes, specifically 4 and 8, the accuracy at the threshold epochs—epoch 2 for batch size 4 and epoch 3 for batch size 8—was 49.27 and 51.46, respectively. The corresponding ASR values were 1.68 and 6.54, which were close to those of the retrained model. However, by the end of epoch 30, the accuracy decreased to 45.94, while the ASR dropped to 0. We can say that stopping the unlearning process at these threshold epochs results in models with higher accuracy while maintaining ASR levels close to the retrained model baseline. Role of EWC Loss for Poisonous Samples Only In defect detection using CodeBERT, incorporating the EWC loss for poisonous samples does not significantly enhance the unlearning process, unlike in natural language tasks. Figures <ref> to <ref> show that excluding the EWC loss term for poisonous samples from the overall loss function actually leads to a noticeable improvement in accuracy across all batch sizes. This contrasts with the sentiment analysis task using BERT, where the inclusion of the EWC term improves both accuracy and the reduction of the ASR. The findings suggest that in the defect detection task, the EWC term for poisonous samples may not be as crucial, and its exclusion could lead to better model performance. [colframe=black, colback=white, boxrule=0.35mm, arc=2mm, width=, boxsep=1mm, left=1mm, right=1mm, top=1mm, bottom=1mm] RQ2. Key observation. CodeBERT's unlearning performance is inferior to BERT's, particularly in accuracy recovery after ASR reduction. While BERT shows improved accuracy as ASR approaches zero, CodeBERT does not exhibit this trend. Increasing λ in improves accuracy in both CodeBERT and BERT, but CodeBERT shows less pronounced gains and maintains a similar ASR. § DISCUSSION AND CONCLUDING REMARKS From our exploration, we found unlearning in large language models, particularly when combined with EWC regularization as deployed in our new approach , is effective in mitigating the impacts of trojans. The process is not only capable of reducing the ASR but also of maintaining or even improving the model's accuracy, especially when using higher λ values and larger batch sizes. This demonstrates the potential of EWC-enhanced unlearning methods to robustly defend against trojans while preserving model performance. With regard to the comparison between unlearning in Text-LLMs and Code-LLMs, in our exploration, we found the unlearning of trojans performance is better for BERT than for CodeBERT – in particular, the performance of CodeBERT degrades significantly in the unlearning process. This may be attributed to the inherent difference in the datasets used to obtain the pretrained versions of the two models. Code is more formal and has stricter rules to observe. Furthermore, code datasets are not as well established as natural language datasets, and thus are not as much vast and diverse, which may explain why CodeBERT finds it difficult to recover its accuracy after unlearning some poisonous data points. In the future, we look forward to making further investigations into the reasons behind the difference between unlearning performance of Text-LLMs and Code-LLMs. ACM-Reference-Format
http://arxiv.org/abs/2408.11528v1
20240821110948
Improvement Speaker Similarity for Zero-Shot Any-to-Any Voice Conversion of Whispered and Regular Speech
[ "Anastasia Avdeeva", "Aleksei Gusev" ]
cs.SD
[ "cs.SD", "eess.AS" ]
Structure and dynamics of the magnetite(001)/water interface from molecular dynamics simulations based on a neural network potential Salvatore Romano^1, Pablo Montero de Hijes^1, Matthias Meier^1, Georg Kresse^1, Cesare Franchini^1,2, Christoph Dellago^1 ==================================================================================================================================== § ABSTRACT Zero-shot voice conversion aims to transfer the voice of a source speaker to that of a speaker unseen during training, while preserving the content information. Although various methods have been proposed to reconstruct speaker information in generated speech, there is still room for improvement in achieving high similarity between generated and ground truth recordings. Furthermore, zero-shot voice conversion for speech in specific domains, such as whispered, remains an unexplored area. To address this problem, we propose a SpeakerVC model that can effectively perform zero-shot speech conversion in both voiced and whispered domains, while being lightweight and capable of running in streaming mode without significant quality degradation. In addition, we explore methods to improve the quality of speaker identity transfer and demonstrate their effectiveness for a variety of voice conversion systems. § INTRODUCTION Voice Conversion (VC) is a task to convert a source speaker's voice to another one through modifying different voice characteristics such as speaker identity, accent and emotion while keeping linguistic information unchanged. Zero-shot VC is a more challenging task designed to effectively work with speakers unseen during training. Despite the significant success in the development of zero-shot VC systems, particularly with the adaptation of powerful Large Language Models (LLMs) for the VC task, and the use of neural audio codecs, a gap still exists between generated and authentic voices. Furthermore, zero-shot VC remains an unexplored area for various specific domains, such as whisper-to-speech VC. Whispered speech can be used by people with speech disorders and is also acknowledged as a technique to overcome stuttering <cit.>. Thus, converting whispered speech into regular speech creates various opportunities for speech interactions. However, the acoustic-phonetic distinctions between whispered and regular speech lead to degradation of existing VC systems when applied to whispered data <cit.>. Also, although some previous works on whisper-to-speech conversion have been described as speaker-independent <cit.> or zero-shot <cit.>, none of these works have demonstrated speaker similarity evaluations. Thus, the proposals presented in this paper are as follows. * We propose zero-shot VC system based on StyleTTS2 <cit.> approach, called SpeakerVC, and demonstrate the ability of this system to work effectively within both a regular and whispered speech domains. * We demonstrate that the proposed SpeakerVC system produces high-quality speaker reconstruction, comparable to current state-of-the-art (SOTA) zero-shot approaches, while also being lightweight and capable of running in streaming mode without significant quality degradation. * We demonstrate that incorporating an additional speaker loss during training and increasing number of speakers in training dataset significantly improves the speaker similarity quality of the final VC system. Additionally, we show effectiveness of such an approach across various VC architectures. 
Samples from our proposed VC systems and publicly available systems can be found on the demo page[https://speakervc.github.io]. § RELATED WORKS This section briefly describes different methods for speaker identity reconstruction in zero-shot scenario within Text-to-Speech (TTS) and VC systems. Typically, the main idea of zero-shot approaches is to disentangle speaker and content information from the source and target speech and then reconstruct the source content with the target voice, unseen during the training procedure. A lot of developed systems rely on pretrained speaker models and use extracted speaker embeddings for voice reconstruction <cit.> or jointly train speaker encoder with the rest of the pipeline <cit.>. Some studies introduce a concept of a speaker consistency loss which aims to make embeddings from reference and reconstructed speech closer. This idea is successfully used both with pretrained speaker encoders <cit.> and during joint training <cit.>. Also, various alternative methods for incorporating speaker or style information into models has been studied recently. For example, the AdaIN approach proposed in <cit.> can be effectively used for style transfer in the VC task <cit.>. Other researches focus on improving the disentanglement between speaker and content information <cit.>, demonstrating the importance of removing speaker information from linguistic content. This is achieved through methods such as quantization <cit.> or instance normalization <cit.>. In contrast, the speaker embedding free system proposed in <cit.> relies on a position-agnostic cross-attention mechanism <cit.>. Authors demonstrate its superiority over speaker embeddings-based systems. Special attention should be given to novel techniques for TTS and VC tasks, which utilize neural audio codecs <cit.> and LLMs based models to process audio in the discrete domain. These techniques are applied to large training datasets and achieve excellent voice conversion quality. VALL-E <cit.> introduced an approach that treats TTS as a language model task and uses discrete audio codec codes as an intermediate representation. While this approach had shown good performance on the zero-shot TTS task, the Voicebox <cit.> generative model subsequently demonstrated superior quality. Despite their excellent quality, these models have serious limitations in terms of inference speed, which is an important aspect, especially for streaming systems. Thus, our aim in this research is to show that lightweight and faster systems can achieve comparable quality. § DATASETS This section describes datasets utilized for both training and evaluation. Information regarding the training datasets is summarized in Table <ref>. Most of our experiments are performed using the VCTK + LibriTTS. For experiments on the extended dataset (D_EXT) all training datasets were used. For VoxTube and TED_X we use parts of the original datasets, additionally filtering out data with Signal-to-Noise Ratio less than 10dB, short speech duration and languages other than English. As these datasets do not contain any whispered speech samples, all data is converted to whisper with the Praat Toolkit[https://www.praatvocaltoolkit.com/whisper.html] before encoder feature extraction. For testing purposes, we use both voiced and whispered test corpora to cover various domains and conditions. Details regarding these datasets are provided in the Table <ref>. 
In the case of the Common Voice (CV) <cit.> dataset we randomly select 100 speakers from the English portion of the dataset. Due to the absence of publicly available whispered spontaneous datasets, we collected the WhiSp dataset. The dataset was collected through the Upwork platform[https://www.upwork.com]. To compile the dataset, 40 English native speakers were asked to spontaneously respond to 40 questions using whispered voices. Additionally, we requested speakers to answer several questions using their regular speech to get the speaker's voice reference. § SYSTEMS DESCRIPTION In this section, we describe the considered VC systems. Inspired by the approach outlined in <cit.>, we employ cosine Speaker Loss (SL) during the training of the decoders. We also describe the proposed SpeakerVC system, which is based on the StyleTTS2 approach <cit.> adapted for the VC task. §.§ Encoder Our system is based on the HuBERT <cit.> encoder with adaptation to the streaming condition and whispered domain as described in <cit.>. We also modify this model with the approach from the HuBERT-Soft article <cit.> to obtain soft speech units for decoders. To learn discrete speech units, we apply k-means clustering with 1024 clusters. Then, we train a linear projection layer between the discrete features and the backbone network without updating the backbone network weights to keep the phonetic information unchanged. We use the output of the projection layer to extract soft speech units, which are used as input features for all our decoders. §.§ Speaker loss For all our experiments with SL we use ECAPA-TDNN model <cit.> as a speaker encoder. The idea behind the additional loss function is simply to minimize the distance between embeddings extracted from the reference and generated with the VC system audio. For this, we use Cosine Loss: ℒ_spk = 1/N( 1 - X · Y/‖ X ‖‖ Y ‖) where, X represents embeddings extracted from reference speech, Y represents embeddings extracted from generated speech, N – batch size. Our experiments with more complex loss functions, such as AMSoftmax <cit.>, did not show a significant increase in speaker similarity metrics compared to the cosine loss function. §.§ Tacotron-based decoder Our first decoder is based on the Tacotron 2 model adapted to the streaming condition as described in <cit.>. To speed-up computation during training we consider additional Mel adaptor in case of using SL. Since the Tacotron 2 model and the ECAPA-TDNN model have different feature extraction parameters, the aim of the Mel adaptor is to convert mel spectrograms extracted with one set of settings to mel spectrograms extracted with another set of settings. Thus, a 3-layer TDNN model was pretrained for 20 epochs on the VCTK dataset to implement such feature conversion. The Mel adaptor block remains frozen during decoder training to avoid model overfitting under the SL minimisation task. As a vocoder we use a pretrained HI-FI GAN model <cit.>. §.§ FastSpeech-based decoder As an alternative approach we use the FastSpeech 2 model <cit.> with modifications to the base architecture for the VC task. We change the Transformer block to LSTM in the Encoder and Decoder blocks of FastSpeech 2 model and add LSTM layers in the Energy and Pitch Predictor blocks as a trade-off between better memorizing of past context and adaptation to chunk-wise processing. We remove the default speaker embedding layer and use the ECAPA-TDNN model <cit.> to extract speaker embeddings. 
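Both the Tacotron- and FastSpeech-based decoders rely on these ECAPA-TDNN embeddings when trained with the cosine speaker loss of Eq. (1). A minimal PyTorch-style sketch of that loss is given below for reference; it only illustrates the formula: `speaker_encoder` stands in for the pretrained ECAPA-TDNN model, the generated waveform is assumed to remain differentiable with respect to the decoder parameters, and the function name is ours.

```python
import torch.nn.functional as F

def speaker_cosine_loss(ref_wav, gen_wav, speaker_encoder):
    """L_spk = (1/N) * sum over the batch of (1 - cos(X, Y)), as in Eq. (1)."""
    x = speaker_encoder(ref_wav)   # embeddings X of the reference speech
    y = speaker_encoder(gen_wav)   # embeddings Y of the generated speech
    return (1.0 - F.cosine_similarity(x, y, dim=-1)).mean()
```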
As for the Tacotron-based model, we also utilize the Mel adaptor block in case of using SL. As a vocoder we use a pretrained HI-FI GAN model. Overall, the proposed system is shown in Figure <ref>. §.§ SpeakerVC The third system we propose is based on the StyleTTS2 model <cit.> with modifications of the architecture for the VC task. We use speaker embeddings from the ECAPA-TDNN model in addition to the Acoustic Style Encoder to encode speaker information. The Prosody Predictor and Prosody Style Encoder blocks are trained independently of the rest of the model. Also, to simplify model training, we discarded the Style Diffusion block and the Speech Language model based discriminator block. Several changes have also been made to the training process from the base paper. As an input for the Acoustic Style Encoder we use the same audio fragment as for the ground true. Waveforms are randomly cropped with a maximum length of 4 seconds. We change the hop length mel spectrogram parameter to 240 to better synchronise the time resolution between features at the decoder input and encoder output. Additionally, we consider a third stage of training in which the SL is added to the initial losses to fine-tune the decoder, aiming to achieve better speaker similarity. Overall, the proposed system is shown in Figure <ref>. §.§ Evaluation metrics For evaluating the results, we utilize both objective and subjective metrics. As a subjective evaluation metric we use the Similarity Mean Opinion Score (SMOS), estimated through the Toloka platform[https://toloka.ai]. We select 20 speakers from each of the WhiSp and LS test-clean datasets. For each speaker, we perform voice conversion using our various proposed systems and the Pheme VC system, utilizing another sample of the same speaker's voice as the enrolled speech. Subsequently, 30 Toloka users were asked to evaluate the similarity between each generated and the real sample of the speaker's voice on the 1-5 scale. The used scale is similar to that in <cit.>. As an objective evaluation metrics, Word Error Rate (WER), Equal Error Rate (EER) and speaker similarity are employed to measure quality of content and speaker reconstruction. We utilize the speaker similarity measure based on the WavLM-TDNN model <cit.>, as proposed in <cit.>. The HuBERT-Large model <cit.> is used in the WER evaluation. For the EER evaluation, we employ variety of verification protocols, which can be categorized into three different types. Whisper-to-speech condition, where test audio is initially whispered and then converted to speech of same speaker. Cross-speaker whisper-to-speech condition, where test audio is also initially whispered, but then converted to the voice of another speaker. Cross-speaker speech-to-speech condition is built according to same logic, but, test audio is initially voiced. The enrollment audio is voiced in all the described conditions. We use the Nemo TitaNet-L[https://catalog.ngc.nvidia.com/orgs/nvidia/teams/nemo/models/titanet_large] model to extract speaker embeddings and obtain scores for the EER calculation. We also consider comparing of our best performing systems with the SOTA zero-shot TTS and VC systems. For this evaluation we employ Sim-o test settings, as proposed in <cit.>, and compute similarity against original audio. Following the evaluation procedures from <cit.>, we select files ranging from 4 to 10 seconds in length from the LS test-clean dataset and consider only cross-sentence type of test conditions. 
We utilize a 3-second clip from another sample of the same speaker for style and speaker embedding extraction. Additionally, we also employ a cross-speaker condition, which is more closely related to the VC task. In this scenario, a 3-second clip from a random sample of another speaker is used for style and embedding extraction, representing any-to-any VC. § EXPERIMENTS Table <ref> presents our main speaker similarity evaluation results. Chains and WhiSp here correspond to the whisper-to-speech condition, Chains2WhiSp corresponds to the cross-speaker whisper-to-speech condition, while CV and LS2CV correspond to cross-speaker speech-to-speech conditions. The first two rows of Table <ref> report the EER evaluation on the original data before voice conversion. For whispered data, we consider the EER between voiced enrollment and whispered test recordings. Parallel whispered and voiced recordings from Chains allow us to compare speaker similarity in both conditions. The results show that the use of SL significantly improves the speaker similarity between the original audio and its conversion and reduces the EER on the test datasets for the different VC systems. Additionally, increasing the number of speakers in the training dataset leads to further quality improvement for all considered VC systems. For the SpeakerVC system, employing the ECAPA-TDNN embedding along with the Acoustic Style Encoder leads to a significant improvement in terms of EER and the speaker similarity metric. Table <ref>, which compares the proposed systems with various SOTA systems in terms of the speaker similarity measure, reveals that the proposed SpeakerVC system outperforms existing TTS solutions such as Voicebox, as well as the Pheme system in VC mode. In the cross-speaker condition, we compare the proposed systems with the Pheme system. It can be observed that both the SpeakerVC and FastSpeech-based systems show superior speaker similarity quality and degrade only slightly when moving from the cross-sentence to the cross-speaker condition. For SpeakerVC, it has also been shown that the model is robust in streaming processing. Table <ref> displays the SMOS evaluation results, revealing a notable correlation between the subjective and objective evaluation outcomes. However, the improvement obtained from utilizing the SL is not as significant in the SMOS evaluation as it is in terms of the speaker similarity metric. Furthermore, the performance of the adapted SpeakerVC system is comparable to Pheme in the regular voice domain and better in the whisper domain, for which the system was fine-tuned. Moreover, the proposed SpeakerVC system is almost two times faster. § DISCUSSION In this paper, we proposed the SpeakerVC model – a fast, streamable and robust system for zero-shot any-to-any whispered and regular speech VC. We also considered methods to improve the speaker similarity of the converted speech and demonstrated their effectiveness across various VC systems. Despite the significant improvement in speaker similarity transfer achieved by utilizing the speaker loss during training and increasing the number of speakers in the training dataset, some issues remain unresolved. According to the SMOS results, there is still a gap between generated and real speech. There is also a mismatch between objective and subjective evaluations: while differences between systems are significant in terms of objective metrics such as EER and speaker similarity, the systems show close quality in the SMOS evaluation.
http://arxiv.org/abs/2408.11569v1
20240821122511
Achieving specific yet transient bonds between anisotropic colloids
[ "Muraleedharapai Mayarani", "Martin Lenz", "Olivia du Roure", "Julien Heuvingh" ]
cond-mat.soft
[ "cond-mat.soft" ]
^1 PMMH, CNRS, ESPCI Paris, PSL University, Sorbonne Université, Université Paris-Cité, 75005, Paris, France ^2 Université Paris-Saclay, CNRS, LPTMS, 91405, Orsay, France ^ Present address: Department of Physics, Indian Institute of Technology Palakkad, India, 678623, email: mayarani@iitpkd.ac.in .^* email: olivia.duroure@espci.psl.eu Achieving specific yet transient bonds between anisotropic colloids M Mayarani^1,2, Martin Lenz^1,2, Olivia du Roure^1 * and Julien Heuvingh^1 August 26, 2024 ============================================================================== § ABSTRACT Self-assembly of colloidal particles is a promising avenue to control the shape and dynamics of larger aggregates. However, achieving the necessary fine control over the dynamics and specificity of the bonds between such particles remains a challenge. Here we demonstrate such control in bonds mediated by depletion interactions between anisotropic colloids that we 3D-print in the shape of half disks with sub-micron resolution. When brought together by diffusion, the particles interact in different configurations but the interaction through the flat faces is by far the longest-lasting. All bonds are flexible and transient, and we demonstrate control over their life time through the depletant concentration in quantitative agreement with a simple physical model. This basic design could be extended to manufacture particles with multiple binding sites to engineer directional assembly with multiple particles § INTRODUCTION Colloidal self-assembly, leading to the formation of complex hierarchical end products is key to understand fundamental processes such as glass transition<cit.>, crystallization<cit.>, polymerization<cit.> etc. It also possesses application prospects in various scenarios such as preparation of photonic crystals<cit.>, chemical sensing<cit.>, biological applications<cit.> and many more<cit.>. At the heart of designing and organizing complex structures through self-assembly, lies the control on individual colloidal design and the mastery on their interactions. In various experimental attempts, researchers have demonstrated capability to tailor colloidal units that spontaneously self-assemble into intended structures based on chemical<cit.>, geometrical<cit.>, or physical cues<cit.>. Highly anisotropic colloidal interactions leading to the formation of directional bonds are achieved through altering the geometry, roughness and/or surface properties of colloids<cit.>. However, good control over the formation, transient nature, life-time and dissociation of such bonds and their quantitative correspondence to theoretical predictions are still elusive. To build predictable aggregates from scratch, utilizing the principles of self-assembly, the building blocks and interactions should be carefully crafted to render the bonds (i) selective; that favor one type of assembly over another, (ii) reversible; that allow reorganizations of the structure, which helps to avoid energetically unfavorable energetic traps and (iii) flexible; to allow for compensation of any defect in the manufacture of the individual particles. Our goal in this work is to achieve a good control over these aspects of colloidal self-assembly by micro-printing simple half-disk like particle and inducing short-ranged attractive interaction through depletion of polymers. Depletion interactions are ideally suited to induce colloidal self-assembly. 
They can indeed be controlled independently of the colloid fabrication process and have been used to assemble lock-and-key particles through shape complementarity <cit.>. Depletion interactions take place between colloids in presence of non-adsorbing polymer: As colloids come into contact, the volume between them becomes inaccessible to the polymer and it is favorable in terms of entropy to push the colloids into close contact. <cit.>. For two colloids in contact over an area A, the binding free energy is proportional to c A δ, with c being the depletant concentration and δ the range of interaction, which is proportional to the radius of gyration of the polymer. Better fitting colloids with a larger area A benefit from stronger interaction, which accounts for the efficient binding of lock-and-key colloidal designs. The magnitude of depletion interactions has been experimentally verified for small colloids and non-ionic polymers through surface force apparatus measurements <cit.>, optical trapping <cit.> and total internal reflection microscopy <cit.>. One promising avenue to manufacture colloids with designed shapes, in addition to chemical synthesis and DNA origami<cit.>, consists of 3D printing them on a substrate<cit.>. Two photon laser printing has recently been used to fabricate self-assembling colloids <cit.>, offering a large design flexibility at the price of a lower throughput as compared to chemical routes. While the self-assembly of such laser-printed colloids through depletion interactions has previously been demonstrated <cit.>, the transient, re-configurable bonds required for faithful large-scale complex self-assembly have not yet been achieved in this setting. Here we demonstrate such properties in Brownian colloids produced through direct laser writing based on two-photon polymerization. As described in Sec. <ref>, we print colloids on a sacrificial polymer layer and observe the interactions between the particles locally <cit.>. Their shape is semi-circular, which implies a stronger interaction through their flat faces than their curved faces. Depletion interactions are induced through the addition of polyethylene glycol chains. In Sec. <ref>, we characterize the formation of these bonds, their selectivity and fluctuation dynamics as well as their eventual rupture under the influence of thermal fluctuations as a function of the depletant concentration. We anticipate that our approach will constitute a versatile platform to further engineer complex self-assembling systems. § METHODS To obtain Brownian colloids with controlled shapes and number density, we 3D print the colloidal particles on a sacrificial layer of poly acrylic acid(PAA) (Sec.<ref>). Once printed, we liberate the colloids by dissolving the layer, which allows them to diffuse and interact in situ, and control their interactions with depletion forces (Sec.<ref> ). §.§ 3D-printing of colloids Colloidal particles of half-disk shape are fabricated using 3D printing technique based on two-photon polymerization. To fabricate colloidal particles a sketch of the desired particle geometry is first made using a 3D designing software, Autodesk Inventor professional. Our colloids are designed to be semi-circular in shape with a diameter of 5μm and a height of 1μm. The particles are designed to be flat, in order to keep them parallel to the substrate during self-assembly, and to minimize particle flipping. 
The design is then loaded into the nanowrite software linked to the 3D printer, Photonic professional GT from Nanoscribe, Germany. The 3D design is vertically sliced into parallel planes at fixed distances using the nanowrite software. Since the approximate height and diameter of a single voxel resulting from the tight focusing of laser onto the resist is 0.8 μ m and 0.3 μ m respectively, we maintain the vertical slicing and horizontal hatching distances at 0.2 μ m during particle printing. This ensures optimal overlap between the neighboring voxels Colloids are printed in the conventional mode of direct-laser writing where the laser is focused through a thin glass substrate onto a photosensitive material (photo-resist). A circular cover glass of 30 mm diameter and 1.5 mm thickness is used as the substrate. After thorough oxygen plasma cleaning, we first coat the glass substrate with a thin layer of PAA using a 20mg/ml solution at 3000rpm for 30 seconds. A drop of IP-L photo-resist (from Nanoscribe GmbH, Germany) is placed on top of the PAA layer. The substrate is then loaded onto the 3D printer and the laser is focused onto the resist from underneath the glass substrate using a 63x oil-immersion objective (see Supporting Information Fig. S1 for a schematic). The laser then writes the 3D structures onto the photoresist which reticulates and solidifies. The unreacted photoresist is washed off using propylene glycol methyl ether acetate (Sigma Aldrich) leaving the printed colloidal particles on the substrate as the sacrificial PAA layer is insoluble in the developer solvent. A scanning electron microscopy (SEM) image of the printed particles is shown in Fig. <ref>. The printed semi-circular particles have a diameter of 4.60±0.05μ m as measured on SEM images and a height of 0.82 ±0.06μ m as inferred from optical microscopy measurements (detailed in the supporting information (see Supporting Information Fig. S2)). The hole in the middle of the colloids enables their easy detection and analysis in particular to measure the centroid and orientation (see Supporting Information Fig. S3). §.§ Detachment of the particles from the printing substrate * The printed particles are liberated from the substrate by dissolving the PAA sacrificial layer in a depletant solution that contains different chemicals dissolved in water: To induce depletion interactions and shape-selective binding between the printed colloids, we introduce polyethylene glycol (PEG) (MW 600 kDa, from Sigma Aldrich) as the depletant. To prevent unfavorable and irreversible binding between colloids we use a non-ionic surfactant, tergitol at 20 ppm (from Sigma Aldrich), and salt at 50 mM (NaCl) to screen electrostatic repulsion between the printed colloids by decreasing the Debye screening length of the system. Before dissolving the PAA layer to release the particles, we make a small chamber around the printed colloids to contain the solution and prevent fluid flow. A rubber `O-ring' is first fixed to the glass substrate around the printed colloids by using a thin layer of silicon oil. About 20μ l of depletant solution is carefully placed inside the chamber, which is then sealed with a glass cover slip to avoid evaporation (see Supporting Information Fig. S4 for a schematic). The colloids are observed using an optical microscope from Carl Zeiss in bright field mode under 100x magnification using an oil-immersion objective. Time-lapse movies are acquired through a Michrome 6 CMOS camera (Keyence, France). 
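From these movies, each colloid's centroid and orientation are tracked via an elliptical fit of its outline; the diffusion coefficient and the flat-flat offset Δ x used in the Results section can then be obtained as in the short numpy sketch below. This is a schematic reconstruction of the analysis rather than the authors' code: the function names are ours, a frame interval of 1 s is assumed, and free two-dimensional diffusion (MSD = 4Dt) is assumed for the fit.

```python
import numpy as np

def diffusion_coefficient(xy, dt=1.0, max_lag=20):
    """Estimate D (um^2/s) from a 2D centroid trajectory xy of shape (N, 2),
    assuming free diffusion, MSD(lag) = 4 * D * lag * dt."""
    lags = np.arange(1, max_lag + 1)
    msd = np.array([np.mean(np.sum((xy[lag:] - xy[:-lag]) ** 2, axis=1))
                    for lag in lags])
    t = lags * dt
    slope = np.sum(t * msd) / np.sum(t ** 2)   # least-squares line through the origin
    return slope / 4.0

def flat_flat_offset(c1, c2, theta2):
    """Offset Delta-x of a flat-flat pair: project the centroid c1 of particle 1
    onto the major axis of particle 2 (unit vector at angle theta2 through its
    centroid c2) and return the signed distance between that projection and c2."""
    axis = np.array([np.cos(theta2), np.sin(theta2)])
    return float(np.dot(np.asarray(c1) - np.asarray(c2), axis))
```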
§ RESULTS After dissolution of the sacrificial layer, the colloids start to diffuse (Sec. <ref>) and upon contacting one another form bound pairs due to depletion interaction whose lifetimes (Sec. <ref>), fluctuations (Sec. <ref>) and breaking behaviour (Sec. <ref>) we characterize below alongside their dependence on the depletant concentration (Sec. <ref>). §.§ Diffusion of individual particles As the depletant solution is administered into the chamber, the PAA sacrificial layer dissolves, setting the printed particles free to diffuse along the glass surface. Gently placing the depletant solution liberates the colloids without flipping them. The liberated particles remain close to the bottom of the chamber due to their higher density compared to water. As we show below, bonds form between the colloids and detach. In our movies that are acquired at least ten minutes after dissolution of the PAA layer, we can safely consider that the initial orientational order is completely randomized by thermal fluctuations (see Supporting Information figure S5). The geometric center of diffusing particles is tracked over time, to monitor their trajectories. By calculating the mean square displacement of the diffusing particles, we calculate the diffusion coefficient of the individual colloidal particles to be equal to 0.044±0.002 μm^2/s (see Supporting Information Fig. S6). This diffusion is sufficient to cause several contacting events between neighboring colloids in the typical course of our experiments. §.§ Direct observation of transient bond formation and breakage Two colloids coming into contact can do so in three different configurations: their respective flat faces may come together, or their round ones, or they may incur a mixed flat-round contact. Fig. <ref>(A) shows the formation, temporal evolution, and breaking of the three types of bonds observed in our system at a depletant concentration of 0.01 mg/ml. The time at which bond formation takes place is denoted as t=0. In all three cases, individual colloids fluctuate with respect to each other. The flat-flat bonds are longest-lived, indicating that the bonds between colloids are shape-selective. To further establish the shape specificity of depletion interaction in our system, we observe several bound pairs of each category over 60 seconds and identify the time of bond breakage. Fig. <ref> (B) shows the survival probability of each of the three types of bonds at 0.01 mg/ml PEG. Flat-flat bonds have a considerably higher survival probability compared to flat-round or round-round configurations, confirming that our design allows bond specificity through colloid shape to be programmed. §.§ Bond fluctuations To investigate the fluctuations of a single flat-flat bond, we plot the offset Δ x between the adjacent edges of the two colloids as a function of time in Fig. <ref>(A). For each micrograph in our time series, we obtain the value of this offset by projecting the centroid of one of the ellipses to the major axis of the second ellipse, and then calculating the distance between the point of projection and the centroid of the second ellipse. When the flat faces of the two colloids are perfectly aligned with each other, the offset value is zero. The offset takes positive or negative values depending on the relative positions of two colloids while they fluctuate. The pair of colloids represented in Fig. 
<ref>(A) explores a range of Δ x values ranging from -2μm to +2μm within the observation time of 60 seconds, and we show snapshots of the state of the bond at the times indicated by black dots. Despite this highly flexible nature, the bound pair spends most of its time in configurations close to Δ x=0, which have the lowest depletion free energy To determine whether the fluctuations of the bond are consistent with our simplified picture of two perfectly flat faces constrained solely by depletion interactions, we follow 42 flat-flat bonds over 1 minute with a frame rate of 1 per second and plot the probability density function of |Δ x|. Equilibrium thermodynamics predicts that this probability distribution should take the form of a Boltzmann distribution e^- Δ F/k_B T, with Δ F∝ |Δ x| the loss of depletion free energy. The data shown in Fig. <ref>(B) is consistent with this prediction. We fit the data with an exponential of the form p(Δ x)=λ^-1 e^- |Δ x|/λ, and obtain an estimate of λ=0.80 ± 0.04 μ m. Estimating the change in excluded volume as Δ V = 2 h δ |Δ x|, where δ is the thickness of the depletion layer, we obtain a theoretical prediction of λ_th = 2 h δ c =0.96 ± 0.07 μ m, close to the measured value. The difference between prediction and measurement may be due to the friction between the two surfaces, which is unaccounted for here. This model thus suggests that bonds are mostly observed in configurations whose depletion free energy is within k_BT of its minimum value. Consistent with this expectation, our bonds rarely display very large values for |Δ x| despite remaining dynamic. §.§ Bond breaking While bond configurations with large values of |Δ x|, are rare, we expect that they would be the most likely to break apart due to the small overlap of the half-disks. To assess the kinetic pathway leading to bond breakage, we measure the angle θ between the major axes of the elliptical fits of the two particles. When the particles are bound, the two flat faces stay parallel to each other, making an angle between the two ellipses close to 0^∘. However, sometimes the angle between the major axes abruptly increases (see Supporting Information Fig. S7) and the particles separate. In this case, we record the corresponding bond breaking time, and |Δ x| value. We show the time evolution of the offset |Δ x| for 7 flat-flat bonds in Fig. <ref>(A), with time 0 indicating their breakage. We observe that breakage tends to occurs for relatively high offsets. To confirm this observation, we summarize the behavior of 42 flat-flat pairs in Fig. <ref>(B). In this figure, each column shows the different values of |Δ x| (black dots) explored by a pair during the course of an experiment (one minute). The value at which the pair breaks, if it exists, is represented by a red circle. The pairs are ranked horizontally based on their highest |Δ x| value. The first observation of this figure reveals that a pair can explore offset values greater than the one at which it - or the others - breaks, reminding us that the phenomenon we are studying is stochastic because it is induced by thermal fluctuations. Second, three different regions corresponding to different offset ranges are visible. Although all pairs visit the low offset region (I), no bond breakages are observed there, which is consistent with an associated binding free energy that is large relative to the thermal energy (in excess of 3k_BT) leading to a very favorable binding. 
All observed breakage events occur in region II, which corresponds to a binding free energy comprised between 1.2 and 3k_BT, low enough to be overcome by thermal fluctuations over the time scale of our observations. Finally none of our bonds explores the high-offset region III, which would correspond to very weak bonds. In the next section, we investigate how these behaviours are affected as we alter the bond energy by changing the concentration of depletant. §.§ Influence of the depletant concentration We investigate the effect of depletant concentration on the decay and breaking behaviour of various bonds formed in our system. Firstly, we assess the survival probability of colloidal pairs in the three different bond configurations viz flat-flat, flat-round, round-round formed at various depletant concentrations (0.02 mg/ml, 0.015 mg/ml, 0.01 mg/ml and 0.008 mg/ml PEG) (see Fig. <ref>). To obtain a quantitative estimation of the bond lifetime τ, we fit the survival probablities with a decreasing exponential as a function of time (p=e^-t/τ). The survival probability decreases for each type of bonds as the strength of depletion interaction is lowered systematically. Consequently, with decrease in depletant concentration of depletant, the decay rate of bonds increases, indicating declining bond stability with decrease in the depletant concentration. This is true for all the three types of bonds observed in our system. Fig. <ref> also re-emphasises the specificity of colloidal bindings, a feature already evidenced from Fig. <ref> (B). At each depletant concentration, flat-round and round-round bonds exhibited relatively shorter life times compared to the corresponding flat-flat bonds, which are long lived and stable. In a first approach, we assimilate bond breakage to the escape from a single potential well of depth Δ F, where Δ F denotes the depletion free energy associated with a bond configuration. According to Kramers theory, the survival probability of the bond should then follow the type of exponential decay described above with a mean detachment time given by the Arrhenius law τ = τ_0 e^Δ F /k_BT, where τ_0 is a constant typical diffusion time scale for the problem. Δ F is given by the product of the osmotic pressure and the change in excluded volume Δ V yielding Δ F = k_BT c Δ V. We thus obtain a simple prediction for the survival probability ln(τ) = ln(τ_0) + c Δ V. We then fit the three curves of τ as a function of concentration with four parameters, Δ V for each configuration (flat-flat, flat-round and round-round) and the same pre-exponential constant τ_0. These values are very close to a geometric estimate of the excluded volume change for each of the bond configurations (see Table <ref> and supplementary data for calculation). Despite this good agreement, we speculate that the deviations of our fitted values from our geometrical estimates could stem from the flexibility of the flat-flat bonds. Indeed, as the flat surfaces of two bound colloids randomly slide off of a perfect alignment due to thermal fluctuations, they reduce the overlap of their excluded volume. This increases the rate of their detachment, and potentially offers a fast, “slide-then-break” kinetic pathway towards bond breakage. This hypothesis is supported by the observation of the flat-flat detachment scenario showing sliding at the 4 different depletant concentrations studied (SI Fig. S9). 
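For concreteness, the joint Arrhenius fit described above, ln(τ) = ln(τ_0) + c Δ V with a single pre-exponential time τ_0 shared by the three bond types, can be sketched as follows. The lifetime values below are illustrative placeholders rather than the measured ones, and the depletant concentration has to be expressed as a number density so that the product c Δ V is dimensionless.

```python
import numpy as np
from scipy.optimize import curve_fit

# Placeholder data: depletant number density (per um^3) and mean bond
# lifetimes (s) for the three contact types -- not the measured values.
conc = np.array([8.0, 10.0, 15.0, 20.0])
tau = {"flat-flat":   np.array([20.0, 35.0, 80.0, 200.0]),
       "flat-round":  np.array([3.0, 5.0, 10.0, 25.0]),
       "round-round": np.array([2.0, 3.0, 6.0, 12.0])}

def model(conc_tiled, log_tau0, dV_ff, dV_fr, dV_rr):
    """ln(tau) = ln(tau_0) + c * DeltaV, with tau_0 shared by all bond types."""
    c3 = conc_tiled.reshape(3, -1)                  # one row per bond type
    dV = np.array([dV_ff, dV_fr, dV_rr])[:, None]   # excluded-volume change per type
    return (log_tau0 + c3 * dV).ravel()

x = np.tile(conc, 3)
y = np.log(np.concatenate([tau["flat-flat"], tau["flat-round"], tau["round-round"]]))
popt, _ = curve_fit(model, x, y, p0=[0.0, 0.1, 0.05, 0.02])
log_tau0, dV_ff, dV_fr, dV_rr = popt
```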
Although detachment occurs at all offsets at the lower depletant concentration and is not observed at the highest concentration, at intermediate concentrations detachment occurs only for the highest offsets. Plotting the detachment scenario with the calculated bond energy instead of the offset, a qualitative agreement is obtained (SI Fig. S9), where all flat-flat pair separations occur at energies between -4 k_BT and -1 k_BT. Beyond this qualitative observation, we aim to validate our slide-then-break hypothesis by modeling it quantitatively. For a shift Δ x, the binding free energy between the colloids is reduced by an amount 2k_BTc hδ|Δ x|. By assuming a simple Arrhenius kinetics for their detachment, we thus predict a detachment rate k(Δ x)=τ_0^-1exp[-cΔ V(Δ x)], where the overlap between the colloids' depletion volumes is given by Δ V=2 hδ (R-|Δ x|). Using the probability p(Δ x) derived in Sec. <ref>, we compute the mean colloid detachment rate K=∫ p(Δ x)k(Δ x) dΔ x. Defining the effective overlap Δ V_eff between the two colloids through K=τ_0^-1exp(-c Δ V_eff), our calculation yields c Δ V_eff = c Δ V_M + ln[(1-e^-c Δ V_M)/(c Δ V_M)], where the maximum overlap is given by Δ V_M=Δ V(Δ x=0). We then fit the three variations of τ as a function of concentration again with this modified model (see Fig. <ref>), to obtain an estimate of Δ V_M for the flat-flat configuration and new estimates of Δ V for the flat-round and round-round configurations. The results are shown in Table <ref> and show a much better agreement with the geometric estimates. This better agreement supports our slide-then-break kinetic model, and thus demonstrates that depletion forces can adequately explain the dynamics of aggregation of these micro-fabricated colloids and that other attractive forces, such as the van der Waals interaction, only play a minor role. § DISCUSSION Successfully self-assembling particles into a predetermined structure involves two challenges. On the one hand, the target structure should be more stable than its competitors. On the other, it must be kinetically accessible. While 3D-printed microparticles offer a remarkable flexibility in achieving complex stable structures, kinetic accessibility is potentially problematic at their relatively large scale, where diffusion is much slower than in, e.g., DNA origami. This difficulty can however be offset by a fine control over the interactions between the particles, e.g., allowing off-target bonds to quickly detach, while even favorable ones are allowed to occasionally come off to allow the particles to optimize their large-scale arrangement. In this study, we have demonstrated an experimental strategy to achieve such control. We implement reversible bonding with a lifetime directly controlled by the particles' shapes, demonstrating that 3D printing can be used to control not only an aggregate's morphology, but also its dynamics. Since the strength of our bonds is controlled both through the depletant concentration and the contact area between the colloids, our design can straightforwardly be generalized to generate a system where different bonds have different lifetimes. This potentially opens the possibility of using 3D printing to design structures based on hierarchical self-assembly, which takes advantage of the existence of several scales of bond lifetime and strength to reliably assemble complex structures. 
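The closed-form expression for Δ V_eff derived in the previous subsection can be checked numerically by averaging the Arrhenius detachment rate over the offset distribution restricted to bound configurations (|Δ x| < R). The sketch below does this with illustrative parameter values; it assumes p(Δ x) ∝ e^-|Δ x|/λ with λ = (2 c h δ)^-1, as derived above, and is not the original analysis code.

```python
import numpy as np
from scipy.integrate import quad

# Illustrative parameters (not the experimental values).
h, delta, R = 10.0, 0.05, 5.0        # particle height, depletion-layer thickness, half-disk radius (um)
c = 5.0                              # depletant number density (um^-3)
lam = 1.0 / (2.0 * c * h * delta)    # decay length of p(|dx|)

dV = lambda x: 2.0 * h * delta * (R - abs(x))          # overlap volume at offset x
dV_M = dV(0.0)

# Offset distribution restricted to bound configurations |dx| < R.
Z, _ = quad(lambda x: np.exp(-abs(x) / lam), -R, R)
p = lambda x: np.exp(-abs(x) / lam) / Z

# Arrhenius detachment rate (tau_0 set to 1; it cancels in dV_eff).
k = lambda x: np.exp(-c * dV(x))
K, _ = quad(lambda x: p(x) * k(x), -R, R)

dV_eff_numerical = -np.log(K) / c
dV_eff_closed = dV_M + np.log((1.0 - np.exp(-c * dV_M)) / (c * dV_M)) / c
print(f"numerical dV_eff   = {dV_eff_numerical:.3f} um^3")
print(f"closed-form dV_eff = {dV_eff_closed:.3f} um^3")
```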
We also demonstrate that the spatial control afforded by 3D printing can be harnessed to generate flexible aggregates whose rigidity is set by easily tuned depletion interactions. While such sliding of flat colloid surfaces in the presence of depletion attraction has previously been observed with silica cubes<cit.>, ours is to our knowledge the first implementation of this effect in a 3D-printed self-assembled system. On larger scales, this sliding can in principle be controlled through the size of the depletant, an effect that has previously been used to select between different lattice organisations<cit.>. In addition to allowing for the implementation of non-rigid self-assembled structures, we demonstrate that the flexibility of a bond has a direct influence over its lifetime through a slide-then-break mechanism, allowing one more lever to control the dynamics of colloidal self-assembly by taking advantage of the spatial control afforded by 3D-printing. While our study concentrates on the dynamics of a single inter-colloid bond, we anticipate that our design can easily be scaled up to generate particles with multiple binding sites. As this strategy opens the way to much more complex designs, flexibility could prove an asset in yet another way. Specifically, in such a context bond flexibility could compensate for imperfections in the colloids' shapes, by allowing, e.g., a ring of particles each carrying two bonds at an angle to one another to close even in cases where these angles are not exactly adjusted. We thus anticipate that the toolbox developed here could dramatically open the range of possible designs for the self-assembly of 3D-printed objects, both in the quasi-two-dimensional setting considered here and in future 3-dimensional situations. § ACKNOWLEDGMENTS This work was supported by the Défi Auto-Organisation of CNRS' mission of transverse and interdisciplinary initiatives and ANR grant (ANR-22-CE30-0024). ML was supported by ERC Starting grant 677532, ANR's Tremplin ERC grant ANR-21-CE11-0004-02, and the Impulscience® program of Fondation Bettencourt Schueller.
http://arxiv.org/abs/2408.11986v1
20240821204512
Magnetic proximity coupling to defects in a two-dimensional semiconductor
[ "Muhammad Hassan Shaikh", "Matthew Whalen", "Dai Q. Ho", "Aqiq Ishraq", "Collin Maurtua", "Kenji Watanabe", "Takashi Taniguchi", "Yafei Ren", "Anderson Janotti", "John Xiao", "Chitraleema Chakraborty" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
§ ABSTRACT The ultrathin structure and efficient spin dynamics of two-dimensional (2D) antiferromagnetic (AFM) materials hold unprecedented opportunities for ultrafast memory devices, artificial intelligence circuits, and novel computing technology. For example, chromium thiophosphate (CrPS_4) is one of the most promising 2D A-type AFM materials due to its robust stability in diverse environmental conditions and net out-of-plane magnetic moment in each layer, attributed to anisotropy in crystal axes (a and b). However, their net zero magnetic moment poses a challenge for detecting the Néel state that is used to encode information. In this study, we demonstrate the detection of the Néel vector by detecting the magnetic order of the surface layer by employing defects in tungsten diselenide (WSe_2). These defects are ideal candidates for optically active transducers to probe the magnetic order due to their narrow linewidth and high susceptibility to magnetic fields. We observed spin-polarized charge transfer in the heterostructure of bulk CrPS_4 and single-layer WSe_2 indicating type-II band alignment as supported by density functional theory (DFT) calculations. In the A-type AFM regime, the intensity of both right-handed and left-handed circularly polarized light emanating from the sample remains constant as a function of the applied magnetic field, indicating a constant polarized transition behavior. Our results showcase a new approach to optically characterizing the magnetic states of 2D bulk AFM material, highlighting avenues for future research and technological applications. Replacing ferromagnetic (FM) materials in commercial magnetic random-access memories (MRAMs) <cit.> and other applications such as artificial intelligence, neuromorphic computing, and in-memory computing<cit.> with AFM material holds the potential to improve the operation speed (from GHz to THz), the immunity to electromagnetic interference, and the memory density <cit.>. 2D AFM materials further push the limit to atomic layer thickness. The challenges are also daunting to implement AFM material-based devices since AFM materials nearly lack physical observables (resistance, voltage, etc.) that are related to their magnetic order parameter, known as the Néel vector <cit.>. Bringing the evident advantages of AFM materials into play by being able to detect the Néel vector would allow using their unique functionalities in future generations of advanced technologies. Recently, binary transition metal chalcogenides (TMCs) have garnered significant attention due to their layered-dependent magnetic properties. Due to the van der Waals forces between each layer, the material can be exfoliated down to a single layer that possesses a magnetic ground state which enables the switching between FM and AFM material behavior contingent upon the number of layers<cit.>. Chromium triiodide (CrI_3) is one of the binary TMCs that possess A-type AFM, with FM ordering within the same layer and AFM between adjacent layers. Consequently, thin flakes of a few layers can display net or zero magnetic moment depending on whether there is an odd or even number of layers. The surface layer magnetic moment was successfully detected in the A-type AFM phase by utilizing another 2D optically active material WSe_2 atop CrI_3<cit.>. WSe_2 possesses valley-selective transition and in the presence of a net out-of-plane magnetic field, the valley degeneracy can be lifted. 
Due to the high susceptibility of WSe_2 to the magnetic field, the surface layer magnetic moment of a magnetic material can be detected by measuring the energy splitting and observing differences in the intensity of valley transitions in WSe_2. However, CrI_3 is not stable in an open environment which creates a problem in using this material in practical applications<cit.>. Recently studied ternary TMCs CrPS_4 emerge as particularly noteworthy due to their stability in bulk form and high resonance capability, making them a viable candidate for advanced spintronic and memory device applications<cit.>. Notably, CrPS_4 exhibits an anisotropic crystal structure endowing each layer with net magnetic moments that are antiferromagnetically coupled, resulting in the same A-type AFM behavior as CrI_3<cit.>. This anisotropy extends to light absorption, rendering CrPS_4 suitable for polarization-based sensing and memory devices<cit.>. Its Néel temperature is 37 K. However, its A-type AFM state in a bulk form with many layers that are antiferromagnetically coupled makes it difficult to sense the small change in the surface layer magnetization. To overcome this limitation, we propose integrating CrPS_4 with defect-based bound excitons in TMDCs as a means to rectify surface layer magnetization, owing to their narrow linewidth relative to excitonic peaks, high excitonic g factor, and efficient electrical tunability <cit.>. Additionally, we present a method for detecting A-type AFM and canted antiferromagnetic (C-AFM) states using photoluminescence (PL) light polarization of bulk CrPS_4, providing insights into its magnetic behavior. We also observed constant polarized transition behavior from CrPS_4 in the A-type AFM regime. This constant polarized transition holds promise for potential applications in spintronics and ultrafast AFM-based memory devices. Our method not only enables the detection of surface layer magnetic moments but also provides valuable information about the various phases of antiferromagnetism present within bulk CrPS_4. This comprehensive understanding of the material's magnetic behavior holds immense promise for advancing a wide range of technological applications, from data storage to tunable optical devices, and beyond<cit.>. § RESULTS AND DISCUSSION In this study, we investigated the magnetic proximity interaction (MPI) between bulk CrPS_4 and single-layer WSe_2. Fig. 1(a) shows a schematic of the magnetic heterostructure utilized for studying this interaction, where the magnetic material is closely interfaced with the optically active 2D semiconductor. The properties of light emitted by the 2D semiconductor depend on the orientation of the surface layer magnetic moments within the magnetic material. This magnetic heterostructure could be utilized for potential ultrafast optical transduction. Both bulk CrPS_4 (the magnetic material) and single-layer WSe_2 (optically active semiconductor) were mechanically exfoliated using the scotch tape method, and the heterostructure was assembled by stacking both flakes together employing a poly-carbonate (PC) assisted transfer technique <cit.>, with detailed procedures outlined in the methods section. We conducted co-polarization resolved photoluminescence (PL) measurements on the sample, where the sample was excited with circularly polarized light (σ^+ or σ^-) and the emitted circularly polarized light (σ^+ or σ^-) was detected from the heterostructure. 
Room-temperature co-polarization resolved PL spectra are shown in Supplementary Figure (S) 1, wherein the peak centered around 1.35 eV corresponds to CrPS_4, while the peak centered around 1.65 eV corresponds to excitonic transitions in WSe_2. Fig. 1(b) and Fig. 1(c) illustrate the optical selection rules for optical transitions in CrPS_4 and WSe_2. At room temperature, we observed no evidence of MPI due to time-reversal symmetry, as CrPS_4 remains in a paramagnetic phase. The valley degeneracy is only lifted in the presence of a magnetic field. We cooled the sample down in a cryostat to 1.8 K, which is below the Néel temperature of CrPS_4 (37 K),<cit.> and measured the co-polarization resolved PL of two different heterostructures (Fig. 1(d) and 1(e)). Fig. 1(d) displays the PL of the heterostructure where the PL contributions from CrPS_4 and WSe_2 are indicated. As previously observed, the presence of multiple peaks in the CrPS_4 energy range can be attributed to a vibrational progression<cit.>. The peak centered around 1.65 eV corresponds to intrinsic defect states in WSe_2, i.e., defect states arising from impurities present in the material. The peak at 1.725 eV corresponds to excitonic transitions in WSe_2. Due to the broad linewidth of the defect and excitonic transitions in Figure 1(d), it is challenging to resolve the valley splitting. Additionally, we conducted the same experiment on a heterostructure consisting of CrPS_4 with defect-based bound excitons in WSe_2 (Fig. 1(e)). The defect-based bound excitons can arise from strain-induced wrinkles and nanobubbles formed during heterostructure preparation<cit.>. The intrinsic defect states of WSe_2 hybridize with the optically dark excitonic states in the conduction band of WSe_2 due to strain, which creates a localized potential within the energy band gap that funnels the excitons<cit.>. This leads to the formation of multiple optically active defect-based bound excitonic states with very narrow linewidths compared to the intrinsic defect states. In Figure 1(e), a finite polarization contrast and energy splitting can be observed in the PL spectrum of defect-based bound excitons in WSe_2 even without the application of an external magnetic field. The presence of zero-field splitting and finite polarization contrast in the PL spectrum is a signature of MPI between the surface layer of bulk CrPS_4 and single-layer WSe_2. While the net magnetic moment of bulk CrPS_4 is zero, MPI, being a short-range interaction, causes WSe_2 to experience the net out-of-plane magnetic moment of the surface layer of CrPS_4. This leads to a finite polarization contrast at low temperatures without the application of an external magnetic field. Such polarization contrast was previously observed only via the application of a Zeeman energy from an external magnetic field, which overcomes the anisotropic Coulomb exchange interaction in localized excitons<cit.>. We observe a similar polarization contrast that points to the restoration of the valley polarization inherited from the 2D excitons in optically active defects in 2D semiconductors<cit.>. In Fig. 2, we conducted co-polarization resolved PL measurements at a high magnetic field in a Faraday geometry to further study the MPI effect in the heterostructure. Fig. 2(a) and 2(b) display the PL contributions of CrPS_4 at ± 8.5 T. We observed that when the applied field aligns with the crystal +c-axis, the PL contribution from CrPS_4 was dominated by σ^+ (Fig. 
2(b)), as most of the magnetic moment of CrPS_4 aligns with the direction of the magnetic field. Conversely, when we switched the magnetic field direction and aligned it with the crystal -c-axis, the PL contribution from CrPS_4 was dominated by σ^- (Fig. 2(a)), as most of the magnetic moment of CrPS_4 attempt to align with the -c-axis direction of the crystal. In contrast, the PL contributions from intrinsic defect states and excitons in Fig. 2(d) and 2(e), and the PL contribution from the defect-based bound excitons in Fig. 2(c) and 2(f), exhibit opposite behavior compared to CrPS_4. When the B field aligns along the +c-axis (-c-axis), the PL contribution is dominated by σ^- (σ^+) from both intrinsic defect and defect-based bound excitons. We observed negligible change in the polarization contrast of excitonic transition in WSe_2 due to an increase in the defect-mediated valley scattering effect<cit.>. To further analyze the opposite behavior of the PL component from intrinsic defect states and defect-based bound excitons of WSe_2, we measured the degree of circular polarization (ρ) at different applied magnetic fields along Faraday's geometry, defined as: ρ = I(σ^+/σ^+)-I(σ^-/σ^-)/I(σ^+/σ^+)+I(σ^-/σ^-) We conducted ρ measurements at ± 8.5 Tesla (S2). S2(a) illustrates the PL range of CrPS_4, while S2(b) depicts the PL range of WSe_2. In CrPS_4, all peaks exhibit positive (negative) ρ values at +8.5 (-8.5) Tesla due to anisotropy in the co-polarization component of PL. However, in the PL range of WSe_2, the peaks around 1.65 eV and 1.725 eV corresponding to intrinsic defects and excitonic transitions exhibit an opposite behavior compared to CrPS_4 at ± 8.5 Tesla. Nonetheless, certain regions (e.g., peaks at 1.68 eV and 1.73 eV) demonstrate similar behavior to CrPS_4, which is attributed to Zeeman splitting in WSe_2, as depicted in Fig. 2(d) and 2(e). Fig. 3(a) displays ρ as a function of the magnetic field of one of the peaks of CrPS_4 at 1.355 eV, while Fig. 3(b) shows the ρ as a function of the magnetic field of the intrinsic defect peak of WSe_2 centered at 1.65 eV. Notably, the behavior of ρ for CrPS_4 and WSe_2 exhibits an opposite trend in a high magnetic field. We also observed that when CrPS_4 is in the A-type AFM state within the magnetic field range of ± 0.8 Tesla,<cit.> the ρ from CrPS_4 is non-zero but remains almost constant (Fig. 3(d)), whereas the ρ from the defect peak of WSe_2 increases in a positive (negative) direction when the field is aligned along the negative (positive) c-axis of the crystal. This behavior can be understood through the Type-II band alignment model between CrPS_4 and WSe_2, as shown in Fig. 3(f). According to this model, the specific spin orientation of electrons in the valleys of WSe_2 can transition to the spin selective state in the conduction band of CrPS_4. In the positive magnetic field scenario when most electrons of the surface layer of CrPS_4 align with the positive field, electrons at the +K valley of WSe_2 are transferred to the CrPS_4 conduction band path followed by an optical transition. This leads to a decrease in the σ^+ transition compared to σ^- in WSe_2 defect states resulting in negative ρ for WSe_2 defect states in a positive magnetic field. Conversely, when we switch the magnetic field direction, the behavior also switches accordingly. In this manner, we detect the spin-polarized charge transfer effect between CrPS_4 and WSe_2 heterostructure. 
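Written with explicit brackets, the quantity defined above is ρ = [I(σ^+/σ^+) - I(σ^-/σ^-)] / [I(σ^+/σ^+) + I(σ^-/σ^-)]. A minimal sketch of this evaluation on two hypothetical co-polarized spectra (placeholder Gaussian peaks, not measured data) is given below.

```python
import numpy as np

# Placeholder co-polarized PL spectra on a common photon-energy axis (eV).
energy = np.linspace(1.60, 1.75, 300)
I_pp = 1.0 + 0.4 * np.exp(-(energy - 1.65) ** 2 / 2e-4)   # sigma+/sigma+ intensity
I_mm = 1.0 + 0.6 * np.exp(-(energy - 1.65) ** 2 / 2e-4)   # sigma-/sigma- intensity

# Degree of circular polarization, point by point across the spectrum.
rho = (I_pp - I_mm) / (I_pp + I_mm)
print(f"rho at 1.65 eV: {rho[np.argmin(np.abs(energy - 1.65))]:+.3f}")
```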
We also observe a similar effect on ρ of defect-based bound excitons in WSe_2 as shown in the color plot in Fig. 3(c). This plot illustrates the negative (positive) values of ρ for defect-based bound excitons in positive (negative) magnetic fields, indicating the same spin-polarized charge transfer effect observed in intrinsic defect states of WSe_2. This effect was also observed in the heterostructure of a few layers of chromium triiodide and single-layer WSe_2<cit.>. However, the surface spin-polarized charge transfer in bulk Cr-based AFM material was not observed before. Fig. 3(e) illustrates the Zeeman splitting of the defect peak centered at 1.65 eV. The marked region highlights the linear behavior of Zeeman splitting when CrPS_4 is in the A-type AFM state. However, beyond this point, the energy-splitting behavior becomes non-linear and saturates at high magnetic fields. This type of behavior bears resemblance to observations of valley splitting in WSe_2 due to MPI with FM substrate europium sulfide,<cit.> which further confirms the surface layer sensing in bulk CrPS_4 and single layer WSe_2 heterostructure. After transitioning from the A-type AFM state, CrPS_4 enters the C-AFM state, where the magnetic moments align with the b-axis of the crystal<cit.>. We observe the unidirectional anisotropy in the magnetic sweep measurement in the C-AFM regime. To further understand this behavior we analyze each polarized component of CrPS_4 as a function of magnetic fields. Fig. 4(a) illustrates the weighted intensity of the σ^+ and σ^- polarized transitions of CrPS_4 which are used to plot ρ in a backward sweep direction in Fig. 3(a). In the A-type AFM regime, Fig. 3(a) demonstrates a non-zero ρ, as depicted in the weighted plot of the polarized component of CrPS_4 in Fig. 4(a). Fig. 4(b) provides a zoomed-in view of Fig. 4(a), revealing an unequal yet nearly constant polarized transition from CrPS_4 in the A-type AFM regime and exhibiting anisotropic changes in the C-AFM state within positive and negative magnetic field regimes. Although Parity-Time (PT) symmetry suggests uniform polarized component behavior within the A-type AFM regime and isotropic behavior in the C-AFM state, the observed anisotropy may arise from the formation of multiple AFM domains in bulk CrPS_4 during the transition from a paramagnetic state to an A-type AFM state upon cooling below its Néel temperature. Additionally, introducing WSe_2 atop a magnetic material is known to introduce a strong Rashba effect that significantly alters the magnetic surface textures<cit.> Further experimental studies are necessary to comprehend this behavior, such as investigating the behavior of ρ after field-cooled magnetization and varying CrPS_4 thickness or without WSe_2 atop. This unequal but nearly constant polarized transition holds promise for future ultrafast AFM-based memory devices. The polarized component of CrPS_4 PL exhibits sensitivity to laser power. We conducted ρ measurements at various excitation power levels of linearly polarized light at -8 Tesla, shown in S3. We observed a nonlinear decrease in ρ with increasing laser power. This behavior can be attributed to spin fluctuations in CrPS_4 induced by the rise in temperature due to the increase in laser power, a phenomenon previously observed in this material using NV magnetometry measurements <cit.>. We note that the behavior of ρ in both CrPS_4 and intrinsic defects in WSe_2 remains independent of excitation polarization, as demonstrated in S4. 
Plots in S4(a) and S4(b) illustrate ρ measurements under both linear and co-polarization excitation of CrPS_4 and WSe_2 intrinsic defects states, revealing that ρ remains unaffected by excitation polarization. This behavior is attributed to the anisotropic monoclinic crystal structure inherent to these materials <cit.>. In addition to experimental analysis, we also performed DFT calculations to gain better insights into the electronic structure of both CrPS_4 and its heterostructure with WSe_2. As shown in Fig. 4(c), CrPS_4 exhibits a monoclinic crystal structure with anisotropy along the crystal axes directions a and b, resulting in ferromagnetic ground states within each layer, and antiferromagnetically coupled interactions between adjacent layers. The magnetic moment per Cr atom in a single layer is found to be 2.85 μ_ B, consistent with the previously calculated value<cit.>. The b vector is normal to the ac plane while the a and c vectors are at an angle of approximately 92 degrees, owing to the monoclinic crystal structure with C_2 rotational symmetry of CrPS_4. S5 (a) shows the calculated spin-resolved crystal structure where each CrPS_4 layer exhibits a net out-of-plane magnetic moment antiferromagnetically coupled with the neighboring layer resulting in CrPS_4 being of the A-type AFM structure with a Néel temperature of approximately 36 K consistent with our experimental and previously reported results<cit.>. Further, we present the band diagram of bulk CrPS_4 with and without spin-orbit coupling, showing no significant changes as shown in S5 (c) and S5 (d). The negligible spin-orbit coupling in CrPS_4 explains why the Zeeman splitting in CrPS_4 optical transitions in our experimental data is not resolvable, making the optical transitions in CrPS_4 localized. Fig. 4d displays the calculated band gap of CrPS_4 from a few layers to bulk, closely matching the experimentally detected values. Additionally, we calculated the band gap of WSe_2 with and without spin-orbit coupling. In both cases, the band alignment between CrPS_4 and WSe_2 is type-II, further supporting our experimental findings. Furthermore, we calculated the band diagram of a single layer of CrPS_4 and WSe_2(S5 (f) and S5 (g)), where the highlighted WSe_2 and CrPS_4 bands illustrate the type-II band alignment between the CrPS_4 and WSe_2 heterostructure. Detailed information regarding the DFT calculations can be found in the supplementary information. § CONCLUSION In conclusion, we present a method for analyzing the A-type AFM and C-AFM states in optically active 2D bulk semiconductor magnets. Additionally, we successfully detected the surface layer magnetic moment of CrPS_4 by utilizing WSe_2 defect states in a magnetic heterostructure. Our prediction of type-II band alignment through DFT calculations supports the observed spin-polarized charge transfer in the heterostructure of bulk CrPS_4 and single-layer WSe_2. Due to its anisotropic monoclinic structure, the spin-polarized charge transfer in heterostructures remains independent of excitation polarization. Overall, our work highlights the potential of utilizing optically active bulk 2D AFM material as ultrafast optical memory storage devices by using its surface layer properties. Furthermore, owing to spin-polarized charge transfer, TMDC-based heterostructures emerge as promising candidates for quantum information processing. 
§ METHOD §.§ Heterostructure preparation For preparing the heterostructure, CrPS_4, WSe_2, and hexagonal boron nitride (h-BN) flakes were cleaved from the bulk using low-residue tape. The flakes are then further cleaved into thinner portions by repeated exfoliation with the tape. For CrPS_4, the thinner flakes on the tape are pressed onto the Si/SiO_2 substrate, while for WSe_2 and h-BN the flakes are pressed onto polydimethylsiloxane (PDMS); after removing the tape, multiple flakes are transferred onto the Si/SiO_2 substrate and the PDMS. The bulk CrPS_4 and single-layer WSe_2 were identified under an optical microscope using the optical contrast of the flakes and further confirmed by Raman and PL spectroscopy. The heterostructure was prepared by stacking both flakes together with h-BN encapsulation on top and bottom using PC and transferred onto the marked substrate for PL measurements. §.§ Magneto-photoluminescence The sample was positioned inside an Attocube-2100 cryostat with a superconducting magnet in Faraday geometry providing fields up to ± 9 Tesla, equipped with a confocal microscope head placed on top of the cryostat to conduct polarization-resolved photoluminescence measurements at 1.8 K and 300 K. The cryostat objective, featuring a numerical aperture (NA) of 0.82, was used to excite the sample with a 532 nm continuous wave laser. The cryostat objective provided a spot size of 791.5 nm, ensuring accurate focusing of the laser beam onto the sample. Circularly polarized light (σ^+ and σ^-) was generated and detected by the combination of a linear polarizer and a quarter wave-plate from Thorlabs. To capture the emitted light, the collection arm of the microscope was connected to a fiber-optic cable, which directed the light to the entrance port of the Teledyne HRS-750 triple-grating imaging spectrometer. The spectrometer, with a focal length of 750 mm, enabled efficient spectral analysis. Subsequently, the collimated light was diffracted from the grating within the spectrometer and directed toward a nitrogen-cooled CCD camera. The camera featured a pixel array configuration of 1340x400, which facilitated precise and detailed spectral measurements. The Supporting Information contains the computational method for the DFT calculations, room-temperature co-polarization resolved PL of the CrPS_4 and WSe_2 heterostructure, degree of circular polarization measurements of CrPS_4 and WSe_2 at ± 8.5 Tesla, a power-dependent degree of circular polarization measurement at -8 Tesla, magnetic sweep measurements of the heterostructure with excitation of different polarizations, and the electronic structure of bulk CrPS_4 and of the heterostructure between its monolayer and WSe_2. §.§ Contribution The measurements and characterization were done by M.H.S. The device fabrication was done by M.H.S. and M.W. The theoretical calculations were performed by D.Q.H., supervised by A.J. J.X., Y.R., C.C., and M.H.S. contributed to the interpretation of data. C.C. supervised the project and wrote the draft together with M.H.S. All authors contributed to the manuscript preparation and approved its final version. §.§ Acknowledgement The work was primarily supported as part of the Center for Hybrid, Active, and Responsive Materials (CHARM) funded by the National Science Foundation (NSF) DMR-2011824 Seed Award program. C.C. acknowledges partial support by the NSF Award OIA-2217786. The authors thank Xi Wang from the Department of Materials Science and Engineering, University of Delaware for letting us use their transfer station and room-temperature PL and Raman measurement setup. 
The authors thank Yi Ji from the Department of Physics and Astronomy, University of Delaware for letting us use their optical microscope. § SUPPLEMENTARY INFORMATION §.§ Computational Method The first-principles calculations were conducted using density functional theory (DFT) with periodic boundary conditions, employing the projector augmented wave (PAW) method as implemented in the Vienna Ab initio Simulation Package (VASP). The 3p, 3d, and 4s orbitals of Cr, the 3s and 3p orbitals of P, the 3s and 3p orbitals of S, the 4s, 4p, and 4d orbitals of W, and the 3s and 3p orbitals of Se were explicitly considered as valence shells in the PAW potentials. The Kohn-Sham wave functions were expanded using plane-wave basis sets with a kinetic energy cutoff of 500 eV. For optimizing geometrical structures and self-consistently determining electronic properties, we employed the strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation for short- and intermediate-range interactions, combined with the long-range van der Waals (vdW) interactions from the revised Vydrov–van Voorhis (rVV10) non-local correlation functional, collectively referred to as SCAN-rVV10. This choice of exchange-correlation function was shown to perform well for layered materials. Single layers of CrPS_4 and WSe_2 were calculated without vdW correction, while bulk and few-layer structures of CrPS_4, as well as heterostructures consisting of sandwiched single layers of CrPS_4 and WSe_2, were simulated using a slab approach including vdW. A vacuum space of at least 20 Å along the normal plane of the slab was employed to eliminate spurious interactions between slab images. For structural optimization, Γ-centered k-point meshes of 3×5×3 for bulk CrPS_4 and 3×5×1 for few-layer structures were used to sample the Brillouin zones; forces on each atom were self-consistently converged to be smaller than 0.005 eV/Å. For electronic structure calculations, denser k-point grids of 5×7×4 and 5×7×1 were used for bulk and few-layer structures, respectively. The optimized lattice constants of bulk (with a 40-atoms unit cell) are a = 10.858 Å, b = 7.259 Å, and c = 12.292 Å, where vector b is perpendicular to both a and c, while a and c are at an angle of approximately 92^0. The calculated local magnetic moment 2.85 μ_ B per Cr atom, is in good agreement with experimental measurements. To determine the band alignment between CrPS_4 and WSe_2 in heterostructures, electronic levels of all materials were aligned with respect to a common reference, which is the vacuum level. § SUPPLEMENTARY FIGURE 1 § SUPPLEMENTARY FIGURE 2 § SUPPLEMENTARY FIGURE 3 § SUPPLEMENTARY FIGURE 4 § SUPPLEMENTARY FIGURE 5
http://arxiv.org/abs/2408.11145v1
20240820190602
Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models
[ "Yuanzhe Wang", "Alexandre M. Tartakovsky" ]
cs.LG
[ "cs.LG" ]
Yuanzhe Wang (Department of Civil and Environmental Engineering, University of Illinois Urbana-Champaign, Urbana, IL 61801) and Alexandre M. Tartakovsky (University of Illinois Urbana-Champaign and Pacific Northwest National Laboratory, Richland, WA 99352; corresponding author, amt1998@illinois.edu) § ABSTRACT We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models, including operator learning models. The proposed method accounts for uncertainty in the observations and PDE and surrogate models. First, we use the surrogate model to formulate a minimization problem in the reduced space for the maximum a posteriori (MAP) inverse solution. Then, we randomize the MAP objective function and obtain samples of the posterior distribution by minimizing different realizations of the objective function. We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation with an unknown space-dependent diffusion coefficient. Among other problems, this equation describes groundwater flow in an unconfined aquifer. Depending on the training dataset and ensemble sizes, the proposed method provides similar or more descriptive posteriors of the parameters and states than the iterative ensemble smoother method. Deep ensembling underestimates uncertainty and provides less informative posteriors than the other two methods. § INTRODUCTION The parameters of a physical model are commonly found by solving the inverse partial differential equation (PDE) problem. Given that most inverse problems are ill-posed, quantifying uncertainty in inverse solutions is important. Surrogate models, including machine-learning-based models, are often used to reduce the computational cost of solving inverse problems. In this work, we aim to quantify the total uncertainty in the inverse solutions, including the uncertainty in the surrogate model. Traditionally, uncertainty in inverse solutions is quantified using the Bayesian framework, where the posterior distribution of states and parameters is defined in terms of the likelihood function and the prior distribution of the parameters. Markov Chain Monte Carlo (MCMC) and its variants such as Hamiltonian Monte Carlo (HMC) are the gold standard for sampling posteriors because these methods are proven to converge to the exact Bayesian posterior <cit.>. However, the convergence rate of these methods is negatively affected by the problem dimensionality (the curse of dimensionality or CoD). In addition, stiff constraints in the problem (i.e., small variances of the model and measurement errors) can cause the covariance matrix of the posterior distribution to become ill-conditioned, further decreasing the convergence rate <cit.>. Approximate alternatives include particle filters, transport maps <cit.>, and approximate Bayesian computation (ABC) methods <cit.>. However, most of these methods also suffer from the CoD, i.e., the number of required samples increases exponentially with the number of parameters. 
Machine learning (ML) surrogate models can reduce the computational time of MCMC methods, specifically, the likelihood evaluation time <cit.>, but this is usually insufficient for addressing the CoD. The parameter and state estimates correspodning to the maximum of the posterior distribution can be found by solving the so-called MAP (maximum a posteriori) PDE-constrained minimization problem. The disadvantage of MAP formulation is that it does not provide the uncertainty estimates and its cost increases as N^3, where N is the number of parameters <cit.>. The randomized MAP method <cit.> was proposed to quantify uncertainty in the MAP solution. In this approach, the objective function is randomized, and the samples are obtained using the Monte Carlo (MC) approach, i.e., by solving the minimization problem for different realizations of the objective function. The advantage of the MC approach is that it does not depend on the number of parameters, i.e., it does not suffer from the CoD. However, the MC error decreases inverse proportionally to the square root of the number of samples. The high cost of obtaining MAP samples creates a challenge in applying the randomized MAP method to high-dimensional problems. Here, we use an unconstrained MAP formulation substituting the PDE constraint with the surrogate model. The approximate Bayesian method is formulated by randomizing the objective function, and the posterior samples are obtained by solving the resulting stochastic minimization problem using MC sampling. Surrogate models are subject to errors. Therefore, replacing a PDE model with the surrogate model introduces additional uncertainty in the inverse solution. Our approach accounts for the total uncertainty including the surrogate model uncertainty. To account for the surrogate model uncertainty, the likelihood must be defined as a function of the surrogate model parameters as well as the PDE parameters. Therefore, the likelihood needs to be marginalized before it can be evaluated. For surrogate models with many parameters, this likelihood marginalization is computationally unfeasible. This presents a challenge for MCMC and other sampling methods requiring repetitive likelihood evaluation. The advantage of our approach is that it is likelihood-free, i.e., it does not require the likelihood evaluation. Our approach is agnostic to the surrogate model choice but benefits from using a differentiable surrogate model (i.e., models can be analytically differentiated with respect to unknown parameters or differentiated using automatic differentiation), which is critical for efficiently solving the MAP minimization problem. The examples of differentiable surrogate models include Fourier neural operators (FNO) <cit.>, deep operator networks (DeepONets) <cit.>, graph neural operators (GNO) <cit.>, and the reduced-order surrogate models such as the Principal Component Analysis Network (PCA-Net) <cit.> and Karhunen-Loève (KL) deep neural network (KL-DNN) <cit.> models. The majority of the computational cost in our approach is related to solving many minimization problems, and the cost of solving each minimization problem depends on its dimensionality, i.e., the number of unknown parameters and states. Here, we employ the KL-DNN model to reduce the dimensionality and computational cost of UQ. This work is organized as follows. The unconstrained MAP problem in the reduced space is formulated in Section <ref>. The approximate Bayesian model is presented in Section <ref>. 
The application of the model to the inverse non-linear diffusion equation is discussed in Section <ref>. Conclusions are given in Section <ref>. § LATENT-SPACE MAP FORMULATION OF THE INVERSE SOLUTION Consider an inverse partial differential equation problem ℒ(u(x,t);y(x)) = 0, subject to appropriate initial and boundary conditions. Here, ℒ is the known differential operator, u(x,t) is the state variable, and y(x) is the unknown space-dependent parameter field. Our objective is to estimate u(x,t) and y(x), including uncertainty bounds, given some measurements of u and, possibly, y, using a surrogate model mapping y(x) to u(x,t). While any surrogate model can be used, we formulate the problem for reduced-order surrogate models that map a reduced space of y(x) to the reduced space of y. Such surrogate models include the KL-DNN and PCA-Net models, which use the linear KL and PCA transformations, respectively. It is also possible to use non-linear autoencoder models, e.g., variational autoencoders. We assume that state and parameter fields allow reduced-order representations, u(x,t) ≈û (x, t, η ) and y(x) ≈ŷ (x,ξ ), where η and ξ are the vectors of parameters defining u and y in the respective latent spaces. Furthermore, we assume that the dimensionality of η and ξ is smaller than the dimensionality of u(x,t) and y(x) (i.e., the number of numerical elements or grids) in the numerical solution of Eq (<ref>). In general, the relationship between η and ξ is non-linear and, in the PCA-NET and KL-DNN methods, is modeled with DNNs as η(ξ) ≈𝒩𝒩(ξ,θ), where θ is the collection of the DNN parameters. This DNN is trained using a labeled dataset D'={ y_i(x) → u_i(x,t) }_i=1^N_train. Here, we assume that the inverse maps ξ = ŷ^-1(y(x)) and η = û^-1(u(x,t)) exist. Then, the D' dataset can be reduced to the latent space dataset D = {ξ_i →η_i }_i=1^N_train, which allows training the DNN as θ^* = min_θ[ L_F(θ) = 1/σ^2_η∑_i=1^N_train ||𝒩𝒩(ξ_i;θ) - η_i ||^2_2 + 1/σ^2_θ ||θ||^2_2 ], where L_F(θ) is the loss function, ||·||_2 is the ℓ_2 norm, the last term in L_F(θ) is the regularization term, σ^2_η is the variance of the DNN model error and σ^2_θ is the variance of the θ prior distribution. The ratio σ^2_η / σ^2_θ is known as the regularization coefficient. A reduced-order surrogate model can be formulated as u(x,t) ≈û (x, t; 𝒩𝒩( ξ;θ^* ) ). It is important to note that û can be differentiated with respect to η and ξ analytically or using AD, i.e., û is a differentiable model. With this general definition of the reduced-order surrogate model, we can formulate the inverse MAP solution in the reduced space of ξ. We assume that N_s^y measurements of y, y^s = [ y_1^s,..., y_N_s^y^s ]^T, at locations x^y = [ x_1^y,..., x_N_s^y^y ]^T and N_s^u measurements of u, u^s = [ u_1^s,..., u_N_s^u ^s ]^T, at locations (in space-time) (x^u,t^u) = [ (x_1^u,t_1^u),..., (x_N_s^u^u, x_N_s^u^u) ]^T are available. Then, we formulate the inverse solution as ξ^*=min_ξ L_I(ξ,θ^*), where the loss function is defined as L_I(ξ,θ) = γ || û^s (𝒩𝒩(ξ;θ) ) - u^s ||_2^2 + γ_y || ŷ^s (ξ) - y^s ||_2^2 +γ_I ξ_2^2. Here, û^s (ξ,θ ) = [û (x_1^u, t_1^u; 𝒩𝒩(ξ,θ ) ),..., û (x_N_s^u^u, t_N_s^u^u; 𝒩𝒩(ξ,θ ) )]^T and ŷ^s (ξ) = [ŷ (x_1^y,ξ ),..., ŷ (x_N_s^y^y,ξ )]^T are the vectors of the û and ŷ estimates of the u and y measurements, correspondingly, and γ_I is the regularization coefficient in the ℓ_2 regularization term. 
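As a concrete (and deliberately simplified) illustration of this reduced-space MAP problem, the sketch below minimizes a loss of the form of L_I over ξ with automatic differentiation, using a randomly initialized stand-in network for 𝒩𝒩(ξ;θ^*), a random linear map as a stand-in for the decoder û^s, and synthetic measurements; the y-data term is omitted. None of the dimensions or weights are taken from the paper.

```python
import torch

# Stand-in reduced-space MAP inversion: minimize L_I(xi) for fixed surrogate parameters.
torch.manual_seed(0)
n_xi, n_eta, n_obs = 150, 90, 25

nn_theta = torch.nn.Sequential(                     # placeholder for the trained NN(xi; theta*)
    torch.nn.Linear(n_xi, 64), torch.nn.Tanh(), torch.nn.Linear(64, n_eta))
for p in nn_theta.parameters():
    p.requires_grad_(False)

U = torch.randn(n_obs, n_eta)                       # placeholder decoder u_hat^s evaluated at obs points
u_s = U @ nn_theta(torch.randn(n_xi))               # synthetic measurements from a "true" xi

gamma, gamma_I = 1.0e4, 1.0                         # data-misfit and regularization weights
xi = torch.zeros(n_xi, requires_grad=True)
opt = torch.optim.Adam([xi], lr=1e-2)

for _ in range(2000):
    opt.zero_grad()
    u_pred = U @ nn_theta(xi)                       # u_hat^s(NN(xi; theta*))
    loss = gamma * torch.sum((u_pred - u_s) ** 2) + gamma_I * torch.sum(xi ** 2)
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```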
In the Bayesian interpretation, the weights γ_h and γ_y are inversely proportional to the variance of the errors in the u^s and y^s measurements, respectively. Once ξ^* is computed, the estimate of y and u are found as y≈ŷ(x; ξ^*) and u(x,t) ≈û (x, t; 𝒩𝒩( ξ^*;θ ) ), respectively. § APPROXIMATE BAYESIAN PARAMETER ESTIMATION §.§ KL-DNN surrogate model So far, the model presentation was agnostic to the type of the dimension reduction operators ŷ(x) and û(x). In this section, we present a Bayesian parameter estimation framework for the linear operators ŷ(x) and û(x) defined by the Karhunen-Loeve expansions (KLEs) as in the KL-DNN PCA-NET surrogate models. Later, we formulate a Bayesian framework for non-linear operators. In the KL-DNN surrogate model, the state u is represented by a space-time-dependent KLE as u(x,t) ≈û (x, t, η ) = u̅(x,t) + ∑^N_η_i = 1ϕ_i(x,t) √(λ_i)η_i, where û is the KLE of u, η = (η_1, ..., η_N_η)^T is the vector of unknown parameters and u̅(x,t), ϕ_i(x,t), and λ_i are estimated as the properties (the ensemble mean and the eigenfunctions and eigenvalues of the covariance) of a random process that can provide an accurate statistical representation of u(x,t). This random process is constructed by sampling y(x) from its prior distribution, solving the PDE (<ref>) for each sample of y(x), and computing u̅(x,t) and C_u(x,x',t,t') as the sample mean and covariance, respectively <cit.>. The eigenvalues λ_i are organized in descending order and truncated according to the desired tolerance, rtol: ∑^∞_i = N_η + 1λ_i/∑^∞_i = 1λ_i≤rtol. The space-dependent parameters are represented with the standard (space-dependent) KL expansion: y(x) ≈ŷ (x, ξ ) = y̅(x) + ∑^N_ξ_i = 1χ_i(x) √(β_i)ξ_i, where ŷ is the KLE of y, ξ = (ξ_1, ..., ξ_N_ξ)^T is the vector of parameters, y̅(x) is the prior mean, and χ_i(x) and β_i are the eigenfunctions and eigenvalues of C_y(x,x'), the prior covariance of y(x). The ŷ expansion is truncated for the desired tolerance according to the criterion similar to Eq (<ref>). §.§ Randomized algorithm for parameter estimation with Gaussian model error There are four main sources of uncertainty in the inverse solution given by the minimization problem (<ref>) and (<ref>), the ill-posedness of the inverse solution, measurement errors, and the PDE and surrogate model errors. The PDE model errors are due to the assumptions in the PDE model and errors in the numerical PDE solution. The surrogate model errors result from the KLE truncation, DNN regression error, and uncertainty due to DNN training. The measurement uncertainty is represented in the Bayesian framework as u^s = u_T+ϵ_u and y^s = y_T + ϵ_y, where u_T is the vector of the “true” state of the system at the measurement locations, y_T is the vector of the true values of the parameter field at the y measurement locations, and ϵ_u and ϵ_y are the vectors of u and y measurement errors, respectively. The u model uncertainty can be represented as u_T = u(ξ) + ϵ_M = û^s (𝒩𝒩(ξ;θ^*) ) + ϵ̂+ ϵ_M, where u(ξ) is the vector of the PDE solutions at the u measurement locations given ξ, û^s(ξ,θ^*) is the vector of the surrogate model estimates of u at the measurement locations, and ϵ̂ and ϵ_M are the vectors of errors in the surrogate and PDE models, respectively. Here, we assume that N_ξ and N_η are chosen such that the uncertainty in the KLE models due to truncation is negligible relative to the other sources of uncertainty. 
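For reference, the sketch below builds a truncated KL expansion of y(x) from Monte Carlo samples of a placeholder Gaussian prior on a one-dimensional grid; the same mean/covariance/eigendecomposition steps apply to the space-time KLE of u. The covariance model, grid, and tolerance are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_samples, rtol = 200, 1000, 1e-2
x = np.linspace(0.0, 1.0, n_x)

# Placeholder prior: zero-mean Gaussian samples with a squared-exponential covariance.
C = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.1 ** 2))
L = np.linalg.cholesky(C + 1e-10 * np.eye(n_x))
y_samples = (L @ rng.standard_normal((n_x, n_samples))).T        # shape (n_samples, n_x)

# Sample mean and covariance, eigendecomposition, and truncation by rtol.
y_bar = y_samples.mean(axis=0)
beta, chi = np.linalg.eigh(np.cov(y_samples, rowvar=False))
beta, chi = beta[::-1], chi[:, ::-1]                             # descending eigenvalues
tail = 1.0 - np.cumsum(beta) / beta.sum()
N_xi = int(np.argmax(tail <= rtol)) + 1
print(f"retained N_xi = {N_xi} of {n_x} modes")

# Reduced coordinates of each sample: xi = diag(beta)^(-1/2) chi^T (y - y_bar).
xi = (y_samples - y_bar) @ chi[:, :N_xi] / np.sqrt(beta[:N_xi])
```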
Therefore, there is no uncertainty in the KLE model of y ( y_T = ŷ(ξ) ), and the uncertainty in the KLE model of u(ξ) is only due to uncertainty in the DNN model of η. It is common to assume that the measurement errors are realizations of independent identically distributed (i.i.d.) random variables. Also, it is commonly assumed that ϵ̂ is drawn from a Gaussian distribution. Here, under these assumptions, we formulate the Bayes rule for the posterior distribution of ξ and an algorithm for approximate sampling of this distribution. In Section <ref>, we propose a Bayes rule and a sampling algorithm for non-Gaussian model errors. In the Bayesian approach, ξ is treated as a random process with the posterior distribution P(ξ | d), given the measurements d = (u^s,y^s), defined as: P(ξ | d) ∝ P(d | ξ)P( ξ), where P(ξ) is the prior distribution of ξ and P(d| ξ) is the likelihood function. The posterior satisfies the condition ∫ P(ξ | d) dξ = 1. The likelihood corresponding to the Gaussian independent errors assumption is P(d | ξ ) = ( 1/√(2π)σ )^N_s^u ( 1/√(2π)σ_y^s )^N_s^yexp (-1/2δu^TΣ^-1δu -1/2δy^TΣ_y^-1δy ), where δu= û^s (𝒩𝒩_η^*(ξ;θ) ) - u^s, δy = ŷ^s (ξ) - y^s, Σ = Iσ^2 (σ^2 =σ^2_u^s+ σ^2_M+ σ^2_û), Σ_y = Iσ^2_y^s (I is the identity matrix), and σ^2_û, σ^2_M, σ^2_u^s, and σ^2_y^s are the variances of the KL-DNN and PDE model errors and u and y measurement errors, respectively. Assuming that the prior distribution of y is Gaussian, the prior distribution of ξ, the coefficients on the KLE, is P(ξ) = (1/√(2π)σ_ξ)^N_ξΠ_i=1^N_ξexp[ -ξ_i^2/2σ^2_ξ], where σ_ξ^2=1 is the prior variance of ξ. For the weights in the loss function (<ref>) set to γ = σ^-2, γ_y = σ^-2_y^s, and γ_I = σ_ξ^-2, the solution ξ^* of the minimization problem (<ref>) provide a mode of the P(d | ξ ) distribution. MCMC methods can be used to sample low-dimensional P(ξ | d) posteriors. Here, we focus on high-dimensional problems (systems described by parameter fields with relatively large N_ξ). For such systems, we propose the approximate “randomize-then-minimize” Bayesian approach, where the ξ posterior samples are found by minimizing a stochastic objective function obtained by adding random noise in the (deterministic) loss function (<ref>) as L^G_I (ξ; θ, α^u, α^y, β) = 1/2σ^2û^s( 𝒩𝒩_η(ξ;θ))-u^s - α^u_2^2 + 1/2σ_y^s^2ŷ^s ( ξ)-𝐲^s-α^y_2^2 + 1/2σ^2_ξ‖ξ-β‖_2^2, where α^u = [α_1^u,...,α_N_s^u^u ]^T is the vector of i.i.d. normal random valuables N(0,σ^2), α^y = [α_1^y,...,α_N_s^y^y ]^T is the vector of i.i.d. normal random valuables N(0,σ_y^s^2), and β = [β_1,...,β_N_ξ ]^T is the vector of i.i.d. normal random valuables N(0,σ_ξ^2). The first term in the loss function is obtained by subtracting Eq (<ref>) from Eq (<ref>) and defining the random noise as α^u= ϵ_u-ϵ_M-ϵ̂. Since ϵ_u, ϵ_M, and ϵ̂ are the vectors of zero-mean independent Gaussian variables, α^u is also the vector of independent Gaussian random variables with zero mean and variance σ^2 =σ^2_u^s+ σ^2_M+ σ^2_û. The second term in the loss function enforces Eq (<ref>). The last term in the loss function expresses the prior knowledge. The samples {ξ^(i)}_i=1^N_ens are obtained by minimizing the loss function L^G_I (ξ; θ^*, α^u(i), α^y(i), β^(i)), where α^u(i), β^(i), and α^y(i) are the samples of the α^u, β, and α^y random variables. The remaining questions are how to estimate σ^2_û and, if necessary, relax the assumption of the ϵ̂ Gaussian distribution. 
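A minimal randomize-then-minimize sketch is given below: each approximate posterior sample of ξ is obtained by perturbing the data-misfit and prior terms with Gaussian noise and re-solving the minimization of L^G_I. The stand-in forward map, noise levels, and dimensions are illustrative placeholders, not the paper's settings.

```python
import torch

torch.manual_seed(1)
n_xi, n_eta, n_obs, n_ens = 20, 10, 15, 50
sigma, sigma_xi = 0.05, 1.0

net = torch.nn.Sequential(torch.nn.Linear(n_xi, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, n_eta))      # placeholder NN(xi; theta*)
for p in net.parameters():
    p.requires_grad_(False)
U = torch.randn(n_obs, n_eta)                               # placeholder decoder at obs points
u_s = U @ net(torch.randn(n_xi))                            # synthetic measurements

samples = []
for _ in range(n_ens):
    alpha_u = sigma * torch.randn(n_obs)                    # randomize the data term
    beta = sigma_xi * torch.randn(n_xi)                     # randomize the prior term
    xi = torch.zeros(n_xi, requires_grad=True)
    opt = torch.optim.Adam([xi], lr=5e-2)
    for _ in range(500):
        opt.zero_grad()
        res = U @ net(xi) - u_s - alpha_u
        loss = ((res ** 2).sum() / (2 * sigma ** 2)
                + ((xi - beta) ** 2).sum() / (2 * sigma_xi ** 2))
        loss.backward()
        opt.step()
    samples.append(xi.detach().clone())

xi_post = torch.stack(samples)                              # approximate posterior samples
print("posterior std of xi[0]:", float(xi_post[:, 0].std()))
```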
The methods for sampling the posterior distribution of the DNN parameters and quantifying uncertainty in the Kl-DNN model, including σ^2_û, are described in the following section. §.§ Uncertainty in the forward KL-DNN model To quantify the DNN model uncertainty, two methods were proposed in <cit.>, the deep ensembling (DE) method quantifying uncertainty due to the DNN parameter initialization in solving the minimization problem in Eq (<ref>) and the randomized KL-DNN (rKL-DNN) method, which also accounts for uncertainty due to the DNN and training datasets finite size resulting in approximation errors. In both approaches, θ^* is treated as a random vector Θ^*. In the DE method, the minimization problem (<ref>) is randomly initialized and solved N_ens times, yielding the ensemble of DNN parameters {θ^(i)_DE}_i=1^N_ens, which correspond to the locations of the modes of the θ posterior distribution given the Gaussian prior and likelihood functions. It should be noted that the θ^(i)_DE samples define a distribution of Θ^* that is different from the Bayesian posterior distribution of Θ^*. In the rKL-DNN method, the posterior distribution is approximately sampled by minimizing the randomized L_F loss function as L_rF(θ) = 1/σ^2_η∑_i=1^N_train ||𝒩𝒩(ξ_i, θ) - η^data_i - α_i||^2_2 + 1/σ^2_θ ||θ -β||^2_2, where α_i (i=1,...,N_train) and β are vectors of i.i.d. Gaussian random variables with zero mean and variances σ_η^2 and σ_θ^2, respectively. Minimizing the randomized loss function for different realizations of {α_i }_i=1^N_train, β, and the random initial guesses of θ produces samples {θ^(i)_r}_i=1^N_ens of the posterior distribution of θ. The σ_η^2 and σ^2_θ parameters are chosen such as to maximize the log predictive probability (LPP) with respect to testing data <cit.>: LPP = -∑_i = 1^N_x∑_j = 1^N_t{ [ u_rKL-DNN (x_i,t_j;ξ_test) - u_test ( x_i,t_j ) ]^2 / 2 σ^2_rKL-DNN (x_i,t_j; ξ_test ) + 1/2log [2πσ^2_rKL-DNN (x_i,t_j; ξ_test))] }, where u_rKL-DNN (x_i,t_j;ξ_test) and σ^2_rKL-DNN(x_i,t_j;ξ_test)) are the sample mean and variance of u(x,t) given the parameters ξ_test computed from the rKL-DNN method as u̅_rKL-DNN(x,t; ξ) = 1/N_ens∑_i=1^N_ensû (x, t, 𝒩𝒩( ξ;θ^(i)_r ) ) and σ^2_rKL-DNN(x,t; ξ) = 1/N_ens-1∑_i=1^N_ens [û (x, t, 𝒩𝒩( ξ;θ^(i)_r ) ) - u̅_rKL-DNN(x,t; ξ)]^2. §.§ Accounting for the surrogate model uncertainty in the inverse solution For the uncertain (random) DNN model, the likelihood given by Eq (<ref>) becomes a function of the random DNN parameter vector Θ, P(d | ξ, Θ ), and must be marginalized as P(d,D | ξ ) = ∫ P(d | ξ, Θ ) P(Θ|D) dΘ before the posterior given by Bayes's rule (<ref>) can be sampled. Here, d is the vector of the u “field” measurements, and D is the synthetic dataset for training the forward surrogate model. Theoretically, the samples {θ^(i)_r}_i=1^N_ens of the posterior distribution P(Θ|D) can be used to numerically compute the marginalized likelihood in Eq (<ref>). However, given the high dimensionality of Θ, such computations of the marginal likelihood (which have to be performed multiple times in MCMC and similar sampling methods) are unfeasible. Here, we propose a randomized algorithm for sampling the posterior distribution of ξ, which avoids P(d,D | ξ ) evaluations. 
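The LPP metric above can be evaluated directly from ensemble predictions; a minimal sketch (with placeholder reference and ensemble arrays, not the paper's data) is shown below.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, n_pts = 100, 500                              # ensemble size, number of space-time points
u_test = np.sin(np.linspace(0.0, 3.0, n_pts))        # placeholder reference solution
u_ens = u_test + 0.05 * rng.standard_normal((n_ens, n_pts))   # placeholder ensemble predictions

u_mean = u_ens.mean(axis=0)                          # sample mean over the ensemble
u_var = u_ens.var(axis=0, ddof=1)                    # sample variance over the ensemble

lpp = -np.sum((u_mean - u_test) ** 2 / (2.0 * u_var)
              + 0.5 * np.log(2.0 * np.pi * u_var))
print(f"LPP = {lpp:.1f}")
```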
We assume that the errors in the randomized KL-DNN model are unbiased, i.e., the KL-DNN error model in Eq (<ref>) can be rewritten as u(ξ) = u̅_rKL-DNN(ξ) + u'_rKL-DNN(ξ,θ) = û (𝒩𝒩( ξ;θ ) ), where u̅_rKL-DNN is the vector of the mean KL-DNN predictions of u at the measurement locations and u'_rKL-DNN ( ξ; θ ) is the vector of zero-mean fluctuations (errors). In doing this, we replace û(ξ,θ^*), the KL-DNN estimate of u corresponding to one of modes of the Θ distribution, with u̅_rKL-DNN(ξ) and the error vector ϵ̂ with the perturbation vector defined as u'_rKL-DNN(ξ,θ) = û (𝒩𝒩( ξ;θ ) ) - u̅_rKL-DNN(ξ). The advantage of this error model is that it allows for a non-Gaussian correlated error distribution defined by the samples {u'_rKL-DNN(ξ,θ^(i)_r) }_i=1^N_ens. Then, we can rewrite Eq (<ref>) as u_T = û^s (𝒩𝒩(ξ;θ) ) + ϵ_M, and the (Gaussian) loss L^G_I as L^nG_I (ξ; Θ, α^u, α^y, β) = 1/2σ^2û^s( 𝒩𝒩_η(ξ;Θ))-u^s - α̃^u_2^2 + 1/2σ_y^s^2ŷ^s ( ξ)-𝐲^s-α^y_2^2 + 1/2σ^2_ξ‖ξ-β‖_2^2, where α̃^u = [α_1^h,...,α_N_s^h^h ]^T is the vector of i.i.d. normal random valuables N(0,σ̃^2) and σ̃^2 =σ^2_u^s+ σ^2_M. The samples {ξ^(i)_r }_i=1^N_ens are obtained by minimizing the loss function L^nG_I: ξ^(i)_r = min_ξL^nG_I (ξ; θ^(i)_r, α^u(i), α^y(i), β^(i)). Then, the mean y_post(x) and variance σ^2_y,post(x) of the y(x) posterior distribution can be computed as y_post (x)= 1/N_ens∑_i=1^N_ensŷ ( x,ξ^(i)_r ) and σ^2_y, post(x) = 1/N_ens-1∑_i=1^N_ens ( ŷ (x, ξ^(i)_r ) - y_post(x) )^2. We term this method for computing the approximate posterior distribution of the inverse solution the randomized inverse KL-DNN method or rI-KL-DNN. It can be shown that if û( 𝒩𝒩(ξ;θ)) is linear in ξ and θ, then the proposed randomized approach converges to the exact posterior given by the Bayes rule <cit.>. For the non-linear surrogate model û (as is the case for the KL-DNN model), the possible bias in the randomized algorithms can be removed using Metropolis rejection algorithms <cit.>. However, it was found that the rejection rate in randomized algorithms is very low <cit.>. Therefore, in this work, we accept all samples generated by the rI-KL-DNN algorithm. The rI-KL-DNN algorithm can be generalized for a generic surrogate model û ( y;θ), in which case the randomized loss function becomes L̃^nG_I (y; θ, α^u, α^y, β) = 1/2σ^2û^s( y;θ)-u^s - α^u_2^2 + 1/2σ_y^s^2ŷ^s -y^s-α^y_2^2 + 1/2σ^2_y‖y-β̃‖_2^2, where y is the vectors of y(x) values accurately describing y(x), β̃ is the vector of random variables, whose distribution is given by the prior distribution of y and σ^2_y is the prior variance of y. Finally, we describe the DE approach for the inverse KL-DNN method as an extension of the DE-KL-DNN method. In this approach, the samples {ξ^(i)_DE}_i=1^N_ens are obtained by minimizing the loss function L_I: ξ^(i)_DE = min_ξL_I (ξ; θ^(i)_DE ). Then, the mean y_DE(x) and variance σ^2_y,DE(x) of the DE y(x) estimate are obtained as y_DE (x)= 1/N_ens∑_i=1^N_ensŷ ( x,ξ^(i)_DE ) and σ^2_y, DE(x) = 1/N_ens-1∑_i=1^N_ens ( ŷ (x, ξ^(i)_DE ) - y_DE(x) )^2. § NON-LINEAR TWO-DIMENSIONAL DIFFUSION EQUATION §.§ Problem formulation In this section, we test the DE-KL-DNN and rI-KL-DNN methods for groundwater flow in a synthetic unconfined aquifer known as the Freyberg problem <cit.>. 
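Once the samples {ξ_r^(i)} have been computed, the posterior statistics of y(x) follow directly from the KL expansion; a minimal sketch with placeholder eigenfunctions, eigenvalues, and posterior samples is given below.

```python
import numpy as np

# Turning posterior samples {xi^(i)} into the posterior mean and variance of y(x)
# through the KL expansion (all inputs below are placeholders).
rng = np.random.default_rng(3)
n_x, n_xi, n_ens = 200, 150, 100
x = np.linspace(0.0, 1.0, n_x)
y_bar = np.zeros(n_x)                                   # prior mean of y(x)
chi = rng.standard_normal((n_x, n_xi)) / np.sqrt(n_x)   # stand-in eigenfunctions chi_k(x)
beta = 1.0 / np.arange(1, n_xi + 1) ** 2                # stand-in eigenvalues beta_k
xi_samples = rng.standard_normal((n_ens, n_xi))         # stand-in posterior samples of xi

# y^(i)(x) = y_bar(x) + sum_k chi_k(x) sqrt(beta_k) xi_k^(i)
y_samples = y_bar + xi_samples @ (chi * np.sqrt(beta)).T
y_post = y_samples.mean(axis=0)
var_post = y_samples.var(axis=0, ddof=1)
print(y_post.shape, var_post.shape)
```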
The governing equation for this problem is non-linear and has the form: S_y u(x,t) ∂ u(x,t)/∂ t= ∇·(K(x) u(x,t) ∇ u(x,t) ) + u(x,t) [f(t)+g(x,t)] where S_y is the specific yield (here, assumed to be constant and known), K(x) is the hydraulic conductivity, f(t) is the time-dependent recharge, and g(x,t) is the source term due to pumping wells and the interaction with a river. The computational domain, boundary conditions, and the discretization mesh are shown in Figure <ref> and, along with the initial condition, are discussed in detail in <cit.>. Figure <ref> also shows the reference log conductivity field y_ref (x) = ln K_ref (x), which is generated as a realization of a correlated random field using the software package PyEMU <cit.> as described in <cit.>. PyEMU is also used to generate N_train realizations of y(x), { y^(i)(x) }_i=1^N_train, to train the KL-DNN model. The reference u field, u_ref(x,t) is found by solving Eq (<ref>) with y(x)=y_ref(x) using the MODFLOW 6 (MF6) software <cit.> as described in <cit.>. Similarly, MF6 is used to generate { u^(i) (x,t) }_i=1^N_train fields using { y^(i)(x) }_i=1^N_train as inputs. The samples { y^(i) = ln K^(i) (x) }_i=1^N_train and { u^(i) (x,t) }_i=1^N_train are used to construct the KLEs of y(x)=ln K(x) and u(x,t) and to train the DNNs {𝒩𝒩_η(ξ;θ^(i)_r ) }_i=1^N_ens and {𝒩𝒩_η(ξ;θ_DE^(i)) }_i=1^N_ens using the rKL-DNN and DE-KL-DNN methods. Following <cit.>, we set N_ξ=150 (rtol=0.069), N_η = 90 (rtol=0.00045), N_ens = 100, σ_η^2=10^-8, and σ_θ^2=10^-3. §.§ Uncertainty in the forward surrogate model Here, we quantify errors and uncertainty in the KL-DNN model. Figure <ref> shows the reference hydraulic head field u_ref at times t_1=10, t_2= 11, and t_3=12 years, the point difference u̅_rKL-DNN(x,t)-u_ref(x,t) (error in the mean rKL-DNN solution) and the standard deviation in the rKL-DNN solution. The spatial distribution of errors and standard deviations is not uniform. The maximum error and standard deviation locations coincide with the maximum u location. The maximum error and standard deviation are about 0.3% of the maximum u value. §.§ Total uncertainty in the inverse solution In this section, we use rI-KL-DNN and DE-KL-DNN methods to obtain inverse solutions and quantify uncertainty in these solutions. We benchmark these methods against the iterative ensemble smoother (IES) method for several N_train values. IES seeks to minimize errors between ensemble model outputs and observations by adjusting the model parameter ensemble. The Monte Carlo specification of a prior parameter ensemble yields, upon model forward run, an ensemble of model outputs, whose residuals at observed locations and times can be used in an ensemble form of the Gauss-Levenberg-Marquardt (GLM) algorithm. This form of the GLM approximates the Jacobian of parameters and model outputs from their empirical error covariances using a linear regression model <cit.>. Successive iterations (or batch runs) of the ensemble smoother (model forward runs followed by GLM updates of the parameters) reduce the ensemble output residuals and provide samples of the estimated parameter distribution. These samples are used to compute the mean y_IES (x) and variance σ^2_y, IES(x) of y_ref(x). For a fair comparison between IES and KL-DNN inverse solutions, the total number of MF6 (forward model) calls in the IES algorithm is set to N_train. Three IES algorithm iterations are found to be sufficient for the considered problem. 
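A single, undamped ensemble-smoother update in the spirit of the GLM iteration described above can be sketched as follows; the actual PEST++ IES implementation adds Marquardt-style regularization, localization, and other safeguards that are omitted here, and the linear toy model at the end is purely illustrative.

import numpy as np

def ensemble_smoother_update(M, D, d_obs, sigma_d, rng=np.random.default_rng(0)):
    """One ensemble-smoother (Kalman-type) update of the parameter ensemble.

    M: (N_ens, n_m) parameter ensemble, D: (N_ens, n_d) corresponding model
    outputs at the observation locations, d_obs: (n_d,) observations,
    sigma_d: observation noise standard deviation (a scalar here for simplicity)."""
    N_ens = M.shape[0]
    Am = (M - M.mean(axis=0)).T / np.sqrt(N_ens - 1)   # (n_m, N_ens) anomalies
    Ad = (D - D.mean(axis=0)).T / np.sqrt(N_ens - 1)   # (n_d, N_ens) anomalies
    C_md = Am @ Ad.T                                   # empirical cross-covariance
    C_dd = Ad @ Ad.T                                   # empirical output covariance
    R = sigma_d**2 * np.eye(d_obs.size)
    K = C_md @ np.linalg.solve(C_dd + R, np.eye(d_obs.size))   # regression "gain"
    d_pert = d_obs + sigma_d * rng.standard_normal((N_ens, d_obs.size))
    return M + (d_pert - D) @ K.T

if __name__ == "__main__":                     # toy check with a linear forward model
    rng = np.random.default_rng(2)
    G = rng.standard_normal((10, 3))
    m_true = np.array([1.0, -2.0, 0.5])
    d_obs = G @ m_true
    M = rng.standard_normal((200, 3))          # prior ensemble
    for _ in range(3):                         # three smoother iterations, as in the text
        M = ensemble_smoother_update(M, M @ G.T, d_obs, sigma_d=0.01, rng=rng)
    print("ensemble mean:", M.mean(axis=0).round(2))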
We assume that noisy measurements u^s of u(x,t) are available at 25 instances of time at the 13 locations marked by crosses in Figure <ref>. Reliable direct measurements of the aquifer properties are usually unavailable in real-world applications. Therefore, we do not use any measurements of y to obtain the inverse solution. We also assume that the PDE model is exact and set σ^2_M=0. The u measurements are generated as u^s = u_ref + ϵ_u where u_ref is the vector of the u_ref(x,t) values at the measurement locations and ϵ_u is the vector of uncorrelated zero-mean Gaussian variables with the variance σ^2_y^s. Figures <ref> and <ref> show the means and standard deviations of the estimated y(x) and the coverages (maps showing where y_ref(x) is inside the predicted confidence intervals) obtained with the rI-KL-DNN, DE-KL-DNN, and IES methods for σ^2_y^s = 10^-2 and 10^-4, respectively. The values of ℓ_2 errors in the y estimate given by the posterior mean, LPPs, and the percentages of coverage are given in Table <ref>. We find that the three methods give similar mean estimates of y(x) for both σ^2_y^s values. The rI-KL-DNN and IES methods predict the posterior y variances of the same order of magnitude, while the DE-KL-DNN method predicts an order of magnitude smaller variances. The coverages and LPPs are also similar in the rI-KL-DNN and IES methods but are significantly lower in the DE-KL-DNN method. As expected, the y estimates obtained with less noisy measurements are more accurate (have smaller ℓ_2 errors) and more descriptive (have larger LPP) than those obtained with noisier measurements. For σ_y^s^2=0.01, the DE method has the smallest ℓ_2 errors and IES has the largest error. For σ_y^s^2=10^-4, the error is smallest in rI-KL-DNN and largest in IES. rI-KL-DNN has the largest LPP and DE-KL-DNN has the lowest LPP for both values of σ_y^s^2. Next, we study how the quality of the y estimates in the three methods depends on N_train. Figure <ref> shows the ℓ_2 and ℓ_∞ errors, coverages, and LPPs as functions of N_train for σ_y^s^2 = 10^-4 and 10^-2. In general, the errors decrease and the coverages and LPPs increase with increasing N_train. The exception is IES where for the larger σ_y^s^2, the errors slightly increase with N_train. For all N_train values, the ℓ_2 errors in the DE-KL-DNN and rI-KL-DNN methods are similar and smaller than in the IES method. The ℓ_∞ errors are similar in the three methods for N_train≥ 500, but for smaller N_train, the DE-KL-DNN and rI-KL-DNN ℓ_∞ errors are 50% smaller than in the IES method. Overall, rI-KL-DNN and DE-KL-DNN provide the highest and lowest LPPs, respectively. The IES method exhibits the highest coverage, while the DE-KL-DNN method has the lowest coverage. In all methods, the errors are smaller and the coverages and LPPs are larger for smaller σ_y^s^2. Based on these results, we conclude that the rI-KL-DNN method yields the most informative posterior distributions, whereas the DE-KL-DNN method produces the least informative posterior distributions. Among the three methods, the performance of the IES method is most negatively affected by the small training (ensemble) size. §.§ Uncertainty in the predicted and forecasted u fields In this section, we study the ability of the y fields estimated with different methods to predict u_ref. We also study the ability of the estimated y fields to forecast u under arbitrary conditions. 
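The comparisons in this and the previous section rely on the same metrics; a minimal sketch of how the ℓ_2 and ℓ_∞ errors and the coverage may be computed from a posterior mean and variance is given below. The two-standard-deviation interval width is an assumption of the sketch (the text does not fix the confidence level), and the LPP can be evaluated as in the earlier forward-model sketch.

import numpy as np

def error_norms(y_est, y_ref):
    """l2 and l_inf errors of a point estimate; whether the l2 error is normalized
    by ||y_ref|| is a convention not fixed by the text, so the raw norms are returned."""
    diff = np.ravel(y_est - y_ref)
    return np.linalg.norm(diff), np.max(np.abs(diff))

def coverage(y_mean, y_var, y_ref, n_std=2.0):
    """Fraction of points where the reference lies inside mean +/- n_std * std."""
    return float(np.mean(np.abs(y_ref - y_mean) <= n_std * np.sqrt(y_var)))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    y_ref = rng.standard_normal(500)
    y_mean = y_ref + 0.1 * rng.standard_normal(500)
    y_var = 0.04 * np.ones(500)
    print("l2, linf:", error_norms(y_mean, y_ref))
    print("coverage:", coverage(y_mean, y_var, y_ref))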
Specifically, we forecast u for a scenario where the pumping rates are increased by 50% and the recharge rate is reduced by 40% compared to those used for computing u_ref. Here, we compute u using MODFLOW. Figure <ref> shows the ℓ_2 and ℓ_∞ errors, coverages, and LPPs in the predicted u_ref field as functions of N_train. Table <ref> lists the values of these variables for N_train=5000. The errors are of the same order of magnitude in the three methods. The accuracy of the rI-KL-DNN-based predictions increases with increasing N_train. This trend is reversed for the DE-KL-DNN method. The IES-based estimate is less sensitive to N_train. However, for N_train =100, rI-KL-DNN has an order-of-magnitude larger LPP and 2-3 times larger coverage than the IES method. LPPs and coverages are similar in rI-KL-DNN and IES for N_train≥ 500. Figure <ref> and Table <ref> show the accuracy of the u forecast obtained with y estimated from the three methods for different N_train and σ^2_y^s. The three methods have similar performance for forecasting and prediction. For N_train=100, the rI-KL-DNN-based forecast is more informative with larger LPPs and coverages than those in IES. The DE-KL-DNN method produces overall lower coverages and LPPs than the other two methods, except for N_train=100 when the DE LPPs and coverages are similar to those in IES. These results lead to the following conclusions: rI-KL-DNN produces a more informative u prediction and forecast than the IES method for small training datasets and similar prediction and forecast for larger datasets. Both methods outperform DE-KL-DNN for UQ in predicting and forecasting u. § CONCLUSIONS We proposed the rI-KL-DNN method, an approximate likelihood-free method for quantifying total uncertainty in inverse PDE solutions by sampling the posterior distribution of the parameters. This method enables Bayesian data assimilation in high-dimensional problems using a surrogate model while accounting for uncertainty in the surrogate model and traditional sources of uncertainty due to the PDE model and measurement errors. The rI-KL-DNN method allows for non-Gaussian surrogate model errors. In this work, we used the KL-DNN reduced-order deep learning surrogate model, and the samples of the error distribution are obtained with the rKL-DNN algorithm. In this problem, the likelihood evaluation requires marginalization over the surrogate model parameters, which is computationally unfeasible. Therefore, the sampling methods requiring likelihood evaluations, such as MCMC, cannot be directly used for this problem. The proposed method is compared with the DE and IES methods for the inverse non-linear PDE problem describing groundwater flow in a synthetic unconfined aquifer. We found that the rI-KL-DNN method produces more informative posterior distributions of the parameters and states (larger LPPs and coverage) than the DE-KL-DNN method, which only accounts for uncertainty due to DNN training in the surrogate model. The comparison with the IES method revealed that the rI-KL-DNN yields more informative parameter and state distributions for small training datasets (ensemble sizes in IES) and similar predictions for large training datasets. Our results show that despite inherent uncertainty, surrogate models can be used for parameter and state estimation as an alternative to the inverse methods relying on (more accurate) numerical PDE solvers. § ACKNOWLEDGEMENTS The authors are thankful to James L. McCreight and Joseph D. 
Hughes for their help with generating training datasets and performing MF6 and PEST++ simulations. This research was partially supported by the U.S. Geological Survey under Grant No. G22AP00361, the DOE project “Science-Informed Machine Learning to Accelerate Real-time (SMART) Decisions in Subsurface Applications Phase 2 – Development and Field Validation,” the U.S. Department of Energy (DOE) Advanced Scientific Computing program, and the United States National Science Foundation. Pacific Northwest National Laboratory is operated by Battelle for the DOE under Contract DE-AC05-76RL01830. The views and conclusions contained in this work are those of the authors and should not be interpreted as representing the opinions or policies of the U.S. Geological Survey. Mention of trade names or commercial products does not constitute their endorsement by the U.S. Geological Survey.
http://arxiv.org/abs/2408.11602v1
20240821132042
Emergent broadband polarization entanglement from electronic and phononic four-wave mixing indistinguishability
[ "Diego Sier", "Lucas Valente", "Tiago A. Freitas", "Marcelo F. Santos", "Carlos H. Monken", "Raul Corrêa", "Ado Jorio" ]
quant-ph
[ "quant-ph", "cond-mat.other", "physics.optics" ]
adojorio@fisica.ufmg.br ^1Departamento de Física, Universidade Federal de Minas Gerais, Belo Horizonte, MG 30123-970, Brazil ^2IDOR/Pioneer Science Initiative, Rio de Janeiro, RJ 22281-010, Brazil ^3Instituto de Física, UFRJ, Rio de Janeiro, RJ 21941-972, Brazil ^4These two authors contributed equally. § ABSTRACT Recently [PRA 108, L051501 (2023)], it has been shown that in a centrosymmetric cubic system, two-photons from a broadband intense laser field can be converted into a pair of Stokes and anti-Stokes entangled photons. Here we properly explain, demonstrate, quantify (for diamond) and explore the possibilities offered by such system, designing an entanglement map based on changes in the light-matter system. In particular, we show how the broadband polarization entanglement, that emerges from the interference between electronic and phononic degrees of freedom in the four-wave mixing process, depends on parameters such as Stokes-anti-Stokes Raman shift, scattering geometry and laser bandwidth, opening the avenue of exploration of such phenomenon in information processing. Emergent broadband polarization entanglement from electronic and phononic four-wave mixing indistinguishability Ado Jorio^1 August 26, 2024 =============================================================================================================== The operation of converting incoming photon(s) into different outgoing photon(s) in a medium is dictated by its susceptibility χ <cit.>. In centrosymmetric materials, the second-order susceptibility is null and the non-linear response is usually dominated by the third-order tensor <cit.>. The χ^(3) enables two photons from a broadband intense laser field to interact with the medium to produce Stokes and anti-Stokes shifted quanta (ħω_L + ħω_L' = ħω_S +ħω_aS), known as the SaS photon pair <cit.>. While the non-classical correlations between SaS photons have been widely established <cit.>, the presence of polarization entanglement in such fundamental four-wave mixing process has been demonstrated only very recently, in diamond <cit.>. This recent result indicates that the fundamental four-wave mixing (FWM) polarization entanglement comes from the indistinguishability between electronic (e-FWM) and phononic (p-FWM) mediated processes inside the centrosymmetric material. It should, therefore, be possible to tailor the SaS state by playing with state preparation parameters, such as the scattering geometry, the SaS Raman shift and the laser bandwidth <cit.>. In fact, the interplay of the SaS scattering properties should generate a map of different degrees of polarization entanglement depending on the balance between the electronic and phononic degrees of freedom. Properly demonstrating, quantifying and exploring these possibilities is the goal of this paper. The correlated SaS scattered intensity can be obtained by the electric polarization in the material <cit.> P_i (ω_aS) ∝χ^(3)_ijkl (-ω_aS, ω_L, ω_L', -ω_S) ℰ_j (ω_L) ℰ_k (ω_L') E_l^† (ω_S), where ℰ and E represent the laser and scattered modes, respectively, and indexes {i,j,k,l} represent polarization directions. The laser modes are occupied by an intense classical field, which is unaffected by the scattering of a few photons, while the Stokes and anti-Stokes modes start in the vacuum. The incident laser spectral amplitude is written as ℰ_j(ω_L) = ℰ_0j G(ω_L), G(ω_L) describing the laser spectral distribution with ∫_0^∞ |G(ω_L)|^2 dω_L = 1. 
The probability amplitude that describes the SaS pair scattered at frequencies ω_S and ω_aS in a polarization pure state |ψ⟩ is obtained by calculating ⟨ 0| E_l (ω_S) E_i (ω_aS) | ψ⟩ + ⟨ 0| E_i (ω_aS) E_l (ω_S) | ψ⟩ <cit.>, and the two-photon scattering amplitude probability at the same frequencies reads Ψ_li(ω_S, ω_aS) = C ℰ_0jℰ_0k∫_-ω_S^∞ G (ω_S +ω) G (ω_aS -ω) ×[ A^E_ijkl + A^R_ijklγ/ω_ph -ω +iγ/2 ] dω , where the total FWM susceptibility (in brackets) is composed of a constant electronic and a resonant phononic component <cit.>. The diamond structure belongs to the O_h point group <cit.>. In this case, χ^(3)_ijkl has only four independent terms <cit.>, which are χ^(3)_xxxx, χ^(3)_xxyy, χ^(3)_xyxy, χ^(3)_xyyx (assuming light propagating in the z direction and that the x and y directions coincide with the crystallographic axes). The frequency dispersion of the electronic contribution e-FWM can be neglected when it is very far from any electronic resonance, so that χ^(3) E_ijkl(-ω_aS, ω_L, ω_L', -ω_S) = A^E_ijklδ(ω_L +ω_L' -ω_S -ω_aS) becomes a complex constant tensor fulfilling energy conservation. Ideally, the ratio between the e-FWM components in centrosymmetric materials is χ^(3) E_xxxx≈ 3 χ^(3) E_xyyx≈ 3 χ^(3) E_xxyy≈ 3 χ^(3) E_xyxy <cit.>. However, experimental conditions are rarely ideal and we will only take these values as a reference. The p-FWM susceptibility tensor, on the other hand, is known to be χ^(3) R_ijkl∝∑_σ (α^R_ij,σα^R_kl,σ + α^R_ik,σα^R_jl,σ ) <cit.>, where α^R_ij,σ is the polarizability Raman tensor that describes the scattering of an incident electric field polarized at i into a scattered mode polarized at j, via a phonon σ. The Raman active vibrational mode of diamond belongs to the T_2g irreducible representation of the O_h point group. Due to the relatively small probability of formation of SaS pairs, the spectral features of the phononic contribution to the susceptibility can be calculated from perturbation theory <cit.>, and A^R_ijkl in Eq. (<ref>) is the tensorial part of χ^(3) R_ijkl. The frequency dependence is composed of a Lorentzian-shaped amplitude of the Stokes scattering phonon frequency, ω = ω_L -ω_S, around the resonance ω_ph with width γ/2 (γ is the phonon decay rate, inversely proportional to its lifetime). This susceptibility is the same as when the scattered fields are classical <cit.>, being an extrapolation of the stimulated Raman scattering to the regime where the Stokes stimulation is turned off. The SaS scattering amplitude is widened by the laser bandwidth, which adds an energy uncertainty to the scattered photons in both electronic and phononic processes. The factor C in Eq. (<ref>) contains the efficiency of the FWM scattering in polarizations l (Stokes) and i (anti-Stokes), and depends on the scattered field frequencies and the scattering angles. However, we can consider C a constant in our experiments because we only collect pairs in forward scattering and because the laser bandwidth and phonon spectrum, taken care separately in Eq. (<ref>), dominate any other frequency dependence. Assuming a Gaussian amplitude spectrum for the laser, G(ω_L) = (π W^2)^-1/4 e^-(ω_L -ω_c)^2 / 2W^2, centered around the angular frequency ω_c with width W (the laser power spectrum FWHM is 2√(ln 2) W/(2π)), the integral (<ref>) can be solved analytically. With the probability amplitude (<ref>) we can construct the SaS two-photon state generated by this FWM process. 
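For concreteness, the spectral part of the two-photon amplitude, i.e., the convolution integral above of the Gaussian laser spectrum with the constant electronic plus Lorentzian phononic susceptibility, can be evaluated numerically as in the sketch below. The parameter values (wavenumber units, a diamond-like phonon at 1332 cm^-1, γ = W/24) and the placeholder tensor amplitudes A_E and A_R are illustrative only, and the overall prefactor is dropped.

import numpy as np

W, GAMMA, OMEGA_PH = 264.0, 11.0, 1332.0   # laser width, phonon decay rate, phonon frequency
A_E, A_R = 1.0, 1.0                        # placeholder electronic / Raman amplitudes

def G(omega):
    """Gaussian laser spectral amplitude, centred on omega_c = 0."""
    return (np.pi * W**2) ** (-0.25) * np.exp(-omega**2 / (2.0 * W**2))

def sas_amplitude(omega_s, omega_as, n_grid=20001):
    """Spectral factor of the SaS amplitude for Stokes/anti-Stokes frequencies
    measured from the laser centre (relative amplitude only)."""
    centres = np.array([-omega_s, omega_as])   # peaks of G(omega_s + w) and G(omega_as - w)
    w = np.linspace(centres.min() - 8 * W, centres.max() + 8 * W, n_grid)
    chi = A_E + A_R * GAMMA / (OMEGA_PH - w + 0.5j * GAMMA)
    f = G(omega_s + w) * G(omega_as - w) * chi
    dw = w[1] - w[0]
    return np.sum(0.5 * (f[:-1] + f[1:])) * dw   # trapezoidal rule

if __name__ == "__main__":
    for shift in (1200.0, 1332.0, 1400.0):       # symmetric Stokes/anti-Stokes Raman shifts
        psi = sas_amplitude(-shift, shift)
        print(f"shift {shift:6.1f} cm^-1   |Psi|^2 = {abs(psi) ** 2:.3e}")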
We assume the laser polarization to be vertical (V) with respect to the laboratory, denoted by 𝐞̂_V. If the crystallographic axis x is also along the vertical direction, then 𝐞̂_V = 𝐱̂, and the susceptibility χ^(3)_ixxl will govern the photon pair production. In general, however, if the crystallographic axis x and the laser polarization have an angle θ between them, such that 𝐞̂_V = cos(θ) 𝐱̂ + sin(θ) 𝐲̂, there will be a coherent sum of certain components of the susceptibility tensor defining the scattered photons polarization. By writing the laser field components as ℰ_0x = ℰ_0 cosθ and ℰ_0y = ℰ_0 sinθ, the scattered (non-normalized) state can be written as |Ψ_SaS⟩ (ω_S, ω_aS; θ) = [ Y^E_VV(θ) f^E + Y^R_VV(θ) f^R ] |VV⟩ + [ Y^E_HH(θ) f^E + Y^R_HH(θ) f^R ] |HH⟩ + [ Y^E_VH(θ) f^E +Y^R_VH(θ) f^R ] (|VH⟩ + |HV⟩), where f^η(ω_S, ω_aS) carries the spectral dependence of the scattering for η = {E, R}, being f^E ≡ e^-ω̅^2/W^2 and f^R ≡ e^-ω̅^2/W^2γ e^ -(Ω -iγ/2)^2 / W^2 /2i √(π)Werfc( γ/2W +i Ω/W), with ω̅≡ (ω_S +ω_aS)/2 -ω_c and Ω≡ (ω_aS -ω_S)/2 -ω_ph, and where Y^η_li (θ) are functions that depend on the angle θ and the A^η_ijkl tensor components, Y^η_VV (θ) ≡ Cℰ_0^2 [ (𝒮^4 +𝒞^4) A^η_xxxx + 2 𝒮^2𝒞^2 (A^η_xyyx + A^η_xxyy + A^η_xyxy) ], Y^η_HH (θ) ≡ Cℰ_0^2 [ 2𝒮^2𝒞^2 A^η_xxxx + (𝒮^4 +𝒞^4) A^η_xyyx -2𝒮^2𝒞^2 (A^η_xxyy + A^η_xyxy) ], Y^η_VH (θ) ≡ C ℰ_0^2 (𝒮^2 -𝒞^2) 𝒮𝒞 ×( A^η_xxxx - A^η_xyyx - A^η_xxyy - A^η_xyxy), where 𝒮≡sinθ and 𝒞≡cosθ. Equation (<ref>) means that the angle θ drives the balance between the contribution of the e-FWM spectrum in f^E and the p-FWM spectrum in f^R, and this balance will be different for each pair of scattered photons polarization VV, HH and VH (equal to HV), depending on how the tensor components are combined. The state in Eq. (<ref>) is, in general, entangled in polarization. In particular, for θ = 0^∘, Y^η_VH (0^∘) = 0, and {|VV⟩, |HH⟩} form a Schmidt basis <cit.>, meaning that if the amplitudes of these vector components are equal, state |Ψ_SaS⟩ (<ref>) is maximally entangled, while if one of them is zero, it is separable. In this sense, the problem of producing a state with a high amount of polarization entanglement becomes a matter of tailoring the relative amplitude of the two components of this Schmidt basis. To obtain a prediction for the SaS state generated as a function of θ, one needs to characterize the tensor components A^η_ijkl in the functions Y^η_li (θ), Eq. (<ref>). The functions f^η in Eq. (<ref>) are known and only depend on the laser central frequency ω_c, bandwidth W, the phonon frequency ω_ph and decay rate γ, which are given experimental conditions. Measuring the relative scattered intensities into |HH⟩ and |VV⟩ in θ = 0^∘ and θ= 45^∘ (angles in which there are no |VH⟩ and |HV⟩ components) allows us to retrieve the values of A^η_ijkl. Next, we present an experiment that was used to extract these quantities in diamond, and show how they translate into SaS entanglement. The experimental setup utilized is similar to the one described in <cit.>, with the addition of a monochromator to filter the Stokes signal <cit.>. The polarization of the laser is vertical (V) with respect to the laboratory and, for each monochromator position (each selected δω = ω_c - ω_S), one histogram of temporal difference is obtained in 300 seconds of acquisition. 
This scanning procedure is done for two orientations of the sample (θ = 0^∘ and 45^∘), and for each sample orientation the spectrum is obtained with the polarization of the photons incident on the avalanche photodiode (APD) detectors being selected in two ways—a spectrum with Stokes and anti-Stokes photons with vertical polarization (VV) and a spectrum with both photons with horizontal polarization (HH). The sample utilized was a diamond grown by a CVD process (Type IIac, 100-oriented, from Almax) positioned so that the laser propagates in the (001) direction of the crystal. The intensity spectra of correlated SaS pairs for VV (blue dots) and HH (red dots) polarization at θ=0^∘ and θ=45^∘ are displayed in Fig. <ref> (a) and (b), respectively. According to group theory analysis, VV(0^∘) configuration does not exhibit a Raman contribution (A^R_xxxx = 0 in Eq. (<ref>)) and only involves non-resonant electronic transitions. For this reason, the blue curve shown in Fig. <ref> (a) corresponding to this configuration is flat. Additionally, Fig. <ref> (a) shows a light blue solid line, obtained by averaging the intensity values I_SaS^corr for this experimental configuration, ⟨ I_SaS (VV(0^∘)⟩ = (27.5 ± 3.5)× 10^3. Since Y^E_VV(0^∘) = Cℰ_0^2A^E_xxxx and in our model it is proportional to ⟨ I_SaS (VV(0^∘)⟩^1/2, we have a reference value for A^E_xxxx. By looking at the other relevant scattering geometries, we obtain the other tensorial components of Eq. (<ref>) in relation to A^E_xxxx. The HH(45^∘) configuration also exhibits only e-FWM, as seen in the red data in Fig <ref> (b). The mean and standard deviation of the counts gives (1.60 ± 0.28) × 10^3. The HH(0^∘) data in Fig. <ref> (a) (red points) contains p-FWM with the characteristic resonance signature, which comes from A^R_xyyx≠ 0 in Eq. (<ref>b), contributing with a Raman scattering in the |HH⟩ polarization. Using Y^E_HH(0^∘) = ⟨ I_SaS^corr (VV(0^∘)) ⟩^1/2 (0.68-i0.12) and Y^R_HH(0^∘)=51450 fits the data with the shown theoretical curve in Fig. <ref> (a). The VV(45^∘) data is the last configuration left, and it is shown in Fig. <ref> (b) in blue. In order to fit it, we fix the Raman factor to be the same as in the HH(0^∘) configuration, Y^R_VV(45^∘) = 51450, and use Y^E_VV(45^∘) = ⟨ I_SaS^corr (VV(0^∘)) ⟩^1/2 (1.61-i0.55). With the above values and working with Eq. (<ref>) we obtain the electronic and Raman A^η_ijkl components, summarized in Table <ref>. For completeness the second-order correlation function g^(2)(0) is evaluated with the ratio [I^corr_SaS(Δτ =0) +I̅_SaS(Δτ≠ 0)]/I̅_SaS(Δτ≠ 0), where I^corr_SaS(Δτ = 0) are the measurements in Fig. <ref> (a) and (b), and I̅_SaS(Δτ≠ 0) accounts for the uncorrelated pair production. A plot of g^(2)(0) is shown in Fig. <ref>(c,d), where it is evident that near the resonance the values are the lowest, accounting for the high number of uncorrelated SaS photon pairs produced. Furthermore, the curves for HH(0^∘) and VV(45^∘) show a drop in g^(2)(0), being higher below and lower above the resonance region, while the VV(0^∘) and HH(45^∘) keep roughly the same value. The asymmetry in HH(0^∘) and VV(45^∘) is associated with the uncorrelated counts being symmetrical with respect to the resonance peak, and thus the asymmetry in the correlated counts for these configurations, which contain a p-FWM contribution, is transferred to the g^(2)(0) curves. 
This asymmetry is a result of the constructive (δω < ω_ph) versus destructive (δω > ω_ph) interference between the e-FWM and the p-FWM, which explains the Cooper-pair-like behavior of I_SaS <cit.>. On the other hand, the correlated counts in the VV(0^∘) and HH(45^∘) configurations are a purely e-FWM contribution, and thus are a flat curve, leading to an overall symmetric g^(2)(0). With the values of A^η_ijkl in hand, one can predict what is the entanglement in the SaS scattered state |Ψ_SaS⟩ (θ) of Eq. (<ref>) for given values of ω_S and ω_aS. We use, as an entanglement measure, the entropy of entanglement E = -Tr_i(ρ_i log_2 ρ_i) of either subsystem i = {S, aS} with reduced state ρ_i <cit.>, which, since our global state is pure, is zero for separable states and unity for maximally entangled states. In Fig. <ref> we show a contour plot of E(ω_S,ω_aS;θ, W), under the condition that ω_aS = 2ω_c - ω_S, that is, a symmetric SaS Raman shift. On the horizontal axis we vary the Raman shift, going through the resonance in diamond at ω_ph/2π = 1332 cm^-1, identified by a black vertical line, and on the vertical axis we plot (a) the angle θ between the laser linear polarization and the crystallographic axis, going from θ=0 to 45^∘ (due to the symmetry of the crystal, all other θ values will be related to this range), and (b) the laser bandwidth W, going from zero to W = 190γ, and where the value for our experiment W = 24 γ (W = 264 cm^-1 or FWHM 70 cm^-1) is indicated by a black horizontal line. The thick red regions indicate the parameters where E → 1, showing maximum entanglement. Maximum entanglement occurs at θ = 0^∘, in a region below resonance close to 1200 cm^-1 and another above, close to 1400 cm^-1. The measurements in Fig. <ref> illustrate this for the 0^∘ and 45^∘ conditions, for which |HH⟩ and |VV⟩ are a Schmidt basis, and the balance between HH and VV counts reflects how close it is to maximum entanglement. At 0^∘ the purely electronic response (VV) is strong, and maximum entanglement occurs where the HH response crosses it with the same amount of scattered photon pairs. Conversely, at 45^∘ the purely electronic response (HH) is weak in comparison with the VV scattering, which contains the Raman response, thus making it impossible for the probability amplitude of the |HH⟩ component to balance with the |VV⟩ one, when it would reach a maximally entangled state. If the laser bandwidth W grows, as shown in Fig. <ref> (b), the shape of the resonance curve is smoothed out, so the ratio between the |VV⟩ and |HH⟩ coefficients gets more and more even along the SaS spectrum. Because of this, the entanglement extremes become less pronounced as W grows, until the point that it is not possible to reach maximum entanglement anymore, at W ≈ 70γ. On the other hand, it also gets harder to obtain a separable state, which depends on a peak of either the |VV⟩ or the |HH⟩ component in relation to the other. In Fig. <ref> there is a hatched region indicating where the uncorrelated SaS pair production is high (low g^(2) region in Fig. <ref> (c,d)), corresponding to 2W, around 1.2 × FWHM. In this region, the scattered state is not properly represented by Eq. (<ref>), but it needs to be complemented by non-FWM scattering events, which involves the scattering of real phonons, that happens within a laser bandwidth around the Raman resonance. Outside this region, though, state (<ref>) is a good representation of the scattered SaS state, and our entanglement measure is representative. 
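Given the amplitudes of the |VV⟩, |HH⟩ and |VH⟩ = |HV⟩ components of the pure state, the entropy of entanglement follows directly from the eigenvalues of the reduced density matrix, as in the sketch below. The example amplitudes are placeholders; for instance, an |HH⟩ to |VV⟩ amplitude ratio of 0.49, the value obtained for the 900 cm^-1 region discussed below, gives E ≈ 0.71.

import numpy as np

def entanglement_entropy(c_vv, c_hh, c_vh):
    """Entropy of entanglement of the pure polarization state
    c_vv|VV> + c_hh|HH> + c_vh(|VH> + |HV>); amplitudes need not be normalized."""
    psi = np.array([[c_vv, c_vh],
                    [c_vh, c_hh]], dtype=complex)   # rows: Stokes V/H, columns: anti-Stokes V/H
    psi /= np.linalg.norm(psi)
    rho = psi @ psi.conj().T                        # reduced state of one photon
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-(p * np.log2(p)).sum())

if __name__ == "__main__":
    print(entanglement_entropy(1.0, 0.49, 0.0))     # HH/VV ratio 0.49 -> E ~ 0.71
    print(entanglement_entropy(1.0, 1.0, 0.0))      # balanced Schmidt basis -> E = 1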
To complete the analysis, we have also drawn level curves for the violation of Bell-type CHSH inequalities as a function of the light-matter parameters. These curves correspond to the so-called Gisin parameter F <cit.> which, for pure states, reads F = 2(1 + 2𝒫)^1/2, where 𝒫 = 1 - Tr_i(ρ_i)^2 is the linear entropy of either subsystem i. For separable states F = 2, which means that no separable state can go above the classical upper bound in the CHSH inequality, equal to 2. It reaches its maximum at F = 2√(2) ≈ 2.83 for maximally entangled states. We plot the F parameter in Fig. <ref> as blue shaded contour curves, and its smallest values within a white contour. Note that the maximal violation coincides with the maximally entangled states near 1200 cm^-1 and 1400 cm^-1. Since the Bell analysis is state dependent, and the generated |Ψ_SaS⟩ state depends on the values of the light-matter parameters, the appropriate Bell angles to reach the maximum violation change for different regions of the map. As an example, the violation in the maximally entangled states near 1200 cm^-1 is obtained for linear polarization angles (0, π/4) for one of the photons and (π/8, 3π/8) for the other, where the angles are in relation to V polarization. Finally, we can localize the result of Ref. <cit.> in Fig. <ref>, represented by a white star. In that work, a state close to a Bell state was found at a symmetric Raman shift of 900 cm^-1 with θ = 0^∘, with an amplitude ratio of |HH⟩ to |VV⟩ of √(0.28/0.72) ≈ 0.62. Calculating the ratio with our state (<ref>) yields 0.49, and the discrepancy comes from our aS filter cutting some of the Raman contribution in HH at 900 cm^-1 (left edge of Fig. <ref> (a)), which does not happen in Ref. <cit.>. This region can be seen in the bottom left of Fig. <ref> (a) and (b), where E ≈ 0.7 and F ≈ 2.5. To conclude, broadband polarization entanglement can be generated in four-wave mixing. The efficiency of the process can be controlled by how close the system is to the resonance with a Raman-active phonon, and it is a matter of state engineering to choose the appropriate laser–crystal angle and SaS frequency in order to balance a high degree of entanglement against a sufficient pair-production rate. The result is explored here for diamond, but it should be general for other centrosymmetric media, including silicon. § ACKNOWLEDGMENTS This work was supported by IDOR/Pioneer Science Initiative (www.pioneerscience.org), CNPq (INCT-IQ 465469/2014-0, 302872/2019-1, 421469/2023-4, 307619/2023-0), FAPEMIG (APQ-01860-22, APQ-04852-23, RED0008123) and FAPERJ (CNE E-26/200.307/2023).
[Bloembergen(1996)] N. Bloembergen, Nonlinear Optics (World Scientific Publishing Company, 1996).
[Boyd(2008)] R. W. Boyd, Nonlinear Optics (Academic Press, 2008).
[Shang and Hsu(1987)] C. Shang and H. Hsu, The spatial symmetric forms of third-order nonlinear susceptibility, IEEE J. Quantum Electron. 23, 177 (1987), https://doi.org/10.1109/JQE.1987.1073327.
[Klyshko(1977)] D. N. Klyshko, Correlation between the Stokes and anti-Stokes components in inelastic scattering of light, Sov. J. Quantum Electron. 7, 755 (1977), https://doi.org/10.1070/QE1977v007n06ABEH012890.
[Parra-Murillo et al.(2016)] C. A. Parra-Murillo, M. F. Santos, C. H. Monken, and A. Jorio, Stokes–anti-Stokes correlation in the inelastic scattering of light by matter and generalization of the Bose-Einstein population function, Phys. Rev. B 93, 125141 (2016), https://doi.org/10.1103/PhysRevB.93.125141.
[Thapliyal and Peřina Jr(2021)] K. Thapliyal and J. Peřina Jr, Ideal pairing of the Stokes and anti-Stokes photons in the Raman process, Phys. Rev. A 103, 033708 (2021).
[Lee et al.(2011)] K. C. Lee, M. R. Sprague, B. J. Sussman, J. Nunn, N. K. Langford, X.-M. Jin, T. Champion, P. Michelberger, K. F. Reim, D. England, et al., Entangling macroscopic diamonds at room temperature, Science 334, 1253 (2011).
[Lee et al.(2012)] K. Lee, B. Sussman, M. Sprague, P. Michelberger, K. Reim, J. Nunn, N. Langford, P. Bustard, D. Jaksch, and I. Walmsley, Macroscopic non-classical states and terahertz quantum processing in room-temperature diamond, Nat. Photonics 6, 41 (2012).
[Kasperczyk et al.(2015)] M. Kasperczyk, A. Jorio, E. Neu, P. Maletinsky, and L. Novotny, Stokes-anti-Stokes correlations in diamond, Opt. Lett. 40, 2393 (2015), https://doi.org/10.1364/OL.40.002393.
[Saraiva et al.(2017)] A. Saraiva, F. S. de A. Júnior, R. M. Souza, A. P. Pena, C. H. Monken, M. F. Santos, B. Koiller, and A. Jorio, Photonic counterparts of Cooper pairs, Phys. Rev. Lett. 119, 193603 (2017), https://doi.org/10.1103/PhysRevLett.119.193603.
[Timsina et al.(2024)] S. Timsina, T. Hammadia, S. G. Milani, F. S. de A. Júnior, A. Brolo, and R. de Sousa, Resonant squeezed light from photonic Cooper pairs, Phys. Rev. Research 6, 033067 (2024).
[Freitas et al.(2023)] T. A. Freitas, P. Machado, L. Valente, D. Sier, R. Corrêa, R. Saito, C. Galland, M. F. Santos, C. H. Monken, and A. Jorio, Microscopic origin of polarization-entangled Stokes–anti-Stokes photons in diamond, Phys. Rev. A 108, L051501 (2023), https://doi.org/10.1103/PhysRevA.108.L051501.
[Levenson and Bloembergen(1974)] M. D. Levenson and N. Bloembergen, Dispersion of the nonlinear optical susceptibility tensor in centrosymmetric media, Phys. Rev. B 10, 4447 (1974), https://doi.org/10.1103/PhysRevB.10.4447.
[Levenson et al.(1972)] M. D. Levenson, C. Flytzanis, and N. Bloembergen, Interference of resonant and nonresonant three-wave mixing in diamond, Phys. Rev. B 6, 3962 (1972), https://doi.org/10.1103/PhysRevB.6.3962.
[Júnior et al.(2019)] F. S. de A. Júnior, A. Saraiva, M. F. Santos, B. Koiller, R. M. Souza, A. P. Pena, R. A. Silva, C. H. Monken, and A. Jorio, Stokes–anti-Stokes correlated photon properties akin to photonic Cooper pairs, Phys. Rev. B 99, 100503(R) (2019), https://doi.org/10.1103/PhysRevB.99.100503.
[Smith and Raymer(2006)] B. J. Smith and M. G. Raymer, Two-photon wave mechanics, Phys. Rev. A 74, 062104 (2006), https://doi.org/10.1103/PhysRevA.74.062104.
[Nielsen and Chuang(2010)] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information (Cambridge, 2010).
[sup()] See Supplemental Material at [url will be inserted by publisher] for experimental and theoretical fitting details.
[Gisin(1991)] N. Gisin, Bell's inequality holds for all non-product states, Phys. Lett. A 154, 201 (1991), https://doi.org/10.1016/0375-9601(91)90805-I.
[Chefles and Barnett(1997)] A. Chefles and S. Barnett, Diagonalisation of the Bell–CHSH operator, Phys. Lett. A 232, 4 (1997), https://doi.org/10.1016/S0375-9601(97)00395-2.
http://arxiv.org/abs/2408.11360v1
20240821060455
Perturbing scattering resonances in non-Hermitian systems: a generalized Wigner-Smith operator formulation
[ "Niall Byrnes", "Matthew R. Foreman" ]
physics.optics
[ "physics.optics", "math-ph", "math.MP", "physics.comp-ph" ]
APS/123-QED []matthew.foreman@ntu.edu.sg ^1School of Electrical and Electronic Engineering, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798 ^2Institute for Digital Molecular Analytics and Science, 59 Nanyang Drive, Singapore 636921 § ABSTRACT Resonances of open non-Hermitian systems are associated with the poles of the system scattering matrix. Perturbations of the system cause these poles to shift in the complex frequency plane. In this work, we introduce a novel method for calculating shifts in scattering matrix poles using generalized Wigner-Smith operators. We link our method to traditional cavity perturbation theory and validate its effectiveness through application to complex photonic networks. Our findings underscore the versatility of generalized Wigner-Smith operators for analyzing a broad spectrum of resonant systems and provides new insight into resonant properties of non-Hermitian systems. Perturbing scattering resonances in non-Hermitian systems: a generalized Wigner-Smith operator formulation Niall Byrnes^1, and Matthew R. Foreman^1,2 August 26, 2024 ========================================================================================================== Introduction.—Resonance is a fundamental and ubiquitous physical phenomenon that plays a key role in fields as diverse as quantum mechanics <cit.>, structural engineering <cit.>, electromagnetics <cit.>, and fluid dynamics <cit.>. In closed systems, resonances are associated with normal modes, which correspond to the orthogonal eigenvectors of the system's Hermitian Hamiltonian. The associated real eigenvalues define the spectrum of permissible oscillation frequencies or energy levels <cit.>. Solutions of eigenvalue problems, however, are not in general analytically possible. Approximate methods, in which the system of interest is modelled by perturbing a solvable reference problem, are therefore frequently employed. Rayleigh-Schrödinger perturbation theory, in which the perturbed solution is written as a series expansion over all unperturbed eigenstates, is perhaps the most famous approach <cit.>, however a multitude of alternate theories have been developed. Brillouin-Wigner perturbation theory, for example, can provide better convergence properties for larger perturbations if the spectrum comprises of mostly bound states <cit.>. Alternatively, many-body perturbation theory focuses on large numbers of particles, whereby particle interactions are considered as a perturbation to a non-interacting system <cit.>. Although closed systems are useful theoretical ideals, in reality a system interacts with its external environment. The coupling of normal modes to external degrees of freedom introduces potential loss channels, which can alter the nature and energy of resonant modes. So-called open, or scattering, systems can no longer be described by a Hermitian Hamiltonian and a coupling operator must be introduced to form an effective non-Hermitian Hamiltonian <cit.>. Alternatively, one can use the closely related scattering matrix, 𝐒, which derives from the resolvent of the effective Hamiltonian <cit.>. The resonant states of the system, now termed quasi-normal modes, are then identified from the poles of the scattering matrix when analytically continued into the complex frequency plane <cit.>. Loss to the environment pushes the resonant frequencies off the real axis, resulting in complex-valued eigenfrequencies ω_p, where (ω_p) and -(ω_p)/2 are the resonant frequency and linewidth respectively. 
Open systems are known to possess a number of interesting resonant phenomena and scattering anomalies, such as coherent perfect absorption <cit.>, exceptional points <cit.>, and bound states in the continuum <cit.>. Moreover, resonances in open systems are susceptible to environmental variations, such as changes in temperature, external electromagnetic fields or material composition, a concept underpinning many sensing technologies <cit.>. Perturbed non-Hermitian scattering systems can exhibit additional unique behavior, such as anomalous resonance shifts <cit.> and nonlinear sensitivity at exceptional point resonances <cit.>. Evaluation of resonant modes and their properties therefore remains an important task in, for example, evaluating state lifetimes, laser dynamics and sensor sensitivity. Traditional perturbation theories, however, are not directly applicable to open systems due to mathematical complexities introduced by quasi-normal modes, which lack orthogonality and grow exponentially away from system <cit.>. Resonances with high quality factors only couple weakly to the environment, such that these effects are minor. In the electromagnetic domain, it has been shown that in such cases resonance shifts can be calculated in terms of changes in the stored electromagnetic energy <cit.>. The validity of such expressions however is limited, and, in the presence of significant losses, more careful consideration of the normalization of quasi-normal modes <cit.> or alternative mode expansions <cit.> become necessary. In this paper, we present a novel formulation for determining resonance shifts due to arbitrary system perturbations based on generalized Wigner-Smith operators. In the case of high quality factor resonances, we show that these operators are related to a system's stored and dissipated energy, in full agreement with results from traditional perturbation theory. Our theory is tested against numerical simulations of resonant scattering in random photonic networks. In an accompanying paper <cit.>, we apply our theory to low quality factor resonances and present further numerical examples. Perturbation theory.—An important tool commonly employed in the analysis of scattering systems is the Wigner-Smith time delay matrix 𝐐_ω = -i𝐒^-1𝐒ω, where ω denotes the partial derivative with respect to frequency ω <cit.>. 𝐐_ω has several interesting properties relevant to resonant scattering. For unitary systems at real ω, 𝐐_ω is Hermitian and its eigenvalues are associated with well-defined time delays experienced by narrow-band, transient system excitations. Evaluated at scattering matrix poles, these time delays coincide with the lifetimes of the associated resonances, demonstrating a link between 𝐐_ω and a system's quasi-normal modes <cit.>. In electromagnetic theory, it has also been shown that diagonal elements of 𝐐_ω can be expressed as energy-like integrals over the extent of the system <cit.>, highlighting the connection to mode volume. By itself, however, 𝐐_ω lacks the specificity required to efficiently capture localized or parametric system perturbations. For this reason, we also consider the so-called generalized Wigner-Smith operator 𝐐_α, defined by Eq. (<ref>) with the replacement ω→α, where α is an arbitrary variable. Operators of this kind have recently attracted significant attention in the optical domain as a tool for engineering light in complex scattering environments <cit.>. 
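Numerically, both operators can be approximated from any routine returning S as a function of (ω, α), using Q_ξ = -i S^{-1} ∂_ξ S with central finite differences. The toy two-channel scattering matrix below (a global delay times a channel rotation) is chosen only because its operators are known in closed form; it is not the network model considered later, and all parameter values are illustrative.

import numpy as np

def wigner_smith(S, omega, alpha, which, h=1e-6):
    """Q_xi = -i S^{-1} dS/dxi by central finite differences.
    S(omega, alpha) must return the scattering matrix as a complex ndarray;
    `which` selects the derivative variable, 'omega' or 'alpha'."""
    if which == "omega":
        dS = (S(omega + h, alpha) - S(omega - h, alpha)) / (2.0 * h)
    else:
        dS = (S(omega, alpha + h) - S(omega, alpha - h)) / (2.0 * h)
    return -1j * np.linalg.solve(S(omega, alpha), dS)

TAU = 2.5  # global delay (ps)

def S_toy(omega, alpha):
    """Global delay exp(i omega tau) times a rotation of the two channels by alpha."""
    rot = np.array([[np.cos(alpha), -np.sin(alpha)],
                    [np.sin(alpha),  np.cos(alpha)]], dtype=complex)
    return np.exp(1j * omega * TAU) * rot

if __name__ == "__main__":
    omega0 = 2 * np.pi * 545.0                      # rad/ps, near a 550 nm carrier
    Qw = wigner_smith(S_toy, omega0, 0.3, "omega")
    Qa = wigner_smith(S_toy, omega0, 0.3, "alpha")
    print("tr Q_omega / 2 =", np.trace(Qw).real / 2, "(expected", TAU, ")")
    print("eigenvalues of Q_alpha:", np.linalg.eigvals(Qa))   # expected +1 and -1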
Importantly, by virtue of its generalized nature, 𝐐_α allows for more targeted descriptions of specific features or components of a system. For example, if α represents the refractive index of an isolated scatterer, 𝐐_α can be expressed in terms of the integrated optical intensity within the scatterer volume <cit.>. The generalized Wigner-Smith operator contains sufficient information to predict resonance shifts induced by arbitrary system perturbations. To show this, we consider a generic scattering system described by a scattering matrix 𝐒, whose poles are assumed to be of order one. Let α denote some system parameter, such as the temperature, shape or size of some component of the system. If ω_p denotes a complex resonant frequency and the system is perturbed so that α changes value to α', the resonant frequency will shift to ω_p'. To determine the change in the resonant frequency Δω_p = ω_p' - ω_p that is induced by the perturbation Δα = α' - α, consider the function f(ω, α) = [𝐒^-1(ω, α)], whose zeros coincide with the poles of 𝐒. Assuming the perturbation is small, a first order, multivariate Taylor expansion of f gives f' = f + (ω'-ω) fω + (α' - α)fα, where f = f(ω, α) and f' = f(ω', α'). Ideally, we would evaluate Eq. (<ref>) at ω = ω_p, ω' = ω'_p and solve for Δω_p. Some care is required, however, in recasting the result back in terms of 𝐒, whose elements diverge at ω_p. This problem can be mitigated by calculating the residue of each term on the right hand side of Eq. (<ref>) at ω=ω_p, which ultimately yields (see Supplemental Material for more details <cit.>) Δω_p = iΔαω_p(𝐐_α) = -Δαω_p(𝐐_α)/ω_p(𝐐_ω), where denotes the trace operator and ω_p denotes the residue at the pole ω_p. As can be seen from Eq. (<ref>), for a given perturbation Δα, the pole shift Δω_p is completely determined by ω_p(𝐐_α), which, we emphasize, should be evaluated for the unperturbed network. We also observe that the form of Eq. (<ref>) implies the ratio of the residues of the Wigner-Smith operators effectively behaves as a condition number for ω_p when thought of as a function of α <cit.> . Eq. (<ref>) is our key result, which, importantly, was derived on purely mathematical grounds, and is therefore applicable to a broad range of physical scenarios. To gain physical insight into Eq. (<ref>), we consider now a class of optical systems composed of arbitrary, finite interior regions coupled to the environment by a finite number of dielectric waveguides. Examples of such systems include fiber Bragg resonators <cit.>, complex networks <cit.> and ring resonators <cit.>. Note that Eq. (<ref>) can be written in the limit form Δω_p = lim_ω→ω_p(-Δα(𝐐_α)/(𝐐_ω)), and so the pole shift is given approximately by the expression within the limit evaluated at ω close to ω_p. We assume for simplicity that the system permittivity ϵ is real and isotropic and that the permeability μ_0 is that of free space. Assuming also that the resonances under consideration have high quality factors, such that (ω) ≪(ω), it is possible to show that <cit.> 𝐐_ξ =(𝐈 + 2(ω)∫_Ω(ϵ𝐔^e + μ_0𝐔^m) dV)^-1(-∫_Ω[(ωϵ)ξ𝐔^e +ωξμ_0𝐔^m + 2i(ω)(ϵ𝐕_ξ^e +μ_0𝐕_ξ^m)] V), where ξ represents either ω or α and Ω denotes the volume occupied by the system with a boundary ∂Ω perforated only by the coupling waveguides. In Eq. 
(<ref>), 𝐈 is the identity matrix and 𝐔^e, 𝐔^m, 𝐕^e_ξ, and 𝐕^m_ξ are matrices whose (q,p)–th elements are given by U^e_qp = 1/4𝐄_p·𝐄^*_q, U^m_qp = 1/4𝐇_p·𝐇^*_q, V^e_ξ,qp = 1/4𝐄_pξ·𝐄^*_q, and V^m_ξ,qp = 1/4𝐇_pξ·𝐇^*_q respectively, where the fields 𝐄_p and 𝐇_p (𝐄_q and 𝐇_q) are those that exist throughout Ω when the system is illuminated by the p–th (q–th) incident field. Here, p and q should be understood as enumerating all modes in all waveguides that connect the system to the environment. As discussed in the Supplemental Material <cit.>, the form of 𝐐_ξ in Eq. (<ref>) is linked to the factorization 𝐐_ξ = 𝐀^-1𝐁_ξ, where 𝐀 = 𝐒^†𝐒 and 𝐁_ξ = -i𝐒^†𝐒ξ. We shall analyze these factors separately. First, to further simplify matters, we consider the case where the system has a single coupling waveguide supporting a single mode, so that the scattering matrix reduces to an effective scalar reflection coefficient r_eff. Correspondingly, the Wigner-Smith operator 𝐐_ξ reduces to the scalar quantity Q_ξ = A^-1B_ξ, where A = |r_eff|^2 and B_ξ = -ir^*_effr_effξ. To evaluate these expressions at the complex frequency ω, we employ a useful transformation to recast the problem into one at a real frequency, but with modified material parameters. Introducing ϵ̃= ϵ[1 + i(ω)/(ω)] and μ̃= μ_0[1 + i(ω)/(ω)], we can show that <cit.> A - 1 = (ω)/2∫_Ω[(ϵ̃)|𝐄|^2 + (μ̃)|𝐇|^2] V, where the fields should now be understood to be oscillating at the real frequency (ω). Note that the non-zero imaginary parts of ϵ̃ and μ̃ introduce virtual gain or loss to the system, which is an artifact of the fact that ω is actually complex. Eq. (<ref>) is an energy balance relation for the system, since the integral on the right hand side describes the energy dissipated (gained) within the system by the virtual loss (gain) <cit.>. B_ω meanwhile is a complex quantity whose real part is given by <cit.> (B_ω) = -1/4∫_Ω [((ω) ϵ)(ω)|𝐄|^2 + μ_0|𝐇|^2 -2(ω)((ϵ̃) (𝐄(ω)·𝐄^*) + (μ̃) (𝐇(ω)·𝐇^*))] V, which, up to constant factors, is equal to the system's stored electromagnetic energy <cit.>. Frequency derivatives in Eq. (<ref>) are necessary to properly account for the effects of material dispersion. On the other hand, (B_ω) = -A(ω)/2 describes energy dissipation within the system. Similar expressions have been used to account for material losses in resonant systems <cit.>. Consider now B_α and suppose that the system perturbation has the effect of modifying the permittivity in a localized region Ω_α⊂Ω. This might be caused by, for example, a local pressure change or a particle binding to the system. If the system's internal field distributions are only weakly affected by the perturbation, then V^e_α = V^m_α≈ 0 and we have B_α≈ω/4∫_Ωϵα|𝐄|^2 V. Recalling the form of the right hand side of Eq. (<ref>), note that the numerator will contain the factor Δα B_α, which, in light of Eq. (<ref>), will involve the product Δαϵα. By assumption, outside of Ω_α, ϵα = 0, while within Ω_α we have Δαϵα = Δϵ, where Δϵ is the change in ϵ caused by the perturbation. Eq. (<ref>) therefore reduces to Δω_p/ω_p = - 1/B_ω∫_Ω_αΔϵ|𝐄|^2 V. Since we have assumed high quality factor resonances, the previously discussed virtual gain or dissipation will be weak, such that B_ω≈(B_ω) and Eq. (<ref>) is thus in full agreement with standard cavity perturbation theory <cit.>. It is important to stress that Eq. (<ref>) is only strictly valid for infinite quality factor resonances, but is a reasonable approximation when loss is weak. In contrast, Eq. 
(<ref>) holds more generally, since no restrictive assumptions were made in its derivation. For further analysis of the case of low quality factor resonances, we refer the interested reader to Ref. <cit.>. Numerical examples.—We now turn to demonstrating the validity of our results with numerical simulations. As an example system, we consider photonic networks consisting of randomly connected, single-mode dielectric waveguides, which have recently been investigated as a platform for random lasing <cit.> and in integrated photonic circuits <cit.>. Such systems are relatively simple to model and have non-trivial spectra. An example network, shown in Figure <ref>, was generated by Delaunay triangulation of a random collection of coplanar points (the purple, or `internal' nodes). An additional layer of `external' nodes (green) connect to some of the internal nodes at the outer edges of the network and serve as entry points to the network, allowing one to define the network scattering matrix 𝐒. Given knowledge of the scattering properties of the network's nodes and links, 𝐒 and its derivatives can be calculated and thus 𝐐_ξ evaluated directly <cit.>. In the supplemental material <cit.>, we also demonstrate how 𝐐_ξ can be calculated numerically from Eq. (<ref>). With our network, we searched for scattering matrix poles with optical frequencies (ω) in the vicinity of 3425 THz, corresponding to a vacuum wavelength of about 550 nm. The links were assumed to be made of BK7 glass with refractive index n calculated using a standard Sellmeier equation <cit.>. Dispersion was weak in our example and n ≈ 1.5185 was approximately constant over the range of frequencies considered. Propagation of light through each link was described by exponential factors of the form e^± iω nL/c, where L is the link length. Our network was micro-scale in size with an average link length of 60 μm. For simplicity, each internal node was given a randomly generated, frequency-independent scattering matrix drawn from the circular orthogonal ensemble to enforce energy conservation and reciprocity <cit.>. Although not fully realistic, our model was sufficient to produce a complex scattering system with which our theory could be tested. To perturb the network, we isolated a segment of a randomly chosen link (the red, `perturbed' segment or Ω_α in Figure <ref>) and varied its refractive index relative to the rest of the link. Specifically, the refractive index of the segment n_s was given by n_s = n + Δ n, where Δ n was incrementally increased from 10^-5 to 10^-2. Note that the final value Δ n = 10^-2 corresponded to a total phase shift in the segment ϕ = ωΔ n L /c equal to several multiples of 2π. We further introduced two virtual nodes where the perturbed segment met the rest of the link, which were given standard Fresnel scattering matrices based on the refractive index mismatch. Figure <ref> depicts the variation of log[(𝐒)] in the complex ω plane for the unperturbed network, i.e., Δ n = 0. Bright regions, marked with white dots and crosses, correspond to the poles of the unperturbed network scattering matrix. Figures <ref>(a)-(d) show detailed plots of the smaller regions bounded by the white dashed boxes in the main panel. The positions of a subset of the poles, specifically those marked with white crosses, were tracked as Δ n was increased. The trajectories followed are shown by the solid red and white lines emanating from the initial pole positions (crosses). 
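Tracking a pole as a parameter is stepped amounts to repeatedly applying the limit form of the pole-shift formula, Δω_p ≈ -Δα tr(Q_α)/tr(Q_ω), evaluated at a frequency close to the current pole estimate. The sketch below demonstrates the idea on a single-channel two-mirror cavity whose pole is known in closed form; the cavity is a stand-in for the network, and the reflectivities, length and BK7-like index are illustrative, allowing the prediction to be checked against the exact shift.

import numpy as np

C0, L_CAV, R1, R2 = 299.792458, 60.0, 0.6, 0.9     # um/ps, um, amplitude reflectivities

def S_cavity(omega, n):
    """Reflection off a two-mirror cavity of length L_CAV filled with index n."""
    phase = np.exp(2j * omega * n * L_CAV / C0)
    return np.array([[(R1 - R2 * phase) / (1.0 - R1 * R2 * phase)]], dtype=complex)

def trace_Q(omega, alpha, which, h=1e-5):
    """tr(Q_xi) with Q_xi = -i S^{-1} dS/dxi, by central finite differences."""
    if which == "omega":
        dS = (S_cavity(omega + h, alpha) - S_cavity(omega - h, alpha)) / (2 * h)
    else:
        dS = (S_cavity(omega, alpha + h) - S_cavity(omega, alpha - h)) / (2 * h)
    return np.trace(-1j * np.linalg.solve(S_cavity(omega, alpha), dS))

def predicted_pole_shift(omega_probe, n, dn):
    """Limit form of the pole-shift formula, evaluated close to (not at) the pole."""
    return -dn * trace_Q(omega_probe, n, "alpha") / trace_Q(omega_probe, n, "omega")

if __name__ == "__main__":
    n0, dn = 1.5185, 1e-3
    # exact pole of the toy cavity: 1 - R1 R2 exp(2 i omega n L / c) = 0
    delta_p = 2 * np.pi * 331 - 1j * np.log(1.0 / (R1 * R2))  # resonance order 331 (~545 THz)
    pole0 = delta_p * C0 / (2 * n0 * L_CAV)
    pole1 = delta_p * C0 / (2 * (n0 + dn) * L_CAV)
    omega_probe = pole0 + 0.5                                  # slightly off the pole
    print("predicted shift:", predicted_pole_shift(omega_probe, n0, dn))
    print("exact shift    :", pole1 - pole0)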
White lines were traced by numerically solving (𝐒^-1) = 0 for each value of Δ n, using the previous pole position as an initial guess. The red lines, on the other hand, were traced using Eq. (<ref>) with α = Δ n to determine the pole shifts at each step (see Supplemental Material <cit.> for details of the calculation of 𝐐_α). As can be seen, the Wigner-Smith theory agrees excellently with the direct numerical solutions. Some discrepancies, however, between the two methods can be seen, e.g., near the top of the main panel. In this region, it was found that the computed values for ω_p(𝐐_α) were relatively large, implying that the pole shifts were more sensitive to the perturbation. Achieving greater accuracy would thus require small a perturbation step size to maintain validity of Eq. (<ref>). Dashed red and white lines emanating from the crosses on the right hand side of the figure show the pole trajectories when Δ n instead took negative values, and show further agreement between the two methods. Figure <ref> shows several interesting features. First, note that the half space (ω) > 0 contains dark regions for which (𝐒) = 0. These zeros can be pushed to the real axis by introducing loss to the system, whereby they correspond to coherent perfect absorption modes <cit.>. Note also that the poles in our data exhibit two distinct types of trajectories: some follow long, meandering paths, while others revolve around localized, closed loops. Figures <ref>(a)-(b) in particular demonstrate the latter type. It is important to realize that 𝐒 is periodic in Δ n since for specific values of Δ n the additional propagation phase acquired in Ω_α will be an exact multiple of 2π. At these values 𝐒 will be identical to that of the unperturbed network, such that poles following loop trajectories must return to their original positions, while poles following open trajectories must pass through the positions of other poles of the unperturbed network (white dots in Figure <ref>). As a further simple application of our theory, we next considered adding gain to (or `pumping') our network which can lead to random lasing. Mathematically, gain can be introduced to the i'th link by varying its refractive index n_i according to n_i = (n_i) - iγ_i, where (n_i) is calculated as before and γ_i > 0 is a variable gain coefficient taken, for simplicity, to be frequency independent over the pumping bandwidth. Adding gain tends to cause poles to drift towards the real axis and the pole that first reaches the axis dictates the dominant lasing frequency <cit.>. Single mode lasing at arbitrary resonant frequencies is achievable by shaping the pump profile in accordance with the spatial profile of the resonant mode <cit.>. Similar selective lasing can be realized using our theory by considering the collection of generalized Wigner-Smith operators associated with the gain coefficients. Specifically, for any pole ω_p and for all i indexing the links, we can calculate ω_p(𝐐_γ_i), where 𝐐_γ_i is the operator associated with adding gain γ_i to only the i'th link. In light of Eq. (<ref>), these values predict how the pole will shift when different links are pumped in isolation, thus revealing which link should be pumped to optimally shift ω_p towards the real axis. Figure <ref> shows the results of the described numerical pumping experiment. 
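The link-selection step can be sketched in the same spirit: for each candidate link i, tr(Q_γ_i) is evaluated near the pole of interest and the predicted pole velocity dω_p/dγ_i ≈ -tr(Q_γ_i)/tr(Q_ω) is ranked by its imaginary part. Since the full network scattering model is not reproduced here, the sketch uses a stand-in single-channel cavity whose round trip crosses two segments of different lengths; the segment lengths, reflectivities and index are illustrative only.

import numpy as np

C0, R1, R2, N_RE = 299.792458, 0.6, 0.9, 1.5185      # um/ps, reflectivities, real index
LENGTHS = np.array([40.0, 80.0])                      # two "link" segments (um), illustrative

def S_net(omega, gains):
    """Stand-in single-channel 'network': a cavity whose round trip crosses two
    segments; gain gamma_i on segment i enters as a negative imaginary index."""
    n_eff = N_RE - 1j * np.asarray(gains, dtype=complex)
    phase = np.exp(2j * omega * np.sum(n_eff * LENGTHS) / C0)
    return np.array([[(R1 - R2 * phase) / (1.0 - R1 * R2 * phase)]], dtype=complex)

def trace_Q(omega, gains, link=None, h=1e-6):
    """tr(Q) with respect to omega (link=None) or to the gain of one link."""
    gains = np.asarray(gains, dtype=float)
    if link is None:
        dS = (S_net(omega + h, gains) - S_net(omega - h, gains)) / (2 * h)
    else:
        step = np.zeros_like(gains)
        step[link] = h
        dS = (S_net(omega, gains + step) - S_net(omega, gains - step)) / (2 * h)
    return np.trace(-1j * np.linalg.solve(S_net(omega, gains), dS))

if __name__ == "__main__":
    gains = np.zeros(2)
    # probe slightly off a pole of the passive stand-in cavity (resonance order 663)
    delta_p = 2 * np.pi * 663 - 1j * np.log(1.0 / (R1 * R2))
    omega_probe = delta_p * C0 / (2 * N_RE * LENGTHS.sum()) + 0.5
    q_omega = trace_Q(omega_probe, gains)
    for i in range(LENGTHS.size):
        velocity = -trace_Q(omega_probe, gains, link=i) / q_omega
        print(f"link {i}: Im d(omega_p)/d(gamma) = {complex(velocity).imag:.3f}")
    # the link with the largest positive imaginary part pushes this pole towards
    # the real axis fastest and is the one to pump for that mode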
The imaginary parts of three arbitrarily selected poles (distinguished by color and corresponding to the numbered poles in Figure <ref>) were tracked in response to three alternate pumping methods (distinguished by line style). Solid lines track the poles when all links were pumped uniformly, in which case the narrowest resonance, mode 1 (at 3424.47 THz), reaches the lasing threshold first. Alternatively, the dashed and dot-dashed lines track the poles when the links highlighted (and denoted 2 and 3 respectively) in Figure <ref>, were pumped in isolation. In particular, links 2 and 3 correspond to those predicted to optimally shift modes 2 (at 3424.8 THz) and 3 (at 3426.25 THz) respectively towards the real axis. This predicted behaviour is evident in Figure <ref>, with pumping of link 2 bringing mode 2 to lasing threshold first (and similarly for mode/link 3). We also calculated the spatial intensity profiles of modes 2 and 3 across the network, which are shown on the right hand side of Figure <ref>. Notably, the intensity of mode 2 (3) is strongly peaked across link 2 (3), confirming the wisdom that the optimal pump profile should conform to the mode's spatial distribution. Interestingly, however, the use of the generalized Wigner-Smith operator eliminated the need to explicitly calculate the mode distribution in determining the pump profile. Finally, we note in passing that a similar strategy of selectively introducing loss to specific links could be used to achieve selective coherent perfect absorption of desired modes. Conclusion.—To conclude, in this work we have presented a novel method for calculating resonance shifts in perturbed open systems using generalized Wigner-Smith operators associated with the perturbation parameters. These operators have found increasing use in the control and manipulation of optical fields in recent years. Our work reveals the connection of generalized Wigner-Smith operators to resonant properties in non-Hermitian systems and further highlights their utility. Our perturbation theory is based on generic complex analytic arguments and is therefore applicable to a wide range of scenarios. At the same time, we have demonstrated that our results reduce to more traditional perturbation formulas for high quality resonances. We have verified our theory numerically by tracking pole shifts caused by refractive index perturbations in a complex photonic network, and in a spatially selective pumping experiment. Our work provides a novel way to analyze scattering resonances, which may be of use in cavity or nanostructure design <cit.> and future sensing technologies. 50 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Eleuch and Rotter(2017)]PhysRevA.95.022117 author author H. Eleuch and author I. Rotter, title title Resonances in open quantum systems, https://doi.org/10.1103/PhysRevA.95.022117 journal journal Phys. Rev. A volume 95, pages 022117 (year 2017)NoStop [Kappos(2014)]Kappos2014 author author A. Kappos, title Seismic Analysis of Concrete Bridges: Numerical Modeling, in https://doi.org/10.1007/978-3-642-36197-5_127-1 booktitle Encyclopedia of Earthquake Engineering (publisher Springer Berlin Heidelberg, year 2014) p. pages 1–37NoStop [Gay-Balmaz and Martin(2002)]gay2002electromagnetic author author P. Gay-Balmaz and author O. J. 
Martin, title title Electromagnetic resonances in individual and coupled split-ring resonators, https://doi.org/10.1063/1.1497452 journal journal J. Appl. Phys. volume 92, pages 2929–2936 (year 2002)NoStop [Zhao et al.(2022)Zhao, Molin, Wang, Wolgamot, and Taylor]zhao2022nonlinear author author W. Zhao, author B. Molin, author Y. Wang, author H. Wolgamot, and author P. Taylor, title title Nonlinear harmonics of gap fluid resonance with floating body motions, https://doi.org/10.1017/jfm.2022.834 journal journal J. Fluid Mech. volume 951, pages A23 (year 2022)NoStop [Krasnok et al.(2019)Krasnok, Baranov, Li, Miri, Monticone, and Alú]Krasnok2019 author author A. Krasnok, author D. Baranov, author H. Li, author M.-A. Miri, author F. Monticone, and author A. Alú, title title Anomalies in Light Scattering, https://doi.org/10.1364/aop.11.000892 journal journal Adv. Opt. Photon. volume 11, pages 892–951 (year 2019)NoStop [Schrödinger(1926)]Schrodinger1926 author author E. Schrödinger, title title Quantisierung als Eigenwertproblem, https://doi.org/10.1002/andp.19263840404 journal journal Ann. Phys. volume 385, pages 437–490 (year 1926)NoStop [Brillouin(1932)]Brillouin1932 author author L. Brillouin, title title Les problèmes de perturbations et les champs self-consistents, https://doi.org/10.1051/jphysrad:jphysrad0193200309037300 journal journal J. Phys. Radium volume 3, pages 373–389 (year 1932)NoStop [Kelly(1964)]PhysRev.136.B896 author author H. P. Kelly, title title Many-Body Perturbation Theory Applied to Atoms, https://doi.org/10.1103/PhysRev.136.B896 journal journal Phys. Rev. volume 136, pages B896–B912 (year 1964)NoStop [Ashida et al.(2020)Ashida, Gong, and Ueda]Ashida2020 author author Y. Ashida, author Z. Gong, and author M. Ueda, title title Non-Hermitian physics, https://doi.org/10.1080/00018732.2021.1876991 journal journal Adv. Phys. volume 69, pages 249–435 (year 2020)NoStop [Rotter and Gigan(2017)]RevModPhys.89.015005 author author S. Rotter and author S. Gigan, title title Light fields in complex media: Mesoscopic scattering meets wave control, https://doi.org/10.1103/RevModPhys.89.015005 journal journal Rev. Mod. Phys. volume 89, pages 015005 (year 2017)NoStop [Kristensen et al.(2020)Kristensen, Herrmann, Intravaia, and Busch]Kristensen:20 author author P. T. Kristensen, author K. Herrmann, author F. Intravaia, and author K. Busch, title title Modeling electromagnetic resonators using quasinormal modes, https://doi.org/10.1364/AOP.377940 journal journal Adv. Opt. Photon. volume 12, pages 612–708 (year 2020)NoStop [Baranov et al.(2017)Baranov, Krasnok, Shegai, Alù, and Chong]Baranov2017 author author D. G. Baranov, author A. Krasnok, author T. Shegai, author A. Alù, and author Y. Chong, title title Coherent perfect absorbers: linear control of light with light, https://doi.org/10.1038/natrevmats.2017.64 journal journal Nat. Rev. Mater. volume 2, pages 17064 (year 2017)NoStop [Miri and Alu(2019)]miri2019exceptional author author M.-A. Miri and author A. Alu, title title Exceptional points in optics and photonics, https://doi.org/10.1126/science.aar7709 journal journal Science volume 363, pages eaar7709 (year 2019)NoStop [Hsu et al.(2016)Hsu, Zhen, Stone, Joannopoulos, and Soljačić]Hsu2016 author author C. W. Hsu, author B. Zhen, author A. D. Stone, author J. D. Joannopoulos, and author M. Soljačić, title title Bound states in the continuum, https://doi.org/10.1038/natrevmats.2016.48 journal journal Nat. Rev. Mater. 
volume 1, pages 16048 (year 2016)NoStop [Chen and Wang(2020)]chen2020optical author author C. Chen and author J. Wang, title title Optical biosensors: An exhaustive and comprehensive review, https://doi.org/10.1039/C9AN01998G journal journal Analyst volume 145, pages 1605–1628 (year 2020)NoStop [Degen et al.(2017)Degen, Reinhard, and Cappellaro]RevModPhys.89.035002 author author C. L. Degen, author F. Reinhard, and author P. Cappellaro, title title Quantum sensing, https://doi.org/10.1103/RevModPhys.89.035002 journal journal Rev. Mod. Phys. volume 89, pages 035002 (year 2017)NoStop [Foreman et al.(2015)Foreman, Swaim, and Vollmer]Foreman2015b author author M. R. Foreman, author J. D. Swaim, and author F. Vollmer, title title Whispering gallery mode sensors, https://doi.org/10.1364/AOP.7.000168 journal journal Adv. Opt. Photon. volume 7, pages 168–240 (year 2015)NoStop [Ruesink et al.(2015)Ruesink, Doeleman, Hendrikx, Koenderink, and Verhagen]Ruesink2015 author author F. Ruesink, author H. M. Doeleman, author R. Hendrikx, author A. F. Koenderink, and author E. Verhagen, title title Perturbing Open Cavities: Anomalous Resonance Frequency Shifts in a Hybrid Cavity-Nanoantenna System, https://doi.org/10.1103/PhysRevLett.115.203904 journal journal Phys. Rev. Lett. volume 115, pages 203904 (year 2015)NoStop [Azeem et al.(2021)Azeem, Trainor, Devane, Norman, Rueda, Lambert, Kumari, Foreman, and Schwefel]Azeem2021 author author F. Azeem, author L. Trainor, author P. Devane, author D. Norman, author A. Rueda, author N. Lambert, author M. Kumari, author M. R. Foreman, and author H. Schwefel, title title Dielectric Perturbations: Anomalous Resonance Frequency Shifts in Optical Resonators, https://doi.org/10.1364/OL.420791 journal journal Opt. Lett. volume 46, pages 2477–2480 (year 2021)NoStop [Parto et al.(2020)Parto, Liu, Bahari, Khajavikhan, and Christodoulides]Parto2020 author author M. Parto, author Y. G. N. Liu, author B. Bahari, author M. Khajavikhan, and author D. N. Christodoulides, title title Non-Hermitian and topological photonics: optics at an exceptional point, https://doi.org/10.1515/nanoph-2020-0434 journal journal Nanophotonics volume 10, pages 403–423 (year 2020)NoStop [Ching et al.(1998)Ching, Leung, Maassen van den Brink, Suen, Tong, and Young]Ching1998 author author E. S. C. Ching, author P. T. Leung, author A. Maassen van den Brink, author W. M. Suen, author S. S. Tong, and author K. Young, title title Quasinormal-mode expansion for waves in open systems, https://doi.org/10.1103/RevModPhys.70.1545 journal journal Rev. Mod. Phys. volume 70, pages 1545–1554 (year 1998)NoStop [Waldron(1960)]Waldron1960 author author R. A. Waldron, title title Perturbation theory of resonant cavities, https://doi.org/10.1049/pi-c.1960.0041 journal journal Proc. Inst. Electr. Eng. volume 107, pages 272–274 (year 1960)NoStop [Bethe and Schwinger()]bethe1943perturbation author author H. A. Bethe and author J. Schwinger, @noop title Perturbation theory for cavities, note N.D.R.C. Rpt. D1‐117, (Cornell University, 1943)NoStop [Lalanne et al.(2018)Lalanne, Yan, Vynck, Sauvan, and Hugonin]https://doi.org/10.1002/lpor.201700113 author author P. Lalanne, author W. Yan, author K. Vynck, author C. Sauvan, and author J.-P. Hugonin, title title Light interaction with photonic and plasmonic resonances, https://doi.org/https://doi.org/10.1002/lpor.201700113 journal journal Laser Photonics Rev. 
volume 12, pages 1700113 (year 2018)NoStop [Chen et al.(2019)Chen, Bergman, and Sivan]PhysRevApplied.11.044018 author author P. Y. Chen, author D. J. Bergman, and author Y. Sivan, title title Generalizing Normal Mode Expansion of Electromagnetic Green's Tensor to Open Systems, https://doi.org/10.1103/PhysRevApplied.11.044018 journal journal Phys. Rev. Appl. volume 11, pages 044018 (year 2019)NoStop [Byrnes and Foreman(2024)]WS_second_paper author author N. Byrnes and author M. R. Foreman, @noop journal journal in preparation (year 2024)NoStop [TEX(2016)]TEXIER201616 title title Wigner time delay and related concepts: Application to transport in coherent conductors, https://doi.org/https://doi.org/10.1016/j.physe.2015.09.041 journal journal Phys. E: Low-dimensional Syst. Nanostruct. volume 82, pages 16 (year 2016)NoStop [Patel and Michielssen(2021)]9142355 author author U. R. Patel and author E. Michielssen, title title Wigner–Smith Time-Delay Matrix for Electromagnetics: Theory and Phenomenology, https://doi.org/10.1109/TAP.2020.3008650 journal journal IEEE Trans. Antennas Propag. volume 69, pages 902–917 (year 2021)NoStop [de Carvalho and Nussenzveig(2002)]DECARVALHO200283 author author C. de Carvalho and author H. Nussenzveig, title title Time delay, https://doi.org/https://doi.org/10.1016/S0370-1573(01)00092-8 journal journal Phys. Rep. volume 364, pages 83 (year 2002)NoStop [Mao et al.(2023)Mao, Patel, and Michielssen]10091775 author author Y. Mao, author U. R. Patel, and author E. Michielssen, title title Wigner–Smith Time Delay Matrix for Electromagnetics: Systems With Material Dispersion and Losses, https://doi.org/10.1109/TAP.2023.3262979 journal journal IEEE Trans. Antennas Propag. volume 71, pages 5266–5275 (year 2023)NoStop [Ambichl et al.(2017)Ambichl, Brandstötter, Böhm, Kühmayer, Kuhl, and Rotter]PhysRevLett.119.033903 author author P. Ambichl, author A. Brandstötter, author J. Böhm, author M. Kühmayer, author U. Kuhl, and author S. Rotter, title title Focusing inside Disordered Media with the Generalized Wigner-Smith Operator, https://doi.org/10.1103/PhysRevLett.119.033903 journal journal Phys. Rev. Lett. volume 119, pages 033903 (year 2017)NoStop [Horodynski et al.(2019)Horodynski, Kühmayer, Brandstötter, Pichler, Fyodorov, Kuhl, and Rotter]Horodynski2019 author author M. Horodynski, author M. Kühmayer, author A. Brandstötter, author K. Pichler, author Y. V. Fyodorov, author U. Kuhl, and author S. Rotter, title title Optimal wave fields for micromanipulation in complex scattering environments, https://doi.org/10.1038/s41566-019-0550-z journal journal Nat. Photonics volume 14, pages 149–153 (year 2019)NoStop [Bouchet et al.(2021)Bouchet, Rotter, and Mosk]Bouchet2021 author author D. Bouchet, author S. Rotter, and author A. P. Mosk, title title Maximum information states for coherent scattering measurements, https://doi.org/10.1038/s41567-020-01137-4 journal journal Nat. Phys. volume 17, pages 564–568 (year 2021)NoStop [sup()]supplemental @noop title See Supplemental Material for (I) a derivation of Eq. (3), (II) a derivation of Eq. (5), (III), derivations of Eqs. (6) and (7), and (IV) a detailed discussion on the calculation of the Wigner-Smith operators for our photonic network.Stop [Trefethen and Bau(1997)]trefethen1997numerical author author L. Trefethen and author D. Bau, https://doi.org/10.1137/1.9781611977165 title Numerical Linear Algebra (publisher SIAM, year 1997)NoStop [Othonos(1997)]othonos1997fiber author author A. 
Othonos, title title Fiber Bragg gratings, https://doi.org/10.1063/1.1148392 journal journal Rev. Sci. Instrum. volume 68, pages 4309–4341 (year 1997)NoStop [Gaio et al.(2019)Gaio, Saxena, Bertolotti, Pisignano, Camposeo, and Sapienza]gaio2019nanophotonic author author M. Gaio, author D. Saxena, author J. Bertolotti, author D. Pisignano, author A. Camposeo, and author R. Sapienza, title title A nanophotonic laser on a graph, https://doi.org/10.1038/s41467-018-08132-7 journal journal Nat. Commun. volume 10, pages 226 (year 2019)NoStop [rab(2007)]rabus2020ring title Ring resonators: Theory and modeling, in https://doi.org/10.1007/978-3-540-68788-7_2 booktitle Integrated Ring Resonators: The Compendium (publisher Springer Berlin Heidelberg, year 2007) pp. pages 3–40NoStop [Geyi(2019)]Geyi:19 author author W. Geyi, title title Stored electromagnetic field energies in general materials, https://doi.org/10.1364/JOSAB.36.000917 journal journal J. Opt. Soc. Am. B volume 36, pages 917–925 (year 2019)NoStop [Xiao et al.(2012)Xiao, Liu, Li, Chen, Li, and Gong]PhysRevA.85.031805 author author Y.-F. Xiao, author Y.-C. Liu, author B.-B. Li, author Y.-L. Chen, author Y. Li, and author Q. Gong, title title Strongly enhanced light-matter interaction in a hybrid photonic-plasmonic resonator, https://doi.org/10.1103/PhysRevA.85.031805 journal journal Phys. Rev. A volume 85, pages 031805 (year 2012)NoStop [Hu et al.(2014)Hu, Shao, Arnold, Liu, Ma, and Xiao]PhysRevA.90.043847 author author Y. Hu, author L. Shao, author S. Arnold, author Y.-C. Liu, author C.-Y. Ma, and author Y.-F. Xiao, title title Mode broadening induced by nanoparticles in an optical whispering-gallery microcavity, https://doi.org/10.1103/PhysRevA.90.043847 journal journal Phys. Rev. A volume 90, pages 043847 (year 2014)NoStop [Wang et al.(2023)Wang, Savo, Maeder, Kaufmann, Kellner, Morandi, Rotter, Sapienza, and Grange]Wang2023 author author X. S. Wang, author R. Savo, author A. Maeder, author F. Kaufmann, author J. Kellner, author A. Morandi, author S. Rotter, author R. Sapienza, and author R. Grange, title title Graph model for multiple scattering in lithium niobate on insulator integrated photonic networks, https://doi.org/10.1364/oe.492431 journal journal Opt. Express volume 31, pages 42255–42270 (year 2023)NoStop [Giacomelli et al.(2019)Giacomelli, Lepri, and Trono]Giacomelli2019 author author G. Giacomelli, author S. Lepri, and author C. Trono, title title Optical networks as complex lasers, https://doi.org/10.1103/physreva.99.023841 journal journal Phys. Rev. A volume 99, pages 023841 (year 2019)NoStop [Polyanskiy(2024)]Polyanskiy2024 author author M. N. Polyanskiy, title title Refractiveindex.info database of optical constants, https://doi.org/10.1038/s41597-023-02898-2 journal journal Sci. Data volume 11, pages 94 (year 2024)NoStop [Byrnes and Foreman(2022)]Byrnes2022 author author N. Byrnes and author M. R. Foreman, title title Polarisation statistics of vector scattering matrices from the circular orthogonal ensemble, https://doi.org/10.1016/j.optcom.2021.127462 journal journal Opt. Commun. volume 503, pages 127462 (year 2022)NoStop [Chong et al.(2010)Chong, Ge, Cao, and Stone]PhysRevLett.105.053901 author author Y. D. Chong, author L. Ge, author H. Cao, and author A. D. Stone, title title Coherent perfect absorbers: Time-reversed lasers, https://doi.org/10.1103/PhysRevLett.105.053901 journal journal Phys. Rev. Lett. 
volume 105, pages 053901 (year 2010)NoStop [Andreasen et al.(2010)Andreasen, Asatryan, Botten, Byrne, Cao, Ge, Labonté, Sebbah, Stone, Türeci, and Vanneste]Andreasen2010 author author J. Andreasen, author A. A. Asatryan, author L. C. Botten, author M. A. Byrne, author H. Cao, author L. Ge, author L. Labonté, author P. Sebbah, author A. D. Stone, author H. E. Türeci, and author C. Vanneste, title title Modes of random lasers, https://doi.org/10.1364/aop.3.000088 journal journal Adv. Opt. Photon. volume 3, pages 88 (year 2010)NoStop [Bachelard et al.(2012)Bachelard, Andreasen, Gigan, and Sebbah]PhysRevLett.109.033903 author author N. Bachelard, author J. Andreasen, author S. Gigan, and author P. Sebbah, title title Taming random lasers through active spatial control of the pump, https://doi.org/10.1103/PhysRevLett.109.033903 journal journal Phys. Rev. Lett. volume 109, pages 033903 (year 2012)NoStop [Bachelard et al.(2014)Bachelard, Gigan, Noblin, and Sebbah]Bachelard2014 author author N. Bachelard, author S. Gigan, author X. Noblin, and author P. Sebbah, title title Adaptive pumping for spectral control of random lasers, https://doi.org/10.1038/nphys2939 journal journal Nat. Phys. volume 10, pages 426–431 (year 2014)NoStop [Granchi et al.(2023)Granchi, Intonti, Florescu, García, Gurioli, and Arregui]Granchi2023 author author N. Granchi, author F. Intonti, author M. Florescu, author P. D. García, author M. Gurioli, and author G. Arregui, title title Q-Factor Optimization of Modes in Ordered and Disordered Photonic Systems Using Non-Hermitian Perturbation Theory, https://doi.org/10.1021/acsphotonics.3c00510 journal journal ACS Photonics volume 10, pages 2808–2815 (year 2023)NoStop == Supplemental material for “Perturbing scattering resonances in non-Hermitian systems: a generalized Wigner-Smith operator formulation” Niall Byrnes^1, and Matthew R. Foreman^1,2 August 26, 2024 ====================================================================================================================================== § DERIVATION OF THE POLE SHIFT EQUATION As discussed in the main text, it is helpful to consider the function f(ω, α) = [𝐒^-1(ω, α)], where 𝐒 is the scattering matrix, ω is the complex frequency, and α is a system parameter. Importantly, if ω_p is a pole of 𝐒, then f(ω_p, α) = 0. We assume that all poles and zeros are of order one. Suppose that the system is perturbed so that α changes value to α' = α + Δα and let C be an arbitrary contour in the complex frequency plane that encircles ω_p and no other poles or zeros of 𝐒. If ω∈ C, expanding f about (ω, α) we have f(ω_p', α') = f(ω, α) + (ω'_p-ω) fω(ω, α) + Δαfα(ω, α) + ⋯, where ω'_p is the shifted pole for the perturbed system. The higher order terms in Eq. (<ref>) will contain factors of the form (ω_p'-ω)^n_1Δα^n_2, where n_1 + n_2 ≥ 2. We shall neglect these terms on the grounds that the perturbation Δα and the corresponding pole shift Δω_p are assumed to be small. By taking C to be sufficiently close to ω_p, we can assume that ω≈ω_p and thus ω'_p - ω≈ω'_p - ω_p = Δω_p. Therefore, (ω_p'-ω)^n_1Δα^n_2≈Δω_p^n_1Δα^n_2 and since all terms for which n_1 + n_2 ≥ 2 are the products of multiple small quantities, we neglect them. Since ω'_p is a resonant frequency of the perturbed system, f(ω_p', α')=0. Also, since f is non-zero on C, we can divide each term in Eq. (<ref>) by f(ω, α) to give (dropping functional dependencies on ω and α in our notation for clarity) 0 = 1 + (ω'_p - ω)fω/f + Δαfα/f. 
Letting ξ stand in place of ω and α and using standard results from matrix calculus, the logarithmic derivative of f is given by <cit.> fξ/f = (𝐒𝐒^-1ξ) = -(𝐒^-1𝐒ξ). The final equality in Eq. (<ref>) follows from the fact that 𝐒^-1ξ = -𝐒^-1(𝐒ξ)𝐒^-1 and the cyclic invariance of the trace operator. Ideally, we would evaluate Eq. (<ref>) at ω = ω_p and solve the resulting equation for Δω_p. Since f has a zero at ω_p, however, the function fξ/f has a pole at ω_p. To avoid problems associated with divergences, we proceed by calculating the residue of each term in Eq. (<ref>) at ω_p. This can be done by dividing each term by 2π i and integrating over the contour C. Clearly the unity term has zero residue. For the term containing fα/f, it follows from Eq. (<ref>) and the definition of the Wigner-Smith operator that Δα/2π i∮_C fα/fω = -iΔα1/2π i∮_C (-i𝐒^-1𝐒α)ω = -iΔαω_p(𝐐_α). The residue of the term containing fω/f can be evaluated using the argument principle <cit.>. Since, by construction, f only has a single zero (at ω_p) and no poles within C, it follows that ω_pfω/f = 1. Next, we make use of the fact that if the functions g and h are such that g is holomorphic at ω_p and h has a simple pole at ω_p, then ω_pgh = g(ω_p)ω_ph. Setting g(ω) = ω'_p - ω and h = fω/f, we obtain 1/2π i∮_C (ω'_p - ω)fω/fdω = ω'_p - ω_p = Δω_p. Having found the residue of each term in Eq. (<ref>), the resulting equation can now be solved for Δω_p to give Δω_p = iΔαω_p(𝐐_α), which is the first equality of Eq. (<ref>) in the main text. The second equality of Eq. (<ref>) in the main text follows from the fact that ω_p(𝐐_ω) = i, which is a straightforward consequence of Eqs. (<ref>) and (<ref>) since i = i1/2π i∮_Cfω/fω = 1/2π i∮_C(-i𝐒^-1𝐒ω) ω = ω_p(𝐐_ω). Finally, since i = -1/i, we have Δω_p = iΔαω_p(𝐐_α) = -Δαω_p(𝐐_α)/i = -Δαω_p(𝐐_α)/ω_p(𝐐_ω), which completes the derivation. § DERIVATION OF THE VOLUME INTEGRAL FORM OF THE WIGNER-SMITH OPERATOR In this section we present a derivation of Eq. (<ref>) in the main text, which expresses the Wigner-Smith operators in terms of volume integrals over the system. An important observation is that the Wigner-Smith operator can be factorized as 𝐐_ξ = 𝐀^-1𝐁_ξ, where 𝐀 = 𝐒^†𝐒 and 𝐁_ξ = -i𝐒^†𝐒ξ. We proceed by deriving volume integral expressions for 𝐀 and 𝐁_ξ separately and then combining the results to obtain an expression for 𝐐_ξ. As shall be shown, the former can be derived from a generalized Poynting theorem, while the latter can be derived from an additional energy balance relation. Suppose that all fields are time harmonic with an implicit e^-iω t dependence. The permittivity ϵ is assumed to be a real valued scalar and the permeability μ_0 is that of free-space. We begin by determining a expression for 𝐀. Let Ω be a large surface that encapsulates the system and let ∂Ω denote its boundary. Suppose that light is able to enter and exit the system via a finite number of waveguides that perforate ∂Ω. Let 𝐄_p and 𝐇_p (𝐄_q and 𝐇_q) denote the fields throughout the system that arise due to illuminating the system by the p–th (q–th) incident field, where p (q) enumerates all of the modes in all of the waveguides. An integral version of Poynting's theorem for time harmonic fields gives <cit.> 1/2∫_∂Ω(𝐄_p ×𝐇^*_q)·𝐧̂ dA = i/2∫_Ω(ωμ_0𝐇_p ·𝐇^*_q - ω^*ϵ𝐄_p ·𝐄^*_q) dV , where 𝐧̂ is an outward-pointing unit normal vector to the surface ∂Ω. Consider first the integral on the left hand side of Eq. (<ref>), which can be written as a sum of integrals over the waveguide cross sections. 
We shall assume for simplicity that these waveguides only support a single mode, but extension to multiple modes is straightforward. Around the cross section ∂Ω_m, where ∂Ω meets the the m–th waveguide, we define a local Cartesian coordinate system where 𝐳̂ points out of the system along the waveguide axis and 𝐱̂ and 𝐲̂ lie within ∂Ω_m, which is assumed to be at z=0. Let 𝐄_mp and 𝐇_mp (𝐄_mq and 𝐇_mq) denote the fields within the m'th waveguide that arise due to illuminating the system via the p–th (q–th) waveguide. We assume that the fields within the waveguide have the form 𝐄_mp = δ_mp𝐞_m^-e^-iβ_m z + S_mp𝐞_m^+e^iβ_m z, 𝐇_mp = δ_mp𝐡_m^-e^-iβ_m z + S_mp𝐡_m^+e^iβ_m z, where β_m is the waveguide propagation constant, δ_mp is a Kronecker delta, S_mp is the (m,p)–th element of the scattering matrix, and 𝐞_m^+, 𝐞_m^-, 𝐡_m^+, and 𝐡_m^- are the vector profiles of the electric and magnetic fields, which are functions only of the transverse spatial coordinates x and y. The propagation constant can be expressed as β_m = n^eff_mk_0, where n^eff_m = √(ϵ^eff_m/ϵ_0) is an effective refractive index, and k_0 = ω/c is the vacuum wavenumber, where c is the speed of light in vacuum. Using Eqs. (<ref>) and (<ref>), we have 1/2 ∫_∂Ω_m(𝐄_mp×𝐇^*_mq)·𝐧̂ dA = 1/2∫_∂Ω_m[(δ_mp𝐞_m,t^- + S_mp𝐞_m,t^+) × (δ_mq𝐡_m,t^-* + S^*_mq𝐡_m,t^+*)]·𝐳̂ dA =1/2(δ_mpδ_mq - δ_mpS^*_mq + δ_mqS_mp - S_mpS^*_mq)∫_∂Ω_m(𝐞_m,t×𝐡_m,t^*)·𝐳̂ dA, where 𝐞^±_m,t and 𝐡^±_m,t are the transverse parts of 𝐞^±_m and 𝐡^±_m. Note that we have adopted the convention in which 𝐞_m,t = 𝐞_m,t^- = 𝐞_m,t^+ and 𝐡_m,t = 𝐡_m,t^- = -𝐡_m,t^+, which can be done without loss of generality <cit.>. The integral on the right side of the final equality of Eq. (<ref>) warrants some discussion. Ideally, we would assert that the integral is equal to unity by some appropriate mode normalization. For materials with loss or gain, however, this cannot be assumed to be the case. In our case, even though ϵ is real, we are still faced with virtual loss or gain in virtue of of the fact that ω is complex (see Section <ref>). Since, however, we are only concerned with high quality factor resonances for which (ω)/(ω) ≪ 1, this effect is weak and we are permitted to treat 𝐞_m,t and 𝐡_m,t as though the waveguides were lossless, allowing us to assert <cit.> 1/2∫_∂Ω_m(𝐞_m,t×𝐡_m,t^*)·𝐳̂ dA = 1. If the external waveguides supported multiple modes, we would also be permitted to assume mode orthogonality in the sense that the integral in Eq. (<ref>) would vanish if 𝐞_t and 𝐡_t were associated with two different modes. Armed with Eq. (<ref>), summing Eq. (<ref>) over m gives the integral over all of ∂Ω, which is given by 1/2∫_∂Ω(𝐄_p×𝐇^*_q)·𝐧̂ dA = (𝐈 - 𝐒^†𝐒 + 𝐒 - 𝐒^†)_qp, where 𝐈 is the identity matrix and the subscript qp indicates that we are looking at the (q,p)–th element of the matrix expression within the parentheses. In order to ease notation, let U^e_qp = 1/4𝐄_p ·𝐄^*_q, U^m_qp = 1/4𝐇_p ·𝐇^*_q, and let 𝐔^e and 𝐔^m denote the matrices whose (q,p)–th elements are U^e_qp and U^m_qp respectively. Eq. (<ref>) can then be written as the matrix equation 𝐈 - 𝐒^†𝐒 + 𝐒 - 𝐒^† = 2i∫_Ω(ωμ_0𝐔^m - ω^*ϵ𝐔^e) dV. In order to extract 𝐒^†𝐒 from Eq. (<ref>), we take the Hermtiain part of both sides. The Hermitian and skew Hermitian parts of a matrix 𝐌, which we denote by (𝐌) and (𝐌) respectively, are defined by Her(𝐌) = 𝐌 + 𝐌^†/2, SHer(𝐌) = 𝐌 - 𝐌^†/2. Taking the Hermitian part eliminates the skew Hermitian matrix 𝐒 - 𝐒^† from the left side of Eq. (<ref>), leaving only the Hermitian matrix 𝐈 - 𝐒^†𝐒. 
Finally, we find 𝐀 to be 𝐀 = 𝐒^†𝐒 = 𝐈 + 2(ω)∫_Ω(ϵ𝐔^e + μ_0𝐔^m) dV. Note that if (ω) = 0, the system is truly lossless and the integral in Eq. (<ref>) vanishes, recovering the usual unitarity relation for 𝐒. We now turn to the problem of determining an integral equation for 𝐁_ξ. For our starting point, it can be shown in general that, on the basis of energy conservation (see, for example, Ref. <cit.>), i/4∫_∂Ω (𝐄_p × 𝐇^*_qξ±𝐄_pξ×𝐇^*_q)·𝐧̂ dA = ∫_Ω [ ( ω^*ϵ)ξU^e_qp∓ωξμ_0U^m_qp+ω^*ϵ(V^e*_ξ,pq± V^e_ξ,qp)-ωμ_0(V^m*_ξ,pq± V^m_ξ,qp)] dV, where we define V^e_ξ,qp = 1/4𝐄_pξ·𝐄^*_q, V^m_ξ,qp = 1/4𝐇_pξ·𝐇^*_q. Note that Eq. (<ref>) is in fact a pair of equations as one may choose either the upper or lower set of plus and minus signs. The surface integral on the left hand side of Eq. (<ref>) can be evaluated using similar steps to those in deriving Eq. (<ref>), taking appropriate derivatives and conjugates of Eqs. (<ref>) and (<ref>) as required. For the m–th waveguide, we find 1/4∫_∂Ω_m (𝐄_mp×𝐇^*_mqξ±𝐄_mpξ×𝐇^*_mq)·𝐳̂ dA = 1/2(-δ_mpS^*_mqξ - S_mpS^*_mqξ±δ_mqS_mpξ∓ S^*_mqS_mpξ) +1/2(δ_mpδ_mq - δ_mpS^*_mq + δ_mqS_mp - S_mpS^*_mq)∫_∂Ω_m(𝐞_m,t×𝐡^*_m,tξ±𝐞_m,tξ×𝐡^*_m,t)·𝐳̂ dA. Assuming that the transverse mode profiles 𝐞_m,t and 𝐡_m,t only weakly depend on ξ, the integral in the final term of Eq. (<ref>) vanishes. Summing the resulting equation over m gives i/4∫_∂Ω (𝐄_p×𝐇^*_qξ± (𝐄_pξ) ×𝐇^*_q)·𝐧̂ dA = 1/2[-i𝐒^†ξ - i𝐒^†ξ𝐒± i𝐒ξ∓ i𝐒^†𝐒ξ]_qp and, as before, Eq. (<ref>) can therefore be written as the matrix equation 1/2 (-i𝐒^†ξ - i𝐒^†ξ𝐒± i𝐒ξ∓ i𝐒^†𝐒ξ) = ∫_Ω [( ω^*ϵ)ξ𝐔^e∓ωξμ_0𝐔^m + ω^*ϵ(𝐕_ξ^e †±𝐕_ξ^e)-ωμ_0(𝐕_ξ^ m †±𝐕_ξ^m)] dV, where 𝐕_ξ^e (𝐕_ξ^m) is the matrix whose (q,p)–th element is V^e_ξ,qp (V^m_ξ,qp). To isolate -i𝐒^†𝐒ξ, we take the skew Hermitian parts of both sides of Eq. (<ref>) with the upper set of plus and minus signs and the Hermitian parts of both sides of Eq. (<ref>) with the lower set of plus and minus signs. This gives the pair of equations ( -i𝐒^†𝐒ξ) = -i∫_Ω[((ω)ϵ)ξ𝐔^e +(ω)ξμ_0𝐔^m -2i(ω)(ϵ(𝐕_ξ^e) +μ_0(𝐕_ξ^m))] V, ( -i𝐒^†𝐒ξ) = -∫_Ω[((ω)ϵ)ξ𝐔^e +(ω)ξμ_0𝐔^m -2i(ω)(ϵ(𝐕_ξ^e) +μ_0(𝐕_ξ^m))] V. Next, since, by definition, 𝐁_ξ = -i𝐒^†𝐒ξ = (-i𝐒𝐒ξ) +(-i𝐒^†𝐒ξ), Eqs. (<ref>) and (<ref>) can be combined to give 𝐁_ξ = -∫_Ω[(ωϵ)ξ𝐔^e +ωξμ_0𝐔^m + 2i(ω)(ϵ𝐕_ξ^e +μ_0𝐕_ξ^m)] V . Finally, combining 𝐀 and 𝐁_ξ, we arrive at 𝐐_ξ = (𝐈 + 2(ω)∫_Ω(ϵ𝐔^e + μ_0𝐔^m) dV)^-1 (-∫_Ω[(ωϵ)ξ𝐔^e +ωξμ_0𝐔^m + 2i(ω)(ϵ𝐕_ξ^e +μ_0𝐕_ξ^m)] V), which is Eq. (<ref>) in the main text. § PHYSICAL INTERPRETATION OF COMPLEX FREQUENCY AND DERIVATION OF ENERGY INTERPRETATION OF WIGNER-SMITH OPERATORS In this section we discuss the physical interpretation of complex frequencies and derive Eqs. (<ref>) and (<ref>) in the main text. Since ω = (ω)[1 +i(ω)/(ω)], Maxwell's curl equations throughout the system can be written as ∇×𝐄 =iωμ_0𝐇 =i(ω)μ̃𝐇, ∇×𝐇 = -iωϵ𝐄 = -i(ω)ϵ̃𝐄, where ϵ̃=ϵ[1 +i(ω)/(ω)] and μ̃=μ_0[1 +i(ω)/(ω)]. This demonstrates a physical equivalence between two different points of view. On the one hand, we may think of the system as having real material parameters ϵ and μ_0, but supporting waves at a complex frequency ω. On the other hand, we may instead imagine that the waves oscillate at the real frequency (ω), but in a system with complex material parameters ϵ̃ and μ̃. Since Maxwell's equations are the same in either case, the two viewpoints are physically equivalent. The latter, however, is perhaps more familiar, and the imaginary parts of the permittivity and permeability are well understood as representing loss or gain. 
A wave possessing a complex frequency therefore also exhibits loss or gain, depending on the sign of (ω). Using the definitions of ϵ̃ and μ̃ in Eq. (<ref>), it is straightforward to show that 𝐀 = 𝐈 + 2(ω)∫_Ω[(ϵ̃)𝐔^e + (μ̃)𝐔^m] dV, which, when restricted to the single mode example discussed in the main text, which enforces the transformations 𝐈→ 1, 𝐔^e →1/4|𝐄|^2 and 𝐔^m →1/4|𝐇|^2, reduces to Eq. (<ref>) in the main text. Similalrly, in terms of ϵ̃ and μ̃, Eq. (<ref>) becomes 𝐁_ξ = -∫_Ω[((ω)ϵ̃)ξ𝐔^e +((ω)μ̃)ξ𝐔^m + 2i(ω)((ϵ̃)𝐕_ξ^e +(μ̃)𝐕_ξ^m)] V. Restricting again to a single mode, where now 𝐕^e_ξ→1/4𝐄ξ·𝐄^* and 𝐕^m_ξ = 1/4𝐇ξ·𝐇^*, and taking the real part of Eq. (<ref>), we find (B_ξ) = -1/4∫_Ω[ ((ω)ϵ)ξ|𝐄|^2 +(ω)μ_0ξ|𝐇|^2 - 2(ω)((ϵ̃)(ξ𝐄·𝐄^*) +(μ̃)(𝐇ξ·𝐇^*))] V. Next, note that for any holomorphic function f of a complex variable z we have fz = f(z) <cit.>. Assuming that all functions in Eq. (<ref>) that need to be differentiated are complex differentiable with respect to ω, we can set ξ = ω in Eq. (<ref>) and make the substitution ω→(ω) to obtain (B_ω) = -1/4∫_Ω[ ((ω)ϵ)(ω)|𝐄|^2 +μ_0|𝐇|^2 - 2(ω)((ϵ̃)((ω)𝐄·𝐄^*) +(μ̃)(𝐇(ω)·𝐇^*))] V, which is Eq. (<ref>) in the main text. Finally, taking the imaginary of part Eq. (<ref>) for ξ = ω gives (B_ω) = -1/4∫_Ω[ ((ω)(ϵ̃))(ω)|𝐄|^2 +((ω)(μ̃))(ω)|𝐇|^2 + 2(ω)((ϵ̃)((ω)𝐄·𝐄^*) +(μ̃)(𝐇(ω)·𝐇^*))] V. Since (𝐄ω·𝐄^*) = 1/2(𝐄ω·𝐄^* + 𝐄^*ω·𝐄) = 1/2|𝐄|^2ω and, similarly, (𝐇ω·𝐇^*) = 1/2|𝐇|^2ω, Eq. (<ref>) can be simplified to give (B_ω) = -1/2(ω)((ω)/2∫_Ω[(ϵ̃)|𝐄|^2 +(μ̃)|𝐇|^2] V) = -1/2A(ω), which shows that (B_ω) is related to energy dissipation within the system. § EVALUATION OF THE WIGNER-SMITH VOLUME INTEGRALS FOR COMPLEX PHOTONIC NETWORKS In this section we show how to evaluate the volume integrals appearing in Eq. (<ref>) for a complex photonic network. The structure and properties of these networks are discussed in the main text. The first integral in Eq. (<ref>) is 𝐀 - 𝐈 = 2(ω)∫_Ω(ϵ𝐔^e + μ_0𝐔^m) dV. Integrating over Ω requires integrating over all of the N_node nodes and N_link links in the network as well as the fields in the surrounding space. We can decompose Ω into a union of subsets Ω = Ω^space∪Ω^link∪Ω^node = Ω^space∪(Ω_1^link∪∪Ω_N_link^link)∪(Ω_1^node∪∪Ω_N_node^node), where Ω_m^link is the volume occupied by the m–th link, Ω_m^node is the volume occupied by the m–th node, and Ω^space is the volume occupied by the remaining space around the links and nodes. Assuming the system does not support leaky modes, the fields in the space around the links and nodes will be evanescent, decaying away from the network components. If necessary, we may expand the volumes Ω_m^link and Ω_m^node beyond the physical extent of the network components so as to contain these evanescent fields up to a point where they have sufficiently decayed and no longer significantly contribute to the integrals. The integral over the remaining space Ω^space can then be be neglected. Consider evaluating the integral in Eq. (<ref>) over Ω^link_m and suppose that this link connects the i–th and j–th nodes 𝒩_i and 𝒩_j. A diagram of the link is given in Figure <ref>. We define a local Cartesian coordinate system for the link, similar to that used in Section <ref>. The unit vector 𝐳̂ is assumed to be parallel to the link's axis and points into the link from 𝒩_i at z=0. The link extends to z= L_m, where it meets 𝒩_j such that L_m is the length of the link. 
To keep our calculations relatively simple, we assume that each link is weakly guiding with an effective refractive index n_m^eff = c√(ϵ^eff_mμ_0), uniform in the z direction and approximately constant over transverse cross sections. We also assume that each link supports a single mode, which, given the weak guiding assumption, has negligible z component and transverse fields that satisfy <cit.> 𝐡_m,t = Y^eff_m𝐳̂×𝐞_m,t, where Y^eff_m = √(ϵ^eff_m/μ_0) is the link's admittance. We can now derive individual normalization equations for 𝐞_m,t and 𝐡_m,t. Let ∂Ω^link_m be a transverse cross section of the link (for any value of z). Recalling Eq. (<ref>), we have 1 = 1/2∫_∂Ω^link_m(𝐞_m,t×𝐡^*_m,t)·𝐳̂ A = Y^eff_m/2∫_∂Ω^link_m[𝐞_m,t×(𝐳̂×𝐞_m,t^*)]·𝐳̂ A = Y^eff_m/2∫_∂Ω^link_m𝐞_m,t·𝐞^*_m,t A, from which it follows that 1/2∫_∂Ω^link_m𝐞_m,t·𝐞^*_m,t A = 1/Y^eff_m. Similarly, 1/2∫_∂Ω^link_m𝐡_m,t·𝐡^*_m,t A = (Y^eff_m)^2/2∫_∂Ω^link_m(𝐳̂×𝐞_m,t)·(𝐳̂×𝐞^*_m,t) A = (Y_m^^eff)^2/2∫_∂Ω^link_m𝐞_m,t·𝐞^*_m,t A= Y^eff_m. The fields within the link that arise due to illuminating the system through the p–th external link are given by 𝐄_mp = (O_ij,pe^iβ_m z + I_ij,pe^-iβ_m z)𝐞_m,t, 𝐇_mp = (O_ij,pe^iβ_m z - I_ij,pe^-iβ_m z)𝐡_m,t, where O_ij,p denotes the field amplitude that enters the link at z=0 from 𝒩_i and propagates towards 𝒩_j, and I_ij,p is the field amplitude that exits the link at z=0, entering 𝒩_i, having arrived from 𝒩_j. Note that the link index m defines the pair of indices ij uniquely. Integrating the field products over Ω^link amounts to employing the normalization conditions Eqs. (<ref>) and (<ref>) to handle the transverse coordinates and integrating the exponential functions with respect to z. Summing the results over m, we ultimately arrive at 2(ω)∫_Ω^link(ϵ U^e_qp + μ_0 U^m_qp) V= ∑_m [ O_ij,pO^*_ij,q(e^-2(ω)n^eff_mL_m/c_0 -1) + I_ij,pI^*_ij,q(1 - e^2(ω)n^eff_mL_m/c_0)], which expresses the integral in terms of the internal field components. Consider next evaluating the integral in Eq. (<ref>) over Ω^node_m. Suppose that the m–th node 𝒩_m is connected to a collection of other nodes 𝒩_j_1, 𝒩_j_2, with indices j_1, j_2, by a collection of links. The geometry of the node is depicted in Figure <ref>. In general, evaluating the integral requires a model for the fields within the node. For simplicity, we shall instead evaluate the integral indirectly. Recall that the integral in question originated from Eq. (<ref>), which was derived from a generalized Poynting theorem over the extent of the network. We can instead apply Poynting's theorem to Ω^node_m to obtain a similar equation to Eq. (<ref>) for the node integral in terms of the node scattering matrix 𝐒_m. Unlike in our derivation of Eq. (<ref>), however, we cannot use Eqs. (<ref>) and (<ref>) for the fields, but must instead use expressions for the internal network fields, similar to those in Eqs. (<ref>) and (<ref>). Repeating the derivation of Eq. (<ref>) with the correct field expressions, we obtain 2(ω)∫_Ω^node_m(ϵ U^e_qp + μ_0 U^m_qp) dV = 𝐢_mq^† (𝐒_m^†𝐒_m - 𝐈) 𝐢_mp, where 𝐢_mp = (I_mj_1,p, I_mj_2,p, )^T is a vector containing components of all of the fields that are incident upon the node when the network is illuminated via the p–th mode. 𝐢_mq = (I_mj_1,q, I_mj_2,q, )^T is defined similarly. Summing the result of Eq. (<ref>) over m gives the total integral over Ω^node. Consider now the second integral that appears in Eq. (<ref>), i.e. 𝐁_ξ = -∫_Ω[(ωϵ)ξ𝐔^e +ωξμ_0𝐔^m + 2i(ω)(ϵ𝐕_ξ^e +μ_0𝐕_ξ^m)] V. 
As before, we can assume that the integral over Ω^space is negligible. Evaluating the integral over the network's links can be done using analogous steps to those used in deriving Eq. (<ref>). The algebra is somewhat lengthy and there is little additional insight to be gained from a detailed presentation. The final result is -∫_Ω^link [(ωϵ)ξU^e_qp +ωξμ_0U^m_qp + 2i(ω)(ϵ V_ξ,qp^e +μ_0V_ξ,qp^m)] V = ∑_m((n^eff_mk_0)ξL_m(O_ij,pO^*_ij,qe^-2(ω)n^eff_mL_m/c_0 + I_ij,pI^*_ij,qe^2(ω)n^eff_mL_m/c_0) + (ω)/2(ω)n^eff_mξ/n^eff_m[O_ij,pI^*_ij,q(e^2i(ω)n^eff_mL_m/c_0 -1) + I_ij,pO^*_ij,q(1 - e^-2i(ω)n^eff_mL_m/c_0)] + i[O_ij,pξO^*_ij,q(1 - e^-2(ω)n^eff_mL_m/c_0) + I_ij,pξI^*_ij,q(e^2(ω)n^eff_mL_m/c_0-1)]). Evaluating the integral over the network's nodes can also be done indirectly by repeating the derivation of Eq. (<ref>), but over Ω_m^node using the internal fields at the node boundary. This time we obtain -∫_Ω^node_m[(ωϵ)ξU^e_qp +ωξμ_0U^m_qp + 2i(ω)(ϵ V_ξ,qp^e +μ_0V_ξ,qp^m)] V = 𝐢^†_mq(-i𝐒^†_m𝐒_mξ)𝐢_mp + i𝐢^†_mq(𝐈-𝐒^†_m𝐒_m)𝐢_mpξ, which can be summed over m to give the total integral over Ω^node. Finally, combining the results from Eqs. (<ref>), (<ref>), (<ref>) and (<ref>) allows us to compute the Wigner-Smith matrix 𝐐_ξ. 4 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Hjørungnes(2011)]SHjørungnes_2011 author author A. Hjørungnes, https://doi.org/10.1017/CBO9780511921490 title Complex-Valued Matrix Derivatives: With Applications in Signal Processing and Communications (publisher Cambridge University Press, year 2011)NoStop [Ahlfors(1979)]Sahlfors1979complex author author L. Ahlfors, @noop title Complex Analysis: An Introduction to The Theory of Analytic Functions of One Complex Variable (publisher McGraw-Hill Education, year 1979)NoStop [Geyi(2019)]SGeyi:19 author author W. Geyi, title title Stored electromagnetic field energies in general materials, https://doi.org/10.1364/JOSAB.36.000917 journal journal J. Opt. Soc. Am. B volume 36, pages 917–925 (year 2019)NoStop [Snyder and Love(2012)]Ssnyder2012optical author authorA. Snyder and author J. Love, https://doi.org/10.1007/978-1-4613-2813-1 title Optical Waveguide Theory (publisher Springer New York, year 2012)NoStop
http://arxiv.org/abs/2408.12385v1
20240822132641
Sharper Bounds for Chebyshev Moment Matching with Applications to Differential Privacy and Beyond
[ "Cameron Musco", "Christopher Musco", "Lucas Rosenblatt", "Apoorv Vikram Singh" ]
cs.DS
[ "cs.DS", "cs.LG" ]
Sharper Bounds for Chebyshev Moment Matching with Applications to Differential Privacy and Beyond Cameron Musco, Christopher Musco, Lucas Rosenblatt, Apoorv Vikram Singh August 26, 2024 =======================================================================
§ ABSTRACT We study the problem of approximately recovering a probability distribution given noisy measurements of its Chebyshev polynomial moments. We sharpen prior work, proving that accurate recovery in the Wasserstein distance is possible with more noise than previously known. As a main application, our result yields a simple “linear query” algorithm for constructing a differentially private synthetic data distribution with Wasserstein-1 error Õ(1/n) based on a dataset of n points in [-1,1]. This bound is optimal up to log factors and matches a recent breakthrough of Boedihardjo, Strohmer, and Vershynin [Probab. Theory. Rel., 2024], which uses a more complex “superregular random walk” method to beat an O(1/√(n)) accuracy barrier inherent to earlier approaches. We illustrate a second application of our new moment-based recovery bound in numerical linear algebra: by improving an approach of Braverman, Krishnan, and Musco [STOC 2022], our result yields a faster algorithm for estimating the spectral density of a symmetric matrix up to small error in the Wasserstein distance.
§ INTRODUCTION The problem of recovering a probability distribution (or its parameters) by “matching” noisy estimates of the distribution's moments goes back over 100 years to the work of Chebyshev and Pearson <cit.>. Moment matching continues to find a wide variety of applications, both in traditional statistical problems <cit.> and beyond. For example, moment matching is now widely used for solving eigenvalue estimation problems in numerical linear algebra and computational chemistry <cit.>. One powerful and general result on moment matching for distributions with bounded support is that the method directly leads to approximations with small error in the Wasserstein-1 distance (a.k.a. earthmover's distance). Concretely, given a distribution p supported on [-1,1],[The result easily extends to p supported on any finite interval by shifting and scaling the distribution to [-1,1]. For a general interval [a,b], matching k moments yields error O(|a - b|/k) in Wasserstein-1 distance.] any distribution q for which E_x∼ p[x^i] = E_x∼ q[x^i] for i = 1, …, k satisfies W_1(p,q) = O(1/k), where W_1 denotes the Wasserstein-1 distance <cit.>. I.e., to compute an ϵ-accurate approximation to p, it suffices to compute p's first O(1/ϵ) moments and to return any distribution q with the same moments. Unfortunately, the above result is highly sensitive to noise, and so is difficult to apply in the typical setting where, instead of p's exact moments, we only have access to estimates of the moments (e.g., computed from a sample). In particular, it can be shown that the accuracy of these estimates needs to be proportional to 1/2^k if we want to approximate p up to Wasserstein error O(1/k) <cit.>. In other words, distribution approximation is poorly conditioned with respect to the standard moments.
§.§ Chebyshev moment matching One way of avoiding the poor conditioning of moment matching is to move from the standard moments, E_x∼ p[x^i], to a better conditioned set of “generalized” moments. Specifically, significant prior work <cit.> leverages Chebyshev moments of the form E_x∼ p[T_i(x)], where T_i is the i^th Chebyshev polynomial of the first kind, defined as: T_0(x) = 1, T_1(x) = x, and T_i(x) = 2xT_i-1(x) - T_i-2(x) for i ≥ 2.
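As a small aside, this recurrence makes empirical Chebyshev moments cheap to compute. The following sketch (with an arbitrary illustrative sample) returns the first k moments E_x∼ p[T_j(x)] of the uniform distribution over a dataset in [-1,1].

import numpy as np

def chebyshev_moments(x, k):
    """Return [mean(T_1(x)), ..., mean(T_k(x))] for a sample x in [-1, 1],
    using the three-term recurrence T_j = 2 x T_{j-1} - T_{j-2}."""
    x = np.asarray(x, dtype=float)
    T_prev, T_curr = np.ones_like(x), x            # T_0(x) and T_1(x)
    moments = [T_curr.mean()]
    for _ in range(2, k + 1):
        T_prev, T_curr = T_curr, 2.0 * x * T_curr - T_prev
        moments.append(T_curr.mean())
    return np.array(moments)

sample = np.random.default_rng(0).uniform(-1.0, 1.0, size=1000)   # illustrative dataset
print(chebyshev_moments(sample, k=8))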
The Chebyshev moments are known to be less noise sensitive than the standard moments: instead of exponentially small error, Õ(1/k) additive error[Throughout, we let Õ(z) denote O(zlog^c(z)) for constant c.] in computing p's first k Chebyshev moments suffices to find a distribution that is O(1/k) close to p in Wasserstein distance (see, e.g., Lemma 3.1 in <cit.>). This fact has been leveraged to obtain efficient algorithms for distribution estimation in a variety of settings. For example, Chebyshev moment matching leads to O(n^2/poly(ϵ)) time algorithms for estimating the eigenvalue distribution (i.e., the spectral density) of an n× n symmetric matrix A to error ϵ‖A‖_2 in the Wasserstein distance <cit.>. Chebyshev moment matching has also been leveraged for differentially private synthetic data generation. In this setting, p is the uniform distribution over a dataset x_1, …, x_n. The goal is to find some q that approximates p, but in a differentially private way, which informally means that q cannot reveal too much information about any one data point, x_j <cit.>. Such a q can be used to generate private synthetic data that is representative of the original data. One approach to solving this problem is to compute p's Chebyshev moments, and then add noise, which is known to ensure privacy <cit.>. Then, one can find a distribution q that matches the noised moments. It has been proven that, for a dataset of size n, this approach yields a differentially private distribution q that is Õ(1/n^1/3) close to p in Wasserstein distance <cit.>.
§.§ Our contributions Despite the success of Chebyshev moment matching, including for the applications discussed above, there is room for improvement. For example, for private distribution estimation, alternative methods can achieve nearly-optimal error Õ(1/n) in Wasserstein distance for a dataset of size n <cit.>, improving on the Õ(1/n^1/3) bound known for moment matching. For eigenvalue estimation, existing moment matching methods obtain an optimal quadratic dependence on the matrix dimension n, but a suboptimal polynomial dependence on the accuracy parameter, ϵ <cit.>. The main contribution of this work is to resolve these gaps by proving a sharper bound on the accuracy with which the Chebyshev moments need to be approximated to recover a distribution to high accuracy in the Wasserstein distance. Formally, we prove the following: theoremmasterthm Let p,q be distributions supported on [-1,1]. For any positive integer k, if the distributions' first k Chebyshev moments satisfy ∑_j=1^k 1/j^2(E_x∼ p T_j(x) - E_x∼ q T_j(x))^2 ≤Γ^2, then, for an absolute constant c[Concretely, we prove a bound of 36/k + Γ, although we believe the constants can be improved, at least to 2π/k + Γ, and possibly further. See <Ref> for more discussion.], W_1(p,q) ≤ c/k + Γ. As a special case, (<ref>) holds if for all j ∈{1, …, k}, |E_x∼ p T_j(x) - E_x∼ q T_j(x)| ≤Γ·√(j/(1 + log k)). Throughout, we let log k denote the natural logarithm of k, i.e., the logarithm with base e. <Ref> characterizes the Chebyshev moment error required for a distribution q to approximate p in Wasserstein distance. The main requirement, (<ref>), involves a weighted ℓ_2 norm with weights 1/j^2, which reflects the diminishing importance of higher moments on the Wasserstein distance. Referring to (<ref>), we obtain a bound of W_1(p,q) ≤ O(1/k) as long as q's j^th moment differs from p's by Õ(√(j)/k).
In contrast, prior work requires error Õ(1/k) for all of the first k moments to ensure the same Wasserstein distance bound (Lemma 3.1, <cit.>). As a corollary of <Ref>, we obtain the following algorithmic result: Let p be a distribution supported on [-1,1]. Given estimates m̂_1, …, m̂_k satisfying ∑_j=1^k 1/j^2(E_x∼ p T_j(x) - m̂_j)^2 ≤Γ^2, <Ref> returns a distribution q with W_1(p,q) ≤ c'·(1/k + Γ) for a fixed constant c' in poly(k) time. <Ref> simply solves a linearly-constrained least-squares regression problem to find a distribution q supported on a sufficiently fine grid whose moments are nearly as close to those of p as m̂_1, …, m̂_k. We then obtain <Ref> by applying <Ref> to bound W_1(p,q). The linear constraints ensure that q is positive and sums to one (i.e., that it is a valid distribution). This problem is easily solved using off-the-shelf software: in our experiments, we use a solver from MOSEK <cit.>. Like prior work, our proof of <Ref> (given in <Ref>) relies on tools from polynomial approximation theory. In particular, we leverage a constructive version of Jackson's theorem on polynomial approximation of Lipschitz functions via “damped Chebyshev expansions” <cit.>. Lipschitz functions are closely related to approximation in Wasserstein distance through the Kantorovich-Rubinstein duality: W_1(p,q) = max_1-Lip f∫_-1^1 f(x)(p(x) - q(x))dx. In contrast to prior work, we couple Jackson's theorem with a tight “global” characterization of the coefficient decay in the Chebyshev expansion of a Lipschitz function. In particular, we prove that any 1-Lipschitz function f with Chebyshev expansion f = ∑_j=0^∞ c_j T_j has coefficients that satisfy ∑_j=1^∞ j^2 c_j^2 = O(1). Prior work only leveraged the well-known “local” decay property, that the j^th coefficient has magnitude bounded by O(1/j) <cit.>. This property is implied by our bound, but is much weaker.
§.§ Applications We highlight two concrete applications of <Ref>. Differentially Private Synthetic Data. Privacy-enhancing technologies seek to protect individuals' data without preventing learning from the data. For theoretical guarantees of privacy, differential privacy <cit.> has become the industry standard, having been used in massive data products like the US Census, and included as a core tenet of the recent Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence <cit.>. Concretely, we are interested in the ubiquitous notion of approximate differential privacy: A randomized algorithm 𝒜 is (ϵ,δ)-differentially private if, for all pairs of neighboring datasets X,X', and all subsets ℬ of possible outputs: Pr[𝒜(X)∈ℬ] ≤ e^ϵ·Pr[𝒜(X')∈ℬ] + δ. In our setting, a dataset X is a collection of n points in a bounded interval (without loss of generality, [-1,1]). Two datasets of size n are considered “neighboring” if all of their data points are equal except for one. Intuitively, <Ref> ensures that the output of 𝒜 is statistically indistinguishable from what the output would be if any one individual's data was replaced with something arbitrary. There exist differentially private algorithms for a wide variety of statistical tasks <cit.>. One task of primary importance is differentially private data synthesis. Here, the goal is to generate synthetic data in a differentially private way that matches the original dataset along a set of relevant statistics or distributional properties.
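One approach already mentioned above, and detailed further below, privatizes the empirical Chebyshev moments with the Gaussian mechanism. A minimal sketch of this `measure' step is given here; the flat, per-moment noise calibration shown is the textbook choice for satisfying the definition above and is not necessarily the exact calibration used in our analysis, which in fact tolerates noise on the j-th moment growing like √(j).

import numpy as np

def privatize_moments(moments, n, eps, delta, rng=None):
    """Release k empirical Chebyshev moments of an n-point dataset in [-1, 1] under
    (eps, delta)-differential privacy via the Gaussian mechanism (valid for eps < 1).
    Replacing one of the n points changes each moment by at most 2/n (|T_j| <= 1 on
    [-1, 1]), so the l2 sensitivity of the whole moment vector is at most 2*sqrt(k)/n."""
    rng = np.random.default_rng(rng)
    moments = np.asarray(moments, dtype=float)
    k = len(moments)
    sensitivity = 2.0 * np.sqrt(k) / n
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps   # standard Gaussian mechanism
    return moments + rng.normal(0.0, sigma, size=k)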
The appeal of private data synthesis is that, once generated, the synthetic data can be used for a wide variety of downstream tasks: a separate differentially private algorithm is not required for each potential use case. Many methods for private data synthesis have been proposed <cit.>. Such methods offer strong empirical performance and a variety of theoretical guarantees, e.g., that the generated synthetic data can effectively answer a fixed set of data analysis queries with high accuracy <cit.>. Recently, there has been interest in algorithms with more general statistical guarantees – e.g., guarantees that the synthetic data comes from a distribution close in statistical distance to the original data <cit.>. By leveraging <Ref>, we contribute the following result to this line of work: theoremsynthdata Let X = {x_1, …, x_n} be a dataset with each x_j∈ [-1,1]. Let p be the uniform distribution on X. For any ϵ,δ∈ (0,1), there is an (ϵ, δ)-differentially private algorithm based on Chebyshev moment matching that, in O(n) + poly(ϵ n) time, returns a distribution q satisfying, for a fixed constant c_1, E[W_1(p,q)] ≤ c_1 log(ϵ n)√(log(1/δ))/(ϵ n). Moreover, for any β∈ (0,1/2), W_1(p,q) ≤ c_1√(log(1/β) + log(ϵ n))·√(log(ϵ n)log(1/δ))/(ϵ n) with probability ≥ 1-β. The distribution q returned by the algorithm behind Theorem <ref> is represented as a discrete distribution on O(ϵ n) points in [-1,1], so can be sampled from efficiently to produce a synthetic dataset of arbitrary size. Typically, δ is chosen to be 1/poly(n), in which case <Ref> essentially matches a recent breakthrough result of Boedihardjo, Strohmer, and Vershynin <cit.>, who give an (ϵ,0)-differentially private method with expected Wasserstein-1 error O(log^3/2(n)/(ϵ n)), which is optimal up to logarithmic factors.[An Ω(1/(ϵ n)) lower bound on the expected Wasserstein error holds via standard `packing lower bounds', which imply that even the easier problem of privately reporting the mean value of a dataset supported on [-1,1] requires error Ω(1/(ϵ n)). See e.g., <cit.>, Theorem 3.] Like that method, we improve on a natural barrier of Õ(1/(ϵ√(n))) error that is inherent to “private histogram” methods for approximation in the Wasserstein-1 distance <cit.>. The result of <cit.> introduces a “superregular random walk” to directly add noise to x_1,…,x_n using a correlated distribution based on a Haar basis. Our method is simpler, more computationally efficient, and falls directly into the empirically popular Select, Measure, Project framework for differentially private synthetic data synthesis <cit.>. In particular, as detailed in <Ref>, we compute the Chebyshev moments of p, add independent noise to each moment using the standard Gaussian mechanism <cit.>, and then recover q matching these noisy moments. We verify the strong empirical performance of the method in <Ref>. A method similar to ours was analyzed in prior work <cit.>, although that work obtains a Wasserstein error bound of Õ(1/(ϵ n^1/3)). Our tighter connection between Chebyshev moment estimation and distribution approximation proven in <Ref> allows us to obtain a significantly better dependence on n. We note that <cit.> also claims a faster and simpler alternative to <cit.>. While the simplest method in that paper has error scaling with Õ(1/√(n)), they describe a more complex method that matches our Õ(1/n) result up to a log(n) factor.
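To make the Select, Measure, Project recipe concrete, the sketch below implements the `project' step as the linearly-constrained least-squares problem behind <Ref>: find a distribution q on a finite grid whose weighted Chebyshev moments are close to the privatized ones, subject to q being nonnegative and summing to one. The equispaced grid, grid size, 1/j weights, and use of the CVXPY modelling package are illustrative choices, not necessarily those of our implementation; the end-to-end example reuses the chebyshev_moments and privatize_moments helpers sketched earlier.

import numpy as np
import cvxpy as cp

def match_moments(m_hat, grid_size=400):
    """Return (grid, q): a distribution q on a grid in [-1, 1] minimizing the weighted
    moment error sum_j (1/j^2) * (E_q[T_j] - m_hat[j-1])^2 s.t. q >= 0 and sum(q) = 1."""
    k = len(m_hat)
    grid = np.linspace(-1.0, 1.0, grid_size)
    j = np.arange(1, k + 1)
    T = np.cos(np.outer(j, np.arccos(grid)))          # T[j-1, i] = T_j(grid[i])
    A, b = T / j[:, None], np.asarray(m_hat) / j      # fold the 1/j weights into the system
    q = cp.Variable(grid_size, nonneg=True)
    cp.Problem(cp.Minimize(cp.sum_squares(A @ q - b)), [cp.sum(q) == 1]).solve()
    q_val = np.clip(q.value, 0.0, None)
    return grid, q_val / q_val.sum()

# End-to-end toy run: privatize the moments of an illustrative dataset, project,
# then draw a synthetic dataset from the recovered distribution q.
rng = np.random.default_rng(0)
data = np.clip(rng.normal(0.2, 0.3, size=5000), -1.0, 1.0)
noisy = privatize_moments(chebyshev_moments(data, k=60), n=len(data), eps=1.0, delta=1e-6)
grid, q_hat = match_moments(noisy)
synthetic = rng.choice(grid, size=len(data), p=q_hat)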
While we are not aware of an implementation of the algorithm of <cit.>, empirically comparing alternative methods for generating synthetic data with Wasserstein distance guarantees would be a productive line of future work. Additionally, we note that, in concurrent work to ours, Feldman et al. study a stronger notion of instance optimal private distribution estimation in the Wasserstein distance <cit.>. It would be interesting to explore if Chebyshev moment matching has any applications in this setting. Matrix Spectral Density Estimation. Spectral density estimation (SDE) is a problem of central importance in numerical linear algebra. In the standard version of the problem, we are given a symmetric n× n matrix A, which has real-valued eigenvalues λ_1 ≥…≥λ_n. Letting p denote the uniform distribution over these n eigenvalues, the goal is to output q which is close to p in the Wasserstein distance. An approximate spectral density can be useful in determining a variety of properties of A's eigenvalue spectrum – e.g., if its eigenvalues are decaying rapidly or if they follow a distribution characteristic of random matrices. Efficient SDE algorithms were originally studied in computational physics and chemistry, and are widely used to compute the “density of states” of quantum systems <cit.>. More recently, the problem has found applications in network science <cit.>, deep learning <cit.>, optimization <cit.>, and beyond <cit.>. Many popular SDE algorithms are based on Chebyshev moment matching <cit.>. The i^th Chebyshev moment of the spectral density is equal to E_x∼ p T_i(x) = 1/n∑_j=1^n T_i(λ_j) = tr(1/n T_i(A)). This trace can be estimated using a small number of matrix-vector products with T_i(A), using stochastic trace estimation techniques like Hutchinson's estimator <cit.>. Since T_i is a degree-i polynomial, each matrix-vector product with T_i(A) requires just i products with A. Thus, with a small number of products with A, we can obtain approximate moments for use in estimating p. Importantly, this approach can be applied even in the common implicit setting, where we do not have direct access to the entries of A, but can efficiently multiply the matrix by vectors <cit.>. Recently, <cit.> gave a theoretical analysis of Chebyshev moment-matching for SDE, along with the related Kernel Polynomial Method <cit.>. They show that when n is sufficiently large, specifically, n = Ω̃(1/ϵ^2), then Õ(1/ϵ) matrix-vector products with A (and poly(1/ϵ) additional runtime) suffice to output q with W_1(p,q) ≤ϵ‖A‖_2, where ‖A‖_2 = max_i|λ_i| is A's spectral norm. While the result of <cit.> also holds for smaller values of n, it suffers from a polynomially worse 1/ϵ dependence in the number of matrix-vector products required. By leveraging <Ref>, we resolve this issue, showing that Õ(1/ϵ) matrix-vector products suffice for any n. Roughly, by weakening the requirements on how well we approximate A's spectral moments, <Ref> allows us to decrease the accuracy with which moments are estimated, and thus the number of matrix-vector products used by Hutchinson's method. Formally, we prove: theoremcorsde There is an algorithm that, given ϵ∈ (0,1), symmetric A∈ℝ^n× n with spectral density p, and upper bound[The power method can compute S satisfying ‖A‖_2 ≤ S ≤ 2‖A‖_2 using O(log n) matrix-vector products with A and O(n) additional runtime <cit.>. In some settings, an upper bound on ‖A‖_2 may be known a priori <cit.>.]
S ≥A_2, uses Õ(1/) matrix-vector products[Formally, we prove a bound of min{n, O(1/) · (1+log^2(1/) log^2(1/(δ))/n )} matrix-vector products to succeed with probability 1-δ. For constant δ, this is at worst O(log^4(1/)/), but actually O(1/) for all = Ω(n/log^4 n).] with A and Õ(n/ϵ + 1/^3) additional time to output a distribution q such that, with high probability, W_1(p,q) ≤ S. In the case when A is dense, <Ref> yields an algorithm that runs in Õ(n^2/ + 1/^3) time, which can be much faster than the O(n^ω) time required to compute p directly via a full eigendecomposition. In terms of matrix-vector products, the result cannot be improved by more than logarithmic factors. In particular, a recent lower bound on estimating the trace of a positive definite matrix <cit.> implies that Ω(1/) matrix-vector products with A are necessary to approximate the spectral density p up to error ϵA_2 (see <Ref> for details). Thus, <Ref> resolves, up to logarithmic factors, the complexity of the SDE problem in the “matrix-vector query model” of computation, where cost is measured via matrix-vector products with A. Understanding this model has become a core topic in theoretical work on numerical linear algebraic, as it generalizes other important models like the matrix sketching and Krylov subspace models <cit.>. Our work contributes to recent progress on establishing tight upper and lower bounds for central problems like linear system solving <cit.>, eigenvector approximation <cit.>, trace estimation <cit.>, and more <cit.>. § PRELIMINARIES Before our main analysis, we introduce notation and technical preliminaries. Notation. We let denote the natural numbers and denote the positive integers. For a vector x∈^k, we let x_2 = √(∑_i=1^k x_i^2) denote the Euclidean norm. We often work with functions from [-1,1] →. For two such functions, f,g, we use the convenient inner product notation: f, g∫_-1^1 f(x) g(x) dx. We will often work with products, quotients, sums, and differences of two functions f,g, which are denoted by f· g, f/g, f+g, and f-g, respectively. E.g., [f· g](x) = f(x)g(x). For a function f: [-1,1] →, we let f_∞ denote f_∞ = max_x∈ [-1,1] |f(x)| and f_1 = ∫_-1^1 |f(x)| dx. Wasserstein Distance. This paper concerns the approximation of probability distributions in the Wasserstein-1 distance, defined below. Let p and q be two distributions on . Let Z(p,q) be the set of all couplings between p and q, i.e., the set of distributions on × whose marginals equal p and q. Then the Wasserstein-1 distance between p and q is: W_1(p,q) = inf_z∈ Z(p,q)[ _(x,y) ∼ z |x - y|]. The Wasserstein-1 distance measures the total cost (in terms of distance per unit mass) required to “transport” the distribution p to q. Alternatively, it has a well-known dual formulation: [Kantorovich-Rubinstein Duality] Let p,q be as in <Ref>. Then W_1(p,q) = sup_1-Lipschitz f⟨ f, p-q⟩, where f: → is 1-Lipschitz if |f(x) - f(y)| ≤ |x-y| for all x,y ∈. Above we slightly abuse notation and use p and q to denote (generalized) probability density functions[p and q might correspond to discrete distributions, in which case they will be sums of Dirac delta functions.] instead of the distributions themselves. We will do so throughout the paper. In our analysis, it will be convenient to assume when applying Fact <ref> that, in addition to being 1-Lipschitz, f is smooth, i.e. that it is infinitely differentiable. Since any Lipschitz function can be arbitrarily well approximated by a smooth function, we can do so without changing the distance. 
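As an aside (our own illustration, not part of the original text): for distributions supported on [-1,1], W_1 can also be evaluated directly, since in one dimension it equals the integral of the absolute difference between the two cumulative distribution functions. A minimal NumPy sketch for discrete distributions, with function names of our own choosing:

import numpy as np

def discrete_cdf(points, weights, grid):
    # F(t) = total weight of atoms located at or below t
    order = np.argsort(points)
    cum = np.cumsum(np.asarray(weights, dtype=float)[order])
    idx = np.searchsorted(np.asarray(points, dtype=float)[order], grid, side="right")
    return np.where(idx > 0, cum[np.maximum(idx - 1, 0)], 0.0)

def wasserstein1(xs, px, ys, py):
    # W_1 between the discrete distributions (xs, px) and (ys, py) on [-1, 1]
    grid = np.unique(np.concatenate([xs, ys]))
    gap = np.abs(discrete_cdf(xs, px, grid) - discrete_cdf(ys, py, grid))
    return float(np.sum(gap[:-1] * np.diff(grid)))

# example: uniform on {-0.5, 0.5} versus a point mass at 0 has W_1 = 0.5
assert np.isclose(wasserstein1(np.array([-0.5, 0.5]), np.array([0.5, 0.5]),
                               np.array([0.0]), np.array([1.0])), 0.5)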
In particular, for distributions on [-1,1] we have: W_1(p,q) = sup_1-Lipschitz, smooth f⟨ f, p-q⟩. Chebyshev Polynomials and Chebyshev Series. Our main result analyzes the accuracy of (noisy) Chebyshev polynomial moment matching for distribution approximation. The Chebyshev polynomials are defined in <Ref>, and can alternatively be defined on [-1,1] via the trigonometric definition, T_j(cosθ) = cos(j θ). We use a few basic properties about these polynomials. [Boundedness and Orthogonality, see e.g. <cit.>] The Chebyshev polynomials satisfy: * Boundedness: ∀ x ∈ [-1,1] and j ∈, T_j(x)≤ 1. * Orthogonality: The Chebyshev polynomials are orthogonal with respect to the weight function w(x) = 1/√(1 - x^2). In particular, for i, j ∈, i≠ j, T_i· w, T_j = 0. To obtain an orthonormal basis we also define the normalized Chebyshev polynomials as follows: The j^th normalized Chebyshev polynomial, _j, is defined as _j T_j / √(T_j · w, T_j). Note that T_j · w , T_j equals π for j = 0 and π/2 for j ≥ 1. We define the Chebyshev series of a function f:[-1,1] → as ∑_j=0^∞f· w, _j_j. If f is Lipschitz continuous then the Chebyshev series of f converges absolutely and uniformly to f <cit.>. Throughout this paper, we will also write the Chebyshev series of generalized probability density functions, which could involve Dirac delta functions. This is standard in Fourier analysis, even though the Chebyshev series does not converge pointwise <cit.>. Formally, any density p can be replaced with a Lipschitz continuous density (which has a convergent Chebyshev series) that is arbitrarily close in Wasserstein distance and the same analysis goes through. § MAIN ANALYSIS In this section, we prove our main result, <Ref>, as well as <Ref>. To do so, we require two main ingredients. The first is a constructive version of Jackson's theorem on polynomial approximation of Lipschitz functions <cit.>. A modern proof can be found in <cit.>. [Jackson's Theorem <cit.>] Let f : [-1,1] → be an ℓ-Lipschitz function. Then, for any k ∈, there are k+1 constants 1 = b_k^0 > … > b_k^k ≥ 0 such that the polynomial f_k = ∑_j=0^k b_k^j · f · w, _j ·_j satisfies f-f_k_∞≤ 18 ℓ/k. It is well-known that truncating the Chebyshev series of an ℓ-Lipschitz function f to k terms leads to error O(log k·ℓ/k) in the ℓ_∞ distance <cit.>. The above version of Jackson's theorem improves this bound by a log k factor by instead using a damped truncated Chebyshev series: each term in the series is multiplied by a positive scaling factor between 0 and 1. We will not need to compute these factors explicitly, but b_k^i has a simple closed form (see <cit.>). To bound the Wasserstein distance between distributions p,q, we need to upper bound ⟨ f, p-q⟩ for every 1-Lipschitz f. The value of <Ref> is that this inner product is closely approximated by ⟨ f_k, p-q⟩. Since f_k is a damped Chebyshev series, this inner product can be decomposed as a difference between p and q's Chebyshev moments. Details will be shown in the proof of <Ref>. The second ingredient we require is a stronger bound on the decay of the Chebyshev coefficients, f · w, _j, which appear in <Ref>. In particular, we prove the following result: Let f : [-1,1] → be an ℓ-Lipschitz, smooth function, and let c_j f · w,_j for j ∈. Then, ∑_j=1^∞ (j c_j)^2 ≤π/2ℓ^2. <Ref> implies the well-known fact that c_j = O(ℓ/j) for j ≥ 1 <cit.>. However, it is a much stronger bound: if all we knew was that the Chebyshev coefficients are bounded by O(ℓ/j), then ∑_j=1^∞ (j c_j)^2 could be unbounded.
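(As a quick numerical aside of our own, not part of the paper's argument, one can estimate the normalized Chebyshev coefficients of a concrete smooth 1-Lipschitz function by Gauss-Chebyshev quadrature and check the weighted sum in the lemma directly:

import numpy as np

f = np.sin                                           # smooth and 1-Lipschitz on [-1, 1]
N = 4000
nodes = np.cos((2 * np.arange(1, N + 1) - 1) * np.pi / (2 * N))

def c(j):
    # c_j = <f * w, T~_j>, estimated by Gauss-Chebyshev quadrature for the weight w
    return np.sqrt(2 / np.pi) * (np.pi / N) * np.sum(f(nodes) * np.cos(j * np.arccos(nodes)))

print(sum((j * c(j)) ** 2 for j in range(1, 80)), "<=", np.pi / 2)

For f = sin and ℓ = 1 the printed sum stays below π/2, as the lemma requires.)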
We show that it can in fact be bounded by O(ℓ^2). Informally, the implication is that not all coefficients can saturate the “local” O(ℓ/j) constraint at the same time, but rather obey a stronger global constraint, captured by a weighted ℓ_2 norm of the coefficients. §.§ Proof of Theorem <ref> We prove <Ref> in <Ref>. Before doing so, we show how it implies <Ref>. By (<ref>), to bound W_1(p,q), it suffices to bound ⟨f, p-q⟩ for any 1-Lipschitz, smooth f. Let f_k be the approximation to any such f guaranteed by <Ref>. We have: f, p-q = f_k, p-q + f-f_k, p-q ≤f_k, p-q + f - f_k_∞p-q_1 ≤f_k, p-q + 36/k. In the last step, we use that f - f_k_∞≤ 18/k by <Ref>, and that p-q_1 ≤p_1 + q_1 = 2. So, to bound f, p-q we turn our attention to bounding f_k, p-q. For technical reasons, we will assume from here on that p and q are supported on the interval [-1+δ,1-δ] for arbitrarily small δ→ 0. This is to avoid an issue with the Chebyshev weight function w(x) = 1/√(1-x^2) going to infinity at x =-1,1. The assumption is without loss of generality, since we can rescale the support of p and q by a (1-δ) factor, and the distributions' moments and Wasserstein distance change by an arbitrarily small factor as δ→ 0. We proceed by writing the Chebyshev series of the function (p-q)/w: p-q/w = ∑_j=0^∞p-q/w· w, _j _j = ∑_j=0^∞⟨ p-q,_j⟩·_j = ∑_j=1^∞⟨ p-q,_j⟩·_j. In the last step we use that both p and q are distributions so p-q, _0 = 1/π-1/π = 0. Next, recall from <Ref> that f_k = ∑_j=0^k c_j' _j, where each c_j' satisfies |c_j'| ≤ |c_j| for c_j ⟨ f · w, _j⟩. Using (<ref>), the fact that ⟨_i· w, _j⟩ = 0 whenever i≠ j, and that ⟨_j· w, _j⟩ = 1 for all j, we have: f_k, p-q = f_k· w, p-q/w = ∑_j=0^k c_j' _j · w, ∑_j=1^∞⟨ p-q,_j⟩_j = ∑_j=1^k c_j' ·⟨ p-q,_j⟩. Via Cauchy-Schwarz inequality and our global decay bound from <Ref>, we then have: f_k, p-q = ∑_j=1^k j c_j' ·⟨ p-q,_j⟩/j ≤(∑_j=1^k (jc_j')^2)^1/2·( ∑_j=1^k 1/j^2⟨ p-q,_j⟩^2)^1/2 ≤(∑_j=1^k (j c_j)^2)^1/2·( ∑_j=1^k 1/j^2⟨ p-q,_j⟩^2)^1/2 ≤√(π/2)( ∑_j=1^k 1/j^2⟨ p-q,_j⟩^2)^1/2. Observing from <Ref> that ⟨ p-q,_j⟩/√(π/2) is exactly the difference between the j^th Chebyshev moments of p and q, we can apply the assumption of the theorem, (<ref>), to upper bound (<ref>) by Γ. Plugging this bound into <Ref>, we conclude the main bound of <Ref>: W_1(p,q) = sup_1-Lipschitz, smooth f⟨ f, p-q⟩≤Γ + 36/k. We note that the constants in the above bound can likely be improved. Notably, the 36 comes from multiplying the factor of 18 in <Ref> by 2. As discussed in <cit.>, strong numerical evidence suggests that this 18 can be improved to π, leading to a bound of Γ + 2π/k. Finally, we comment on the special case in (<ref>). If | _x∼ pT_j(x) - _x∼ qT_j(x)| = |⟨ p-q,_j⟩| / √(π/2)≤Γ·√(j/1 + log k) for all j then we have that ∑_j=1^k 1/j^2⟨ p-q,T_j⟩^2 ≤Γ^2/1 + log k∑_j=1^k 1/j≤Γ^2. §.§ Efficient recovery The primary value of <Ref> for our applications is that, given sufficiently accurate estimates, m̂_1, …, m̂_k, of p's Chebyshev moments, we can recover a distribution q that is close in Wasserstein-1 distance to p, even if there is no distribution whose moments exactly equal m̂_1, …, m̂_k. This claim is formalized in <Ref>, whose proof is straightforward. We outline the main idea here. Recall the condition of the corollary, that ∑_j=1^k 1/j^2(m̂_j - ⟨ p, _j⟩)^2 ≤Γ^2. Now, suppose we could solve the optimization problem: q^* = _distributions q on [-1,1] ∑_j=1^k 1/j^2(m̂_j - ⟨ q, _j⟩)^2. 
Then by triangle inequality we would have: (∑_j=1^k 1/j^2(⟨ p, _j⟩ - ⟨ q^*, _j⟩)^2)^1/2 ≤(∑_j=1^k 1/j^2(m̂_j - ⟨ q^*, _j⟩)^2)^1/2 + (∑_j=1^k 1/j^2(m̂_j - ⟨ p, _j⟩)^2)^1/2 ≤ 2(∑_j=1^k 1/j^2(m̂_j - ⟨ p, _j⟩)^2)^1/2≤ 2 Γ. It then follows immediately from <Ref> that W_1(p,q^*) ≤ O(1/k + Γ), as desired. The only catch with the argument above is that we cannot efficiently optimize over the entire set of distributions on [-1,1]. Instead, we have to optimize over a sufficiently fine discretization. Specifically, we consider discrete distributions on a finite grid, choosing the Chebyshev nodes (of the first kind) instead of a uniform grid because doing so yields a better approximation, and thus allows for a coarser grid. Concretely, <Ref> is proven by analyzing <Ref>. The full analysis is given in <Ref>. We note that the optimization problem solved by <Ref> is a simple linearly constrained quadratic program with g = O(k^1.5) variables and O(k^1.5) constraints, so can be solved to high accuracy in (k) time using a variety of methods <cit.>. In practice, the problem can also be solved efficiently using first-order methods like projected gradient descent <cit.>. §.§ Proof of Lemma <ref> We conclude this section by proving <Ref>, our global decay bound on the Chebyshev coefficients of a smooth, Lipschitz function, which was key in the proof of <Ref>. To do so we will leverage an expression for the derivatives of the Chebyshev polynomials of the first kind in terms of the Chebyshev polynomials of the second kind, which can be defined by the recurrence U_0(x) = 1 U_1(x) = 2x U_i(x) = 2xU_i-1(x) - U_i-2(x), for i ≥ 2. We have the following standard facts (see e.g., <cit.>). [Chebyshev Polynomial Derivatives] Let T_j be the j^th Chebyshev polynomial of the first kind, and U_j be the j^th Chebyshev polynomial of the second kind. Then, for j ≥ 1, T_j'(x) = j U_j-1(x). [Orthogonality of Chebyshev polynomials of the second kind] The Chebyshev polynomials of the second kind are orthogonal with respect to the weight function u(x) = √(1-x^2). In particular, ∫_-1^1 U_i(x) U_j(x) u(x) dx = 0, for i ≠ j π/2 , for i = j With the above facts we can now prove <Ref>. Let f be a smooth, ℓ-Lipschitz function, with Chebyshev expansion f(x) = ∑_j=0^∞ c_j _j = 1/√(π) c_0 T_0 + ∑_j=1^∞√(2/π) c_j T_j. Using <Ref>, we can write f's derivative as: f'(x) = ∑_j=1^∞√(2/π) c_j T_j'(x) = √(2/π)∑_j=1^∞ j c_j U_j-1(x) By the orthogonality property of <Ref>, we then have that ∫_-1^1 f'(x) f'(x) u(x) dx = 2/π∑_j=1^∞ j^2 c_j^2 π/2 = ∑_j=1^∞ j^2 c_j^2. Further, using that f is ℓ-Lipschitz and so |f'(x)| ≤ℓ, and that the weight function u(x) = √(1-x^2) is non-negative, we can upper bound this sum by ∑_j=1^∞ j^2 c_j^2 = ∫_-1^1 f'(x) f'(x) u(x) dx ≤ℓ^2 ∫_-1^1 u(x) dx = πℓ^2/2. This completes the proof of the lemma. § PRIVATE SYNTHETIC DATA In this section, we present an application of our main result to differentially private synthetic data generation. We recall the setting from <Ref>: we are given a dataset X = {x_1, …, x_n}, where each x_i ∈ [-1,1], and consider the distribution p that is uniform on X. The goal is to design an (,δ)-differentially private algorithm that returns a distribution q that is close to p in Wasserstein distance. For the purpose of defining differential privacy (see Def. <ref>), we consider the “bounded” notation of neighboring datasets, which applies to datasets of the same size <cit.>. 
Concretely, X = {x_1, …, x_n} and X' = {x_1', …, x_n'} are neighboring if x_i ≠ x_i' for exactly one value of i.[Although a bit tedious, our results can be extended to the “unbounded” notation of neighboring datasets, where X and X' might differ in size by one, i.e., because X' is created by adding or removing a single data point from X.] To solve this problem, we will compute the first n Chebyshev moments of p, then add noise to those moments using the standard Gaussian mechanism. Doing so ensures that the noised moments are (, δ)-differentially private. We then post-process the noised moments (which does not impact privacy) by finding a distribution q that matches the moments. The analysis of our approach follows directly from <Ref>, although we use a slightly different method for recovering q than suggested in our general <Ref>: in the differential privacy setting, we are able to obtain a moderately faster algorithm that solves a regression problem involving O(n) variables instead of O(n^1.5). Before analyzing this approach, we introduce preliminaries necessary to apply the Gaussian mechanism. In particular, applying the mechanism requires bounding the ℓ_2 sensitivity of the function mapping a distribution p to its Chebyshev moments. This sensitivity is defined as follows: Let 𝒳 be some data domain (in our setting, 𝒳 = [-1,1]^n) and let f: 𝒳→ℝ^k be a vector valued function. The ℓ_2-sensitivity of f, Δ_2,f, is defined as: Δ_2,fX,X'∈𝒳max_neighboring datasets f(X) - f(X') _2. The Gaussian mechanism provides a way of privately evaluating any function f with bounded ℓ_2 sensitivity by adding a random Gaussian vector with appropriate variance. Let 𝒩(0, σ^2 I_k) denote a vector of k i.i.d. mean zero Gaussians with variance σ^2. We have the following well-known result: [Gaussian Mechanism <cit.>] Let f : 𝒳→ℝ^k be a function with ℓ_2-sensitivity Δ_2,f and let σ^2 = Δ_2,f^2 · 2ln(1.25 / δ) / ^2, where ,δ∈ (0,1) are privacy parameters. Then the mechanism ℳ = f(X) + η, where η∼𝒩(0, σ^2 I_k) is (, δ)-differentially private. We are now ready to prove the main result of this section, <Ref>, which follows by analyzing <Ref>. Note that <Ref> is very similar to <Ref>, but we first round our distribution to be supported on a uniform grid, 𝒢. Doing so will allow us to solve our moment regression problem over the same grid, which is smaller than the set of Chebyshev nodes used in <Ref>. We analyze both the privacy and accuracy of <Ref>. Privacy. For a dataset X = {x_1, …, x_n}∈ [-1,1]^n, let f(X) be a vector-valued function mapping to the first k = 2 n (as set in <Ref>) scaled Chebyshev moments of the uniform distribution over X. I.e., f(X) = [ 1·1/n∑_i=1^n _1(x_i); 1/√(2)·1/n∑_i=1^n _2(x_i); ⋮; 1/√(k)·1/n∑_i=1^n _k(x_i) ] By <Ref>, max_x_i ∈ [-1,1] |_j(x_i)| ≤√(2/π) for j ∈, so we have: Δ_2,f^2 = X,X'∈𝒳max_neighboring datasets f(X) - f(X') _2^2 ≤∑_j=1^k1/jn^2·8/π≤8/π n^2(1 + log k). For two neighboring datasets X,X', let X̃ and X̃' be the rounded datasets computed in line 2 of <Ref> – i.e., X̃ = {x̃_1, …, x̃_n}. Observe that X̃ and X̃' are also neighboring. Thus, it follows from <Ref> and the sensitivity bound of <ref> that m̃ = f(X̃) + η is (,δ)-differentially private for η∼𝒩(0,σ^2 I_k) as long as σ^2 = 16/π(1 + log k)ln(1.25 / δ) / (n^2^2). Finally, observe that m̂_j computed by <Ref> is exactly equal to √(j) times the j^th entry of such an m̃. So m̂_1, …, m̂_k are (,δ)-differentially private. 
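To make the Select and Measure steps just described concrete, the following is a minimal NumPy sketch of the noisy-moment computation (our own code, with our own function name; the Project step, i.e. recovering a distribution that matches these noisy moments, is the optimization problem discussed in the previous sections):

import numpy as np

def private_chebyshev_moments(x, eps, delta, seed=None):
    # Round the data to multiples of 1/(eps*n), compute the scaled normalized Chebyshev
    # moments, perturb them with the Gaussian mechanism, and rescale by sqrt(j).
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    k = max(1, int(2 * eps * n))                                    # default choice k = 2*eps*n
    x_round = np.clip(np.round(x * eps * n) / (eps * n), -1.0, 1.0)
    theta = np.arccos(x_round)
    sigma = np.sqrt((16 / np.pi) * (1 + np.log(k)) * np.log(1.25 / delta)) / (eps * n)
    m_hat = np.empty(k)
    for j in range(1, k + 1):
        moment_j = np.sqrt(2 / np.pi) * np.mean(np.cos(j * theta))     # (1/n) * sum_i T~_j(x_i)
        m_hat[j - 1] = moment_j + np.sqrt(j) * rng.normal(0.0, sigma)  # noise with variance j * sigma^2
    return m_hat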
Since the remainder of <Ref> simply post-processes m̂_1, …, m̂_k without returning to the original data X, the output of the algorithm is also (,δ)-differentially private, as desired. While we choose k = 2 n by default, any choice of k = c n for constant c suffices to obtain the bound of <Ref>. Similarly, the grid spacing in 𝒢 can be made finer or coarser by a multiplicative constant. A larger k or a finer grid will lead to a slightly more accurate result at the cost of a slower algorithm. We chose defaults so that any error introduced from the grid and choice of k is swamped by error incurred from the noise added in Line 4. I.e., the error cannot be improved by more than a factor of two with different choices. See the proof of <Ref> for more details. Accuracy. <Ref> begins by rounding the dataset X so that every data point is a multiple of 1/ n. Let p̃ be the uniform distribution over the rounded dataset X̃. Then it is not hard to see from the transportation definition of the Wasserstein-1 distance that: W_1(p,p̃) ≤1/2 n. In particular, we can transport p to p̃ by moving every unit of 1/n probability mass a distance of at most 1/2 n. Given (<ref>), it will suffice to show that <Ref> returns a distribution q that is close in Wasserstein distance to p̃. We will then apply triangle inequality to bound W_1(p,q). To show that <Ref> returns a distribution q that is close to p̃ in Wasserstein distance, we begin by bounding the moment estimation error: E ∑_j=1^k1/j^2(m̂_j(p) - ⟨p̃, T_j ⟩)^2, where k is as chosen in <Ref> and ⟨p̃, T_j ⟩ = 1/n∑_i=1^n T_j(x̃_i). Let σ^2 and η_1, …, η_k be as in <Ref>. Applying linearity of expectation, we have that: [E] = [∑_j=1^k1/j^2η_j^2] = ∑_j=1^k1/j^2[η_j^2] = ∑_j=1^k1/j^2· jσ^2 ≤ (1+log k)σ^2. Now, let q be as in <Ref>. Using a triangle inequality argument as in <Ref>, we have: Γ^2 = ∑_j=1^k1/j^2(⟨q, T_j ⟩ - ⟨p̃, T_j ⟩)^2 ≤∑_j=1^k1/j^2(⟨q, T_j ⟩ - m̂_j)^2 + ∑_j=1^k1/j^2(⟨p̃, T_j ⟩ - m̂_j)^2 ≤ 2E. Above we use that p̃ is a feasible solution to the optimization problem solved in <Ref> and, since q is the optimum, ∑_j=1^k1/j^2(⟨q, T_j ⟩ - m̂_j)^2 ≤∑_j=1^k1/j^2(⟨p̃, T_j ⟩ - m̂_j)^2. It follows that [Γ^2] ≤ 2[E], and, via Jensen's inequality, that [Γ] ≤√(2[E]). Plugging into <Ref>, we have for constant c: [W_1(p̃,q)] ≤[Γ] + c/k ≤√(2 (1+logk) σ^2) + c/k = log ( n) √(log(1/δ))/ n By triangle inequality and (<ref>), W_1(p,q) ≤ W_1(p̃,q) + W_1(p̃,p) ≤ W_1(p̃,q) + 1/2 n. Combined with the bound above, this proves the accuracy claim of the theorem. Recall from <Ref> that the constant c in <Ref> is bounded by 36, but can likely be replaced by 2π, in which case it can be checked that the c/k term in (<ref>) will be dominated by the √(2 (1+logk) σ^2) term for our default of k = 2 n in <Ref>. However, any choice k = Θ(ϵ n) suffices to prove the theorem. We also remark that our bound on the expected value of W_1(p̃,q) can also be shown to hold with high probability. See <Ref> for details. We conclude by noting that, as in our analysis of <Ref> (see <Ref>), <Ref> requires solving a linearly constrained quadratic program with r = 2 n + 1 variables and r+1 constraints, which can be done to high accuracy in ( n) time. § SPECTRAL DENSITY ESTIMATION In this section, we present a second application of our main result to the linear algebraic problem of Spectral Density Estimation (SDE).
We recall the setting from <Ref>: letting p be the uniform distribution over the eigenvalues given λ_1 ≥…≥λ_n of a symmetric matrix A ∈^n × n, the goal is to find some distribution q that satisfies W_1(p,q) ≤A_2. In many settings of interest, A is implicit and can only be accessed via matrix-vector multiplications. So, we want to understand 1) how many matrix-vector multiplications with A are required to achieve (<ref>), and 2) how efficiently can we achieve (<ref>) in terms of standard computational complexity. We show how to obtain improved answers to these questions by using our main result, <Ref>, to give a tighter analysis of an approach from <cit.>. Like other SDE methods, that approach uses stochastic trace estimation to estimate the Chebyshev moments of p. In particular, let m_1, …, m_k denote the first k Chebyshev moments. I.e., m_j = 1/n∑_i=1^n T_j(λ_i). Then we have for each j, m_j = 1/n∑_i=1^n T_j(λ_i) = 1/n(T_j(A)), where is the matrix trace. Stochastic trace estimation methods like Hutchinsons method can approximate (T_j(A)) efficiently via multiplication of T_j(A) with random vectors <cit.>. In particular, for any vector g∈^n with mean 0, variance 1 entries, we have that: [g^T T_j(A)g] = (T_j(A)). T_j(A)g, and thus g^T T_j(A)g, can be computed using j matrix-vector products with A. In fact, by using the Chebyshev polynomial recurrence, we can compute g^T T_j(A)g for all j = 1, …, k using k total matrix-vector products: T_0(A) g = g T_1(A) g = Ag … T_j(A) g = 2 A T_j-1(A) g - T_j-2(A)g. Optimized methods can actually get away with ⌈ k/2 ⌉ matrix-vector products <cit.>. Using a standard analysis of Hutchinson's trace estimator (see, e.g., <cit.> or <cit.>) <cit.> prove the following: Let A be a matrix with A_2 ≤ 1. Let C be a fixed constant, j∈, α, γ∈ (0,1), and ℓ_j = ⌈ 1 + Clog^2(1/α)/n j γ^2⌉. Let g_1,…,g_ℓ_j∼Uniform(-1,1^n) and let m̂_j = 1/ℓ_j n∑_i=1^ℓ_j g_i^⊤ T_j(A) g_i. Then, with probability 1-α, m̂_j-m_j≤√(j)γ. We combine this lemma with <Ref> to prove the following more precise version of <Ref>: There is an algorithm that, given ∈ (0,1), symmetric A∈^n× n with spectral density p, and upper bound S ≥A_2, uses min{n, O(1/· (1+log^2(1/) log^2(1/(δ))/n ))} matrix-vector products with A and Õ(n/ϵ + 1/^3) additional time to output a distribution q such that, with probability at least 1-δ, W_1(p,q) ≤ S. First note that, if ϵ≤ 1/n, the above result can be obtained by simply recovering A by multiplying by all n ≤ 1/ϵ standard basis vectors. We can then compute a full eigendecomposition to extract A's spectral density, which takes o(n^3) time. So we focus on the regime when ϵ > 1/n. Without loss of generality, we may assume from here forward that A_2 ≤ 1 and our goal is to prove that W_1(p,q) ≤ϵ. In particular, we can scale A by 1/S, compute an approximate spectral density q with error , then rescale by S to achieve error S. As mentioned in <Ref>, an S satisfying A_2 ≤ S ≤ 2A_2 can be computed using O(log n) matrix-multiplications with A via the power method <cit.>. Given such an S, <Ref> implies an error bound of 2ϵA_2. In some settings of interest for the SDE problem, for example when A is the normalized adjacency matrix of a graph <cit.>, A_2 is known a priori, so we can simply set S = A_2. Choose k = ĉ/ for a sufficiently large constant ĉ and apply <Ref> for all j = 1, …, k with γ = 1/k√(1+log k), and α = δ/k. By a union bound, we obtain estimates m̂_1, …, m̂_k satisfying, for all j, |m̂_j - m_j| ≤√(j)γ = √(j)·1/k√(1+log k). 
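(As an aside, here is a minimal NumPy sketch, our own rather than the paper's code, of how such moment estimates are produced by combining Hutchinson's estimator with the Chebyshev recurrence; for simplicity it uses the same number of probe vectors for every j, whereas the lemma above lets the number ℓ_j shrink with j:

import numpy as np

def estimate_chebyshev_moments(A, k, num_vecs, seed=None):
    # Estimates m_j = (1/n) tr(T_j(A)) for j = 1..k with Hutchinson's estimator.
    # T_j(A)g is built from the recurrence T_j(A)g = 2A T_{j-1}(A)g - T_{j-2}(A)g,
    # so each probe vector costs only k matrix-vector products with A.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    est = np.zeros(k)
    for _ in range(num_vecs):
        g = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe vector
        t_prev, t_curr = g, A @ g                # T_0(A)g and T_1(A)g
        est[0] += g @ t_curr
        for j in range(2, k + 1):
            t_prev, t_curr = t_curr, 2 * (A @ t_curr) - t_prev
            est[j - 1] += g @ t_curr
    return est / (num_vecs * n)

The resulting vector plays the role of the estimates m̂_1, …, m̂_k above.)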
Applying <Ref> (specifically, (<ref>)) and <Ref>, we conclude that, using these moments, <Ref> can recover a distribution q satisfying: W_1(p,q) ≤2c'/k. I.e., we have W_1(p,q) ≤ as long as ĉ≥ 2c'. This proves the accuracy bound. We are left to analyze the complexity of the method. We first bound the total number of matrix-vector multiplications with A, which we denote by T. Since ℓ_j ≤ℓ_j-1 for all j, computing the necessary matrix-vector product to approximate m_j only costs ℓ_j-1 additional products on top of those used to approximate m_j-1. So, recalling that ℓ_j = ⌈ 1 + Clog^2(1/α)/n j γ^2⌉, we have: T = (1 + Clog^2(k/δ)/nγ^2) + (1 + Clog^2(k/δ)/2nγ^2) +… + (1 + Clog^2(k/δ)/knγ^2). Using the fact that 1 + 1/2 + … + 1/k ≤ 1+log(k) we can upper bound T by: T = (k + log^2(k/δ)log(k)/nγ^2) = (k + k^2 log^2(k/δ)log^2(k)/n), which gives the desired matrix-vector product bound since k = O(1/). In terms of computational complexity, <Ref> immediately yields a bound of (1/) time to solve the quadratic program in <Ref>. However, this runtime can actually be improved to 1/^3 by taking advantage of the fact that m̂_1,…, m̂_k obey the stronger bound of (<ref>) instead of just (<ref>). This allows us to solve a linear program instead of a quadratic program. In particular, let be a grid of Chebyshev nodes, as used in <Ref>. I.e., = x_1,…,x_g where x_i = cos2i-1/2gπ. Let q^_1, …, q^_g be any solution to the following linear program with variables z_1,…,z_g: [ minimize 0; subject to ∑_i=1^g z_i = 1; z_i ≥ 0, ∀ i ∈1,…,g; ∑_i=1^g T_j(x_i) z_i≤m̂_j +(√(j)γ +j √(2 π)/g), ∀ j ∈{1, …, k}; ∑_i=1^g T_j(x_i) z_i≥m̂_j - (√(j)γ +j √(2 π)/g), ∀ j ∈{1, …, k}. ] We first verify that the linear program has a solution. To do so, note that, by <Ref> in <Ref>, there exists a distribution p̃ supported on = x_1,…,x_g, such that m_j(p)-m_j(p̃)≤j √(2 π)/g. By (<ref>) and triangle inequality, it follows that p̃ is a valid solution to the linear program. Next, let q^ = ∑_i=1^g q^_i δ(x - x_i) be the distribution formed by any solution to the linear program. We have that, for any j, m_j - ⟨q^, T_j⟩≤⟨q^, T_j⟩ - m̂_j + m̂_j - m_j≤ 2√(j)γ +j √(2 π)/g. Setting g = k^1.5√(1 + log(k)) and plugging into <Ref>, we conclude that W_1(p,q^) ≤ O(1/k). The linear program in (<ref>) has g = Õ(k^1.5) variables, boundary constraints for each variable, and 2k + 1 other constraints. It follows that it can be solved in Õ(gk·√(k)) = Õ(k^3) time <cit.>, which equals Õ(1/^3) time since we chose k = O(1/ϵ). § EMPIRICAL EVALUATION OF PRIVATE SYNTHETIC DATA In this section, we empirically evaluate the application of our main result to differentially private synthetic data generation, as presented in <Ref>. Specifically, we implement the procedure given in <Ref>, which produces an (, δ)-differentially private distribution q that approximates the uniform distribution, p, over a given dataset X = x_1, …, x_n ∈ [-1,1]. We solve the linearly constrained least squares problem from <Ref> using an interior-point method from MOSEK <cit.>. We evaluate the error W_1(p,q) achieved by the procedure on both real world data and data generated from known probability density functions (PDFs), with a focus on how the error scales with the number of data points, n. For real world data, we first consider the American Community Survey (ACS) data from the Folktables repository <cit.>. We use the 2018 ACS 1-Year data for the state of New York; we give results for the (personal income) column from this data. 
We also consider the California Housing dataset <cit.>; we give results for the (median house age in district) column, from this data. Finally, we consider the CDC Diabetes Health Indicators dataset <cit.>; we give results for the (number of physically unhealthy days) from this data. For each of these data sets, we collect uniform subsamples of size n for varying values of n. In addition to the real world data, we generate datasets of varying size from three fixed probability distributions over [-1,1]. We set the probability mass for x∈ [-1,1] proportional to a chosen function f(x), and equal to 0 for x∉ [-1,1]. We consider the following choices for f: , f(x) = e^-0.5 x^2; , f(x) = sin(π x) + 1; and , f(x) = (x + 1.1)^-2. For all datasets, we run <Ref> with privacy parameters =0.5 and δ=1/n^2; this is a standard setting for private synthetic data <cit.>. We use the default choice of k = 2 n. In <Ref>, we plot the average Wasserstein error achieved across 10 trials of the method as a function of n. Error varies across trials due to the randomness in <Ref> (given its use of the Gaussian mechanism) and due to the random choise of a subsample of size n. As we can see, our experimental results strongly confirm our theoretical guarantees: the average W_1 error closely tracks our theoretical accuracy bound of O(log(ϵ n)√(log(1/δ))/ϵ n) from <Ref>, which is shown as a blue dotted line in <Ref>. § ACKNOWLEDGEMENTS We thank Raphael Meyer for suggesting the lower bound on the number of matrix-vector multiplications required for spectral density estimation. We thank Tyler Chen for close proofreading and Gautam Kamath for helpful pointers to the literature. This work was partially supported by NSF Grants 2046235 and 2045590. § PROOF OF COROLLARY <REF> In this section, we give the full proof of <Ref>. We require the following basic property about the Chebyshev nodes: Let = x_1,…,x_g be the degree g Chebyshev nodes. I.e., x_i = cos2i-1/2gπ. Let r_: [-1,1] → be a function that maps a point x ∈ [-1,1] to the point y ∈ that minimizes cos^-1(x) - cos^-1(y), breaking ties arbitrarily. For any x∈ [-1,1], cos^-1(x) - cos^-1(r_(x))≤π/2g. For any two consecutive points x_i, x_i+1 in the , cos^-1(x_i) - cos^-1(x_i+1) = π/g. Since cos^-1(x) is non-increasing, for any x ∈ [x_i+1, x_i], cos^-1(x)∈ [cos^-1(x_i),cos^-1(x_i+1)]. So, cos^-1(x) has distance at most π/2g from either cos^-1(x_i) or cos^-1(x_i+1). Additionally, we can check that cos^-1(x) - cos^-1(x_1)≤π/2g for any x < x_1 and cos^-1(x) - cos^-1(x_g)≤π/2g for any x> x_g. With <Ref> in place, we are ready to prove <Ref>. Let and r_:[-1,1] → be as in <Ref>. For i ∈1,…,g, let Y_i be the set of points in [-1,1] that are closest to x_i ∈, i.e., Y_i = x ∈ [-1,1]: r_(x) = x_i. Let p̃ be a distribution supported on the set with mass ∫_Y_i p(x) dx on x_i ∈. For all j∈ 1, …, k we have: ⟨ p, _j ⟩ - ⟨p̃, _j⟩ = ∑_i=1^g∫_Y_i_j(x) p(x) dx- ∫_Y_i p(x) dx_j(x_i) = ∑_i=1^g∫_Y_i p(x) dx_j(y_i) - ∫_Y_i p(x) dx_j(x_i) for some y_i ∈ Y_i ≤∑_i=1^g∫_Y_i p(x) dx_j(y_i) - _j(x_i) = ∑_i=1^g∫_Y_i p(x) dx·√(2/π)·cos(j cos^-1(y_i)) - cos(j cos^-1(x_i)) ≤∑_i=1^g∫_Y_i p(x) dx·√(2/π)·jπ/2g = j √(π/2)/g The second equality follows from the intermediate value theorem. The first inequality follows by triangle inequality. The third equality follows by the trigonometric definition of the (normalized) Chebyshev polynomials. The second inequality follows from <Ref> and the fact that the derivative of cos(j x) is bounded by j. 
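A small numerical check of this fact (our own illustration, not part of the original appendix):

import numpy as np

g = 50
nodes = np.cos((2 * np.arange(1, g + 1) - 1) * np.pi / (2 * g))     # Chebyshev nodes x_1, ..., x_g
xs = np.linspace(-1.0, 1.0, 20001)
# distance from arccos(x) to the nearest arccos(x_i), i.e. the quantity bounded by the fact
gaps = np.min(np.abs(np.arccos(xs)[:, None] - np.arccos(nodes)[None, :]), axis=1)
print(gaps.max(), "<=", np.pi / (2 * g))

The maximum printed gap matches the stated bound of π/2g up to floating point error.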
The bound in (<ref>) then yields: (∑_j=1^k 1/j^2⟨ p, _j ⟩ - ⟨p̃, _j ⟩^2)^1/2≤√(π k/2)/g. Observe also that, since p̃ is supported on , it is a valid solution to the optimization problem solved by <Ref>. Accordingly, we have that: (∑_j=1^k 1/j^2m̂_j - ⟨q, _j ⟩^2)^1/2≤(∑_j=1^k 1/j^2m̂_j - ⟨p̃, _j ⟩^2)^1/2 Applying triangle inequality, followed by (<ref>), triangle inequality again, and finally (<ref>), we have: (∑_j=1^k 1/j^2⟨ p, _j ⟩ - ⟨q, _j ⟩^2)^1/2 ≤(∑_j=1^k 1/j^2⟨ p, _j ⟩ - m̂_j^2)^1/2 + (∑_j=1^k 1/j^2m̂_j - ⟨q, _j ⟩^2)^1/2 ≤(∑_j=1^k 1/j^2⟨ p, _j ⟩ - m̂_j^2)^1/2 + (∑_j=1^k 1/j^2m̂_j - ⟨p̃, _j ⟩^2)^1/2 ≤ 2(∑_j=1^k 1/j^2⟨ p, _j ⟩ - m̂_j^2)^1/2 + (∑_j=1^k 1/j^2⟨p, _j ⟩ - ⟨p̃, _j ⟩^2)^1/2 ≤ 2Γ + √(2π k)/g. Setting g = ⌈ k^1.5⌉, we can apply <Ref> to conclude that, for a fixed constant c', W_1(p,q) ≤c/k + 2Γ + √(π/2)/k≤ c' ·(1/k + Γ). § THEOREM <REF> HIGH PROBABILITY BOUND In this section, we prove the high probability bound on Wasserstein distance stated in <Ref>, which follows from a standard concentration bound for sub-exponential random variables <cit.>. We recall that a random variable X is subexponential with parameters (ν, α) if: [e^λ(X-[X])] ≤ e^ν^2λ^2/2 for all |λ| ≤1/α. We require the following well-known fact that a chi-square random variable with one degree of freedom is subexponential: [Sub-Exponential Parameters <cit.>] Let η∼(0,σ^2). Then, η^2 is sub-exponential random variable with parameters (2σ^2, 4σ^2). We also require the following concentration inequality for a sum of sub-exponential random variable: [<cit.>] Consider independent random variables γ_1, …, γ_k, where, ∀ j∈ 1, …,k, γ_j is sub-exponential with parameters (ν_j,α_j). Let ν_* = √(∑_j=1^k ν_j^2) and α_* = maxα_1,…,α_k. Then we have: ∑_j=1^kγ_j - [γ_j]≥ t ≤exp-t^2/2 ν_*^2 for 0 ≤ t ≤ν_*^2/α_* exp-t/2 α_* for t > ν_*^2/α_* Recalling the proof of the expectation bound of <Ref> from <Ref>, it suffices to bound E =∑_j=1^k1/j^2(m̂_j(p) - ⟨p̃, T_j ⟩)^2 with high probability. Let γ_j =η_j^2/j^2, where η_j ∼𝒩(0,jσ^2) is as in <Ref>. Then recall that E = ∑_j=1^kγ_j. From <Ref>, γ_j is a sub-exponential random variable with parameter 2σ^2/j, 4σ^2/j. We can then apply <Ref>, for which we have ν_* = √(∑_j=1^k4 σ^4/j^2)≤2 πσ^2/√(6) and α_* = 4 σ^2. For any failure probability β∈ (0,1/2), setting t = 8 log(1/β) σ^2, we conclude that: E - [E] ≥ 8 log1/βσ^2≤β Recalling from <Ref> that [E] ≤ (1+log k) σ^2, we conclude that E ≤ 8 log1/βσ^2 + (1+log k) σ^2 with probability at least 1-β. The rest of the details follow as before. In particular, as in <Ref>, we can bound: W_1(p,q) ≤√(2)Γ + 36/k + 1/2 n, where Γ≤√(2E). Plugging in k = 2 n (as chosen in <Ref>) and recalling that σ^2 = 16/π(1 + logk)ln(1.25 / δ)/(^2 n^2), we conclude that with probability ≥ 1-β, for a fixed constant c, W_1(p,q) ≤ c(√(log ( n) + log(1/β))√(log ( n)log(1/δ))/ n) § SPECTRAL DENSITY ESTIMATION LOWER BOUND In this section, we provide a lower bound on the number of matrix-vector multiplications required for spectral density estimation. We first need the following theorem from <cit.>, which shows that estimating the trace of a positive semi-definite matrix A to within a multiplicative error of (1 ±) requires Ω(1/) number of matrix-vector multiplications with A. 
For all δ>0 and = 1/√(log(1/δ)), any algorithm that is given matrix-vector multiplication access to a positive semi-definite (PSD) input matrix A ∈^n × n with A_2 ≤ 1, n/4 ≤(A) ≤ n and succeeds with probability at least 1-δ in outputting an estimate t̃ such that t̃ - (A)≤·(A) requires Ωlog(1/δ)/ matrix-vector multiplications with A. As a corollary of this result, we obtain the following lower bound, which shows that <Ref> is tight up to log(1/ϵ) factors: Any algorithm that is given matrix-vector multiplication access to a symmetric matrix A with spectral density p and A_2 ≤ 1 requires Ωlog(1/δ)/ matrix-vector multiplications with A to output a distribution q such that W_1(p,q) ≤ϵ. The proof is via a direct reduction. Consider a PSD matrix A with A_2 ≤ 1, n/4 ≤(A) ≤ n, and spectral density p. Suppose we have a spectral density estimate q of p such that W_1(p,q) ≤/4. We claim that t̃ = n·∫_-1^1 xq(x) dx yields a relative error approximate to A's trace, implying that computing such a q requires Ω(log(1/δ)/) by <Ref>. In particular, applying Kantorovich-Rubinstein duality (<Ref>) with the 1-Lipschitz functions f(x) = x and f(x) = -x, we have that: ∫_-1^1 xp(x) dx - ∫_-1^1 xq(x) dx ≤/4 and ∫_-1^1 xq(x) dx - ∫_-1^1 xp(x) dx ≤/4. We have that ∫_-1^1 xp(x) dx = 1/n(A). So (<ref>) implies that t̃ = n·∫_-1^1 xq(x) dx satisfies: |t̃ - (A)| ≤ n·/4 ≤·(A).
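(A small numerical illustration of this reduction, our own sketch: if each eigenvalue is moved by at most ϵ/4, the obvious coupling certifies W_1(p,q) ≤ϵ/4, and the implied trace estimate has the claimed relative error.

import numpy as np

rng = np.random.default_rng(0)
n, eps = 1000, 0.1
eigs = rng.uniform(0.3, 1.0, size=n)                     # spectrum of a PSD matrix with tr(A) >= n/4
q_atoms = eigs + rng.uniform(-eps / 4, eps / 4, size=n)  # a q with W_1(p, q) <= eps/4
t_estimate = n * q_atoms.mean()                          # t~ = n * E_{x ~ q}[x]
assert abs(t_estimate - eigs.sum()) <= eps * eigs.sum())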
http://arxiv.org/abs/2408.11131v1
20240820182843
The upcoming CTEQ-TEA parton distributions in a nutshell
[ "A. Ablat", "A. Courtoy", "S. Dulat", "M. Guzzi", "T. J. Hobbs", "T. -J. Hou", "J. Huston", "P. Nadolsky", "I. Sitiwaldi", "K. Xie", "C. -P. Yuan" ]
hep-ph
[ "hep-ph" ]
[ [ August 26, 2024 =================== Accurate parametrizations for parton distribution functions (PDFs) in the nucleon are essential for a wide range of measurements pursued at the Large Hadron Collider (LHC) and in other experiments. The CTEQ Tung Et Al. (CTEQ-TEA) group is involved in the global analysis of QCD measurements across a large energy span aimed at determination of PDFs at the precision frontier. As the LHC enters its high-luminosity decade, the knowledge of PDFs becomes even more central for accuracy control in many tests of the Standard Model (SM) and searches for beyond-SM physics. At the DIS'2024 workshop, the CTEQ-TEA group reported on a program of theoretical and methodological developments leading to a new generation of PDFs for advanced studies at the LHC Run-3. The new PDFs will replace the widely used CT18 PDFs <cit.> in general-purpose and specialized applications. Given the elevated requirements for accuracy of central PDFs and quantification of PDF uncertainties, the development of the new PDF series takes several years and involves implementation of new advances in theoretical calculations, adoption of new data sets constraining the PDFs, and refinement of the fitting methodology. During the last year, our group published several articles focusing on specific aspects of our PDF fits. In addition, we recently published a summary <cit.> that highlights the key outcomes from the totality of our recent publications and emphasizes the synergy of all ongoing CTEQ-TEA efforts toward obtaining accurate, comprehensive, and reliable future PDFs. Traditionally, theoretical developments involve implementation of radiative contributions to improve the perturbation theory. With the growing availability of perturbative hard cross sections at N3LO in QCD, there is also increasing interest in producing N3LO PDFs. The CTEQ fitting package already includes components of the N3LO PDF analysis, notably the complete quark flavor infrastructure <cit.> for implementation of 3-loop radiative contributions in DIS with massive charm and bottom quarks in the SACOT-MPS general mass scheme and for evolution of PDFs at N3LO accuracy. However, many more components at N3LO are needed to guarantee the N3LO accuracy of the PDF fits and will not be available for a while. PDF-fitting groups <cit.>, including CTEQ-TEA, implement the mandatory components as they become available. An additional consideration is that, when perturbative uncertainties are suppressed by including appropriate radiative contributions, other types of uncertainties become prevalent in the full PDF uncertainty. Efforts to control these other uncertainties must complement implementation of N3LO contributions. The upcoming CTEQ-TEA analysis will therefore release NNLO PDFs as well as some investigations done at partial N3LO – the highest order of QCD available for PDF fits at the moment. 
The summary article <cit.> reviews the multi-prong efforts toward this goal: investigations of the impact of candidate data sets from lepton pair <cit.>, top-quark pair <cit.>, and inclusive (di)jet production <cit.> from the LHC at 5, 7, 8, and 13 TeV; advances in methodology to quantify the mutual agreement of experimental constraints <cit.> and streamline estimations of uncertainties due to PDF parametrizations <cit.>; studies of small-x dynamics that affects the gluon and other PDFs at all x via sum rules, as well as forward charm production <cit.> and high-energy neutrino scattering <cit.>; exploration of first lattice QCD constraints on strangeness quark-antiquark asymmetry <cit.>; NNLO PDF fits with contributions from nonperturbative (power-suppressed) charm quarks <cit.> and photons in the proton <cit.> and neutron <cit.>; investigations of implications and future prospects ranging from low-energy parity-violating DIS <cit.> to combined PDF-SMEFT fits <cit.>. In this contribution, we highlight one aspect of the ongoing analysis: investigation of the likely impact of new precision data sets from the LHC Runs 1 and 2 on the upcoming PDFs. In Refs. <cit.>, we examined constraints on NNLO PDFs using measurements from 20 publications on Drell-Yan pair, top-antitop, inclusive jet, and jet pair production published by the ATLAS, CMS, and LHCb Collaborations. Most of these publications provide several differential distributions that may be suitable for the PDF fits. The above publications find that the considered distributions may have non-identical preferences for PDFs because of the impact of collateral systematic factors that are not the same across various measurements. One important case is the gluon PDF at momentum fractions of x≈ 0.05 and QCD scales close to 125 GeV that controls Higgs boson production rates via gluon-gluon fusion. Sensitivity analyses like the one in <cit.> indicate that the main relevant constraints on the gluon PDF arise from several types of processes: scaling violations in DIS at HERA, BCDMS, and NMC; hadron jet production; and, as experimental precision grows, even from Drell-Yan and tt̅ pair production data sets. Ref. <cit.> pointed out that the relevant high-precision LHC data sets generally exert opposing pulls: in the considered scenarios, Drell-Yan pair and tt̅ pair production on the whole preferred a smaller gluon PDF at x≳ 0.05, while inclusive jet production preferred a larger gluon at x≳ 0.1. We also compared the PDF constraints from Run-1 and 2 jet production measurements presented either as distributions of single-inclusive jets or distributions of jet pairs. We found that the PDF constraints from single-inclusive jet distributions were less affected by the QCD scale dependence than the counterpart dijet ones. For the combined analysis, the central question therefore concerns the selection of the data sets that are both constraining on the PDFs and consistent with one another. Based on the in-depth investigations in Refs. <cit.>, we identified such an optimal selection, in which 13 distributions from the three categories of processes (for production of lepton pairs, top-quark pairs, and single-inclusive jets), with a total of 776 new data points, were added to the CT18 NNLO baseline fit, which contains a total of 3681 data points. This extension of the CT18 global data is named “CT18+nDYTTIncJet”. The corresponding fit used the same underlying parametrization and analysis settings as the CT18 NNLO fit. 
Table <ref> lists the newly added LHC data sets, together with the χ^2/N_ pt values obtained in the “CT18+nDYTTIncJet” NNLO fit and using the original CT18 NNLO PDFs (in parentheses). Figure <ref> illustrates the χ^2/N_ pt values for all experiments in the CT18+nDYTTIncJet study by plotting them against the numerical ID and number of points of each experiment. [Figure: A histogram of the effective Gaussian variable, S_n ≡√(2χ^2_n) - √(2 N_ pt -1), distributed over all CT18 and CT18+nDYTTIncJet data sets.] The CT18+nDYTTIncJet fit is in good overall agreement with the old and new data sets, with the total χ^2/N_ pt=5402/4457≈ 1.21 comparable to that in the CT18 NNLO baseline fit. We also observe some tensions among the data sets, which reduce their net constraining power. For example, in Fig. <ref>, χ^2/N_ pt increases for the NuTeV (124 and 125), CCFR (126 and 127) and E866 DY (203) experiments, and the CMS 7 TeV muon charge asymmetry and electron charge asymmetry data sets (266 and 267). χ^2/N_ pt decreases for the CMS 8 TeV muon charge asymmetry (249) and LHCb 8 TeV W/Z (250) experiments. The overall level of tensions remains about the same as in the CT18 NNLO baseline fit, as revealed by the histogram of the effective Gaussian variable <cit.>, defined as S_n ≡√(2χ^2_n) - √(2 N_ pt -1), for individual data sets in the CT18 and CT18+nDYTTIncJet fits shown in Fig. <ref>. The S_n values do not follow the standard normal distribution as would be expected in an ideal fit <cit.>. The S_n values of the DIS HERA (160) and ATLAS 8TeV IncJet (553) data sets are exceptionally large, while the neutrino-iron DIS CCFR F2 (111) data have a very good S_n value. Figure <ref> illustrates the combined impact of these new LHC Drell-Yan, tt̅ and inclusive jet data sets on the gluon PDF at Q=100 GeV. In the left panel, the CT18+nDYTTIncJet fit prefers a softer gluon PDF at x > 0.3. Specifically, the downward pull on the large-x gluon by the included Drell-Yan and tt̅ data sets (with the latter given by the “nTT2” combination of tt̅ observables introduced in <cit.>) overcomes the net upward pull from single-inclusive jet data sets. We also observe reduction of order 1% in the gluon at moderate x relevant for Higgs boson production at the LHC. In the right panel, we see that the nominal CT18 uncertainty on the gluon is reduced across most of the x range upon addition of the nDYTTIncJet combination. In this case, the uncertainties on the blue and red error bands are estimated according to the same prescription (tolerance) as in <cit.>. We note that the tolerance depends on methodology and may result in different uncertainty estimates even when fitting the same data sets <cit.>. Advancing toward the near-future CTEQ-TEA analysis, we expect the PDF tolerance to evolve, on the one hand, to account for the residual tensions that remain significant in the CT18+nDYTTIncJet fit, cf. Fig. <ref>, and, on the other hand, to quantify the dependence on PDF parametrization forms along the possibilities explored e.g. in <cit.>. Further details on the new CTEQ-TEA precision fit and its phenomenological implications will be presented in our future publication. § ACKNOWLEDGMENTS The work of AA, SD and IS was supported by the National Natural Science Foundation of China under Grants No.11965020 and No. 11847160. The work of T.-J. Hou was supported by Natural Science Foundation of Hunan province of China under Grant No. 2023JJ30496. AC was supported by the UNAM Grant No.
DGAPA-PAPIIT IN111222 and CONACyT Ciencia de Frontera 2019 No. 51244 (FORDECYT-PRONACES). MG was supported by the National Science Foundation under Grant No. PHY-2112025 and No. PHY-2412071. The work of T. J. Hobbs at Argonne National Laboratory was supported by the U.S. Department of Energy, Office of Science, under Contract No. DE-AC02-06CH11357. PMN was partially supported by the U.S. Department of Energy under Grant No. DE-SC0010129. The work of CPY and KX was supported by the U.S. National Science Foundation under Grant No. PHY-2310291. KX was also supported by the U.S. National Science Foundation under Grant No. PHY-PHY-2310497. The work of PN and KX was performed in part at the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452. This work used resources of high-performance computing clusters from SMU M2/M3, MSU HPCC, KSU HPC, as well as Pitt CRC.
http://arxiv.org/abs/2408.11606v1
20240821133132
A Quantum Diophantine Equation Solution Finder
[ "Lara Tatli", "Paul Stevenson" ]
math.NT
[ "math.NT", "quant-ph" ]
A Quantum Diophantine Equation Solution Finder. Lara Tatli (Department of Physics, University of Durham, Durham, DH1 3LE, UK; lara.tatli@durham.ac.uk) and Paul Stevenson (School of Mathematics and Physics, University of Surrey, Guildford, Surrey, GU2 7XH, UK; p.stevenson@surrey.ac.uk). § ABSTRACT Diophantine equations are multivariate equations, usually polynomial, in which only integer solutions are admitted. A brute force method for finding solutions would be to systematically substitute possible integer solutions and check for equality. Grover's algorithm is a quantum search algorithm which can find marked indices in a list very efficiently. By treating the indices as the integer variables in the Diophantine equation, Grover's algorithm can be used to find solutions in a brute force way more efficiently than classical methods. We present an example for the simplest possible Diophantine equation. § INTRODUCTION A Diophantine equation is an equation, typically polynomial with integer coefficients, in more than one integer variable. A famous example is Fermat's Last Theorem, which states that x^n+y^n=z^n has no solutions for n≥3 where n, x, y, and z are all natural numbers. The simplest Diophantine equation is linear in two variables and is of the form ax+by=n, where a, b, and n are given constants. While this equation has well-known solutions, in many other cases, solutions are not known (see e.g. the regularly-updated paper by Grechuk keeping track of some open and solved problems <cit.>). Seeking solutions to Diophantine equations through numerical search is an established method, where searches can prove the existence of solutions where it is posited that none exist <cit.>. Here, we bring quantum computing to bear upon the search for Diophantine equation solutions, using Grover's algorithm <cit.> to look for solutions for the simplest linear equation of the form (<ref>), with a=b=1 and n=5 arbitrarily chosen for definiteness, and as a simple example that can be encoded in a workable number of qubits on an available simulator. While we are not aware of works explicitly solving Diophantine equations with a quantum search algorithm, we note recent work using Grover's algorithm to perform a series of basic arithmetic procedures <cit.>. In our work we use a standard classical adder for arithmetic and use Grover for the search for equality. § GROVER'S ALGORITHM AS EQUATION SOLUTION SEARCHER We give here a brief discussion of the principles of a quantum search algorithm, following the treatment in Nielsen and Chuang's textbook <cit.>. The search algorithm generally searches through a search space of N elements. It is supposed that one can work at the level of the index of the elements such that if presented with the index, it is easy to check if it is the element sought. This is the case in our example where checking if given numbers x and y are solutions of the given equation is straightforward by direct substitution and evaluation. The algorithm uses an oracle, 𝒪, which acts as 𝒪|x⟩|q⟩→|x⟩|q⊕ f(x)⟩. Here, |x⟩ is a register of index qubits, and |q⟩ is the oracle qubit. ⊕ is addition modulo 2 and f(x) is a function which returns 0 if index x is not a solution to the search problem, and 1 if index x is a solution.
If the oracle qubit is prepared in the state |-⟩=(|0⟩-|1⟩)/√(2) then the action of the oracle is 𝒪|x⟩(|0⟩-|1⟩/√(2))→(-1)^f(x)|x⟩(|0⟩-|1⟩/√(2)), thus the action of the oracle marks out, with a phase change, components of the register state |x⟩ which are solutions to the problem - i.e. have f(x)=1. The full Grover algorithm then amplifies the states which have been marked, and suppresses the unmarked states, using a “diffuser” circuit. The oracle-diffuser combination together constitute a single Grover iteration, and a total of O(√(N/M)) iterations are needed in general to have the solutions selected in the register with high probability, where M is the number of solutions in the N-element space. Note that the standard diffuser requires that valid solutions do not account for the majority of the solution space, but this is the usual condition for an interesting Diophantine equation. The indexing register works in our case by having 2m qubits in which each half encodes one of the numbers x and y. The encoding is made directly in standard binary and we do not consider negative numbers. Clearly the size of m will determine the available integers in the search space, and one must apply ever more qubits to increase the size of the search space, though one benefits from an exponential increase in search space as the number of qubits increases linearly. For this exploratory study, to find solutions to the equation x+y=5 we use a 2m=6 qubit register |x⟩ to encode two 3-bit numbers to add together. The oracle performs the addition and checks the result against the desired solution. The details of the quantum adder we use is given in the next section. § QUANTUM ADDER CIRCUIT A quantum adder capable of calculating the sum of two 3-qubit binary numbers was produced using Qiskit. The adder was designed in such a way that the registers storing the input numbers were not overwritten during the calculation, as is the case with e.g. ripple-carry adders <cit.>. Retaining the input numbers is useful for use in further calculation, though not vital in our case. In this setup, shown in Fig. <ref>, the first 3 qubits, x_0, x_1 and x_2, denote the binary digits representing a natural number x in the format x_0x_1x_2, where x_2 is the least significant bit. In the same manner, qubits y_0, y_1 and y_2 denote the natural number y in the format y_0y_1y_2. Qubits a_0 and a_1 represent ancillary qubits used to hold carry bits in the addition. Qubits s_0, s_1, s_2 and s_3 denote the solution to x + y in the form s_0s_1s_2s_3, where s_3 is the least significant bit. The figure shows all qubits in that are needed for the full Grover algorithm. Qubit q_12 is the oracle qubit |q⟩ as in equation (<ref>). The dividers labelled A, B, and C in the circuit help label different functional parts. In the section terminated by divider A, an addition operation is performed on the qubits representing the least significant bits x_2 and y_2 using two CNOT gates and one Tofolli gate, with the result stored in the qubit s_3 and the first carry bit stored in a_0. In the section between dividers A and B, the qubits representing x_1, y_1, and the carry bit a_0 are added using three CNOT gates; the target is set to the sum digit s_2. Three Tofolli gates are used to compute the second carry bit, stored in a_1. In the section between B and C, the sum digit s_1 is calculated using three CNOT gates acting on the qubits representing x_0, y_0, and the second carry bit in a_1. 
The final sum digit, s_0, is calculated using three Toffoli gates and takes into consideration the second carry bit. In total, this adder employs 8 CNOT gates and 7 Toffoli gates collectively acting over 12 qubits. In terms of scaling to larger registers, adding two m-bit numbers requires 4m qubits (2m representing the numbers to be added, m-1 ancillary carry bits, and m+1 to represent the sum). The number of gates is 3m-1 CNOT gates and 3m-2 Tofolli gates. § APPLICATION OF GROVER'S ALGORITHM In order to apply Grover's algorithm to solve a linear Diophantine equation ax + by = n in the case a=b=1 and n=5, it is first necessary to apply a Hadamard gate to each of the qubits |x_0… x_2,y_0… y_2⟩ encoding x and y. This produces the initial superposition state with all possible solution strings present with equal amplitude. We then construct a quantum oracle capable of “marking” the solutions once queried. This consists of the quantum adder and its inverse circuit with a query circuit in between which applies a phase shift of -1 to the solution qubits of the adder, if and only if, the solution is in the state |s_0s_1s_2s_3⟩ = |0101⟩. All other states are left unchanged. This is achieved using two X-gates and a multi-controlled Toffoli gate targeting q_12, configured to be in the |-⟩ state prior to implementing Grover's algorithm. X-gates are re-applied to reverse the computation. The query circuit design used for this example is provided in the left-hand part of Fig. <ref>. Each iteration of the oracle is followed by the circuit used for the diffusion operator, which by acting across the six qubits |x_0… x_2,y_0… y_2⟩ amplifies states that sum to give the desired solution only. In this diffuser circuit, shown for our case in the right-hand part of Fig. <ref>, the combination of Hadamard and X-gates, in conjunction with a multi-controlled Toffoli gate, enable a phase change of -1 to be applied to the initial superposition state. This completes one full iteration of the Grover algorithm. After the desired number of algorithms, one would then perform a measurement on a real quantum computer, identically prepared through many repeated experiments, to build up a histogram of most probable outcomes corresponding to the sought solution(s). In our present work, we simulate our circuit using a full quantum statevector, so present results in the next section by simply reading off the amplitudes of each register state. § IMPLEMENTATION AND RESULT The full quantum circuit, including the Hadamards to initialize the superposition of the x and y register qubits and the |-⟩ initialization of the oracle qubit, is shown in for one iteration in Fig <ref>. By running this full quantum circuit on BlueQubit's 34-qubit CPU statevector simulator, it is shown that two iterations of Grover's algorithm are sufficient to generate the full set of solutions to our simple Diophantine equation. The histogram displayed after one iteration is displayed in Fig. <ref>; the histogram for two iterations is displayed in Fig. <ref>. Note that the solution should be read from left to right, with the first three digits representing x_0x_1x_2 and the following digits y_0y_1y_2. The solutions are seen to be correct solutions of the Diophantine equation x+y=5, and we tabulate them for clarity in Table <ref> We find that six iterations of Grover's algorithm are required to return to the probability distribution shown in Fig. <ref>. 
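The behaviour described above is easy to reproduce classically for this small search space. Below is a minimal NumPy statevector sketch (our own illustration, independent of the Qiskit circuit): it abstracts the adder and query circuits into a single phase flip on the indices whose two 3-bit halves sum to 5, and applies the diffuser as a reflection about the mean. With 6 solutions among 64 index states, the printed probability of measuring a solution peaks very close to 1 after the second iteration, consistent with the result above.

import numpy as np

m = 3                                        # bits per variable
N = 2 ** (2 * m)                             # 64 basis states for the index register
target = 5
marked = np.array([(idx >> m) + (idx & (2 ** m - 1)) == target for idx in range(N)])

state = np.full(N, 1 / np.sqrt(N))           # uniform superposition from the Hadamard gates
for iteration in range(1, 4):
    state[marked] *= -1                      # oracle: phase flip on the solution indices
    state = 2 * state.mean() - state         # diffuser: reflection about the mean
    print(iteration, round(float(np.sum(state[marked] ** 2)), 3))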
§ CONCLUSIONS Grover's algorithm can be implemented to search for solutions to simple linear Diophantine equations. We have not attempted implementation on a real quantum computer, and the ability of our circuit to operate on noisy intermediate-scale quantum devices would need to be evaluated. Nevertheless, further work could investigate more complicated Diophantine equations; for example, when a and b are not unity, or where the variables are raised to powers greater than unity. In that case, more interesting unsolved cases, like those listed in Grechuk's paper <cit.> could be tackled. Furthermore, we have not attempted to refine or optimize the algorithm, rather concentrating on a straightforward implementation. Techniques to improve the Grover convergence <cit.> could be applied, while inclusion of a quantum counting approach <cit.> would allow one to gain knowledge of how many Grover iterations should be applied in advance of performing each calculation. For a more general Diophantine equation solver, such enhancements would be desirable. § ACKNOWLEDGEMENTS PDS acknowledges support from UK STFC under grant ST/W006472/1. We acknowledge the use of IBM Quantum Lab in the early stages of this work.
http://arxiv.org/abs/2408.11692v1
20240821151248
A JWST MIRI MRS View of the $η$ Tel Debris Disk and its Brown Dwarf Companion
[ "Yiwei Chai", "Christine H. Chen", "Kadin Worthen", "Alexis Li", "Antranik Sefilian", "William Balmer", "Dean C. Hines", "David R. Law", "B. A. Sargent", "Mark Wyatt", "Cicero X. Lu", "Marshall D. Perrin", "Isabel Rebollido", "Emily Rickman", "G. C. Sloan" ]
astro-ph.EP
[ "astro-ph.EP", "astro-ph.SR" ]
Yiwei Chai mchai3@jhu.edu 0009-0008-5865-5831]Yiwei Chai William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA 0000-0002-8382-0447]Christine H. Chen Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA 0000-0002-5885-5779]Kadin Worthen William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA 0009-0001-7058-8538]Alexis Li William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA 0000-0003-4623-1165]Antranik A. Sefilian Astrophysikalisches Institut und Universitätssternwarte, Friedrich-Schiller-Universität Jena, Schillergäßchen 2–3, D-07745 Jena, Germany Center for Advanced Mathematical Sciences, American University of Beirut, P.O. Box 11-0236, Riad El-Solh, Beirut 11097 2020, Lebanon 0000-0001-6396-8439]William Balmer William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA 0000-0003-4653-6161]Dean C. Hines Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0002-9402-186X]David R. Law Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0001-9855-8261]B. A. Sargent Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA William H. Miller III Department of Physics and Astronomy, Johns Hopkins University, 3400 N. Charles Street, Baltimore, MD 21218, USA Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK 0000-0001-9352-0248]Cicero X. Lu Gemini Observatory/NSF’s NOIRLab, 670N. A’ohoku Place, Hilo, HI 96720, USA 0000-0002-3191-8151]Marshall D. Perrin Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0002-4388-6417]Isabel Rebollido Centro de Astrobiología (CAB), INTA-CSIC, Camino Bajo del Castillo s/n - Villafranca del Castillo, 28692 Villanueva de la Cañada, Madrid, Spain 0000-0003-4203-9715]Emily Rickman European Space Agency (ESA), ESA Office, Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA 0000-0003-4520-1044]G. C. Sloan Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218, USA Department of Physics and Astronomy, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3255, USA § ABSTRACT We report JWST MIRI MRS observations of the β Pic moving group member, η Tel A, along with its brown dwarf binary companion, η Tel B. Following PSF subtraction, we recover the spatially resolved flux from the debris disk around η Tel A, along with the position of the companion exterior to the disk. We present a new 5–26 μ m epoch of spectroscopy for the disk, in which we discover a 20 μ m silicate feature, and the first ever 11–21 μ m spectrum of η Tel B, which indicates a bare photosphere. We derive a new epoch of relative astrometry for the companion, extending the baseline of measurements to 25 years, and find that it is currently located near the apocentre of an eccentric, long-period orbit. The companion's orbit is close enough to the disk that it should significantly perturb the planetesimals within it, resulting in a detectable mid-IR pericenter glow and near-alignment with the companion. 
Contrary to expectations, however, we find that the disk appears to be axisymmetric and potentially misaligned with the companion in the MIRI MRS data. We posit that this may be due to the presence of an additional, yet-undetected ∼0.7–30 M_J planet orbiting interior to the disk with a semimajor axis of ∼3–19 au. § INTRODUCTION Observations of planetary systems with multiple components offer a valuable opportunity to understand the interplay and mutual influence of objects within the system throughout its evolutionary timeline. Young systems with circumstellar disks (e.g. debris disks) in particular offer an exciting view into the sculpting of disk morphologies due to dynamical interactions in the system. Understanding these dynamically-induced disk structures can also aid in inferring the presence of yet-undetected planets within systems that have circumstellar disks <cit.>. To date, however, there are only a few observed examples of young debris disk systems with wide-separation binary companions (e.g. HD 106906 <cit.>). η Telescopii (henceforth η Tel), as a relatively young (∼23 Myr; <cit.>) debris disk–hosting triple system, therefore offers an interesting target for observational study. The η Tel system is located 49.5 pc away <cit.> within the β Pictoris moving group, and consists of (1) η Tel A, an A0V-type primary <cit.>, (2) η Tel B, a M7/8-type brown dwarf companion at a separation of 4" <cit.>, and (3) HD 181327, a F6-type co-moving star at a separation of 7'. The primary is host to an edge-on, North-South aligned debris disk extending to at least 24 au in the mid-IR <cit.>. Interestingly, HD 181327 is also host to a well-studied debris disk, albeit one that is face-on <cit.>. The debris disk around η Tel A was first identified based on IRAS measurements indicating an excess in emission at 12, 25, and 60 μ m <cit.>, for which a dust optical depth was calculated to be τ=L_IR/L_*≈3.5×10^-4 <cit.>. A 2004 Spitzer IRS observation revealed a largely featureless spectrum from 5–35 μ m, with the exception of a possible 10 μ m silicate feature suggesting the presence of large grains <cit.>. Additionally, the excess emission was found to be best fit by two different temperatures of dust: a `warm' 370 K component, and a `cool' 116 K component. However, from the spectrum alone, it was not possible to resolve degeneracies regarding the spatial structure of the dust (e.g. two dust populations at different locations versus two populations with different grain sizes at the same location). 18.3 μ m ground-based imaging with T-ReCs on Gemini South spatially resolved the outer component of the disk <cit.>. Modelling of the T-ReCs disk images was consistent with a two-component disk structure comprising of an unresolved inner 'warm' component inwards of ∼4 au (as also inferred by <cit.>), and a resolved `cool' component in the shape of a narrow ring centred at 24 au. High resolution optical spectroscopy of the η Tel disk using FEROS detected Ca II K absorption lines at ∼-23 km s^-1 that were attributed to circumstellar gas <cit.>. Far- and near-UV spectroscopy with HST STIS likewise detected absorption features at -23 km s^-1 for multiple atomic lines, as well as features at -18 km s^-1 <cit.>. The -23 km s^-1 and -18 km s^-1 components were respectively attributed to circumstellar and to interstellar gas. In particular, the blueshifting of the -23 km s^-1 absorption features with respect to the star's reference frame was interpreted to indicate gas outflow in a radiatively driven disk wind. 
However, subsequent work <cit.> tested the posited circumstellar origin of the gas by comparing the η Tel absorption features to those of HD 181327, HD 180575, and ρ Tel, three stars with a similar line of sight. The absorption features at ∼-23 km s^-1 were found in the Ca II K lines of the three other stars, strongly implying that the η Tel absorption lines attributed to circumstellar gas are instead more likely due to an interstellar cloud traversing η Tel's line of sight. The brown dwarf companion, η Tel B (a.k.a. HR 7329 B), was first discovered with HST NICMOS coronography at a separation of 4" from the primary <cit.>. HST STIS spectroscopy indicated a spectral type of M7-8 (Lowrence et al. 2000), which was confirmed with H-band spectroscopy from VLT ISAAC <cit.>. Although initial attempts to show common proper motion from measurements of the companion's separation and position angle (PA) were inconclusive <cit.>, additional imaging observations from HST NICMOS and VLT NACO across a baseline of 11 years between 1998 and 2009 were used to confirm η Tel B's status as a comoving companion <cit.>, possibly detecting a small linear change in separation (2.91 ± 2.41 mas yr^-1) and finding no change in PA. It was suggested that this indicates the companion is currently located near apocentre of an inclined and/or eccentric orbit. Magnitude estimates for η Tel B were also used to derive a mass of 20-50 M_J from evolutionary tracks <cit.>. No additional companions up to 9" separation from the primary were detected in the 1998 HST NICMOS and 2004–2008 VLT NACO H-band images. Later coronagraphic imaging with SPHERE/IRDIS from 2015–2017 likewise did not detect any satellites around the companion itself, placing an upper limit on potential satellites from 3 M_J at 10 au to 1.6 M_J at 33 au <cit.>. Additionally, several attempts have been made to characterise the orbit of η Tel B from its astrometric measurements. An analytical approach assuming a face-on, circular orbit gave a companion semi-major axis of a=220^214_-84 au, and a period of ∼2000 years <cit.>; this was refined given the existence (and thus stability) of the edge-on debris disk around η Tel A, which allows for a potential constraint to be placed on the eccentricity of the companion's orbit. Assuming that η Tel B's apocentre distance is indeed r_max∼200 au, and that it has sculpted the outer edge of the debris disk around η Tel A to be r_disk∼ 24 au, e=0.47. This gives a semi-major axis a=136 au and an orbital period of P∼1000 years <cit.>. <cit.> used the same 11 year baseline of astrometric measurements from <cit.> to perform an orbital fit using the OFTI (Orbits for the Impatient) algorithm, obtaining median orbital parameters of a=192^+240_-67 au, P=1490^+3350_-710 yr, e=0.77^+0.19_-0.43, and i=86^+10_-19 deg, with uncertainties at a 68% (1-σ) confidence interval. Most recently, the orbit-fitting package <cit.> was used to derive the companion's orbital parameters from 2015–2017 SPHERE/IRDIS observations combined with previous astrometric measurements over a baseline of 19 years <cit.>. This fit reported an inclination of i=82^+3_-4 degrees, a semi-major axis of a=218^+180_-41 au, and an eccentricity of e=0.34±0.26. While the orbital inclination has been fairly consistent across the literature, derivations of the companion's semi-major axis and eccentricity remain relatively poorly constrained. 
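For orientation, the analytic estimate quoted above (e=0.47 and a=136 au for an assumed apocentre of ∼200 au) follows directly from the elementary relations between semi-major axis, eccentricity and the pericentre/apocentre distances; a quick numerical check is sketched below. The ∼72 au pericentre printed here is simply what such an orbit implies, not an independently measured quantity.

# Quick check of the (a, e) <-> (pericentre, apocentre) relations behind the
# analytic estimate quoted above; the numbers are those from the cited work.
a, e = 136.0, 0.47
print(a * (1 - e), a * (1 + e))                        # pericentre ~72 au, apocentre ~200 au

Q, q_peri = 200.0, 72.0                                # apocentre and pericentre [au]
print((Q + q_peri) / 2, (Q - q_peri) / (Q + q_peri))   # a ~136 au, e ~0.47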
To date, the large uncertainties on these two parameters illustrate the challenge of characterising the orbit of long-period companions, for which astrometric observations may only cover a small fraction of the total orbital period. In this paper, we present a new observation of the η Tel system with JWST MIRI MRS. [sec:obsndata]Section 2 details the observation and processing of the data. [sec:Adisk]Section 3 presents a new epoch of mid-IR spectroscopy for η Tel A and the discovery of a 20 μ m silicate feature, dust modelling for the MRS spectrum, and our analysis of the spatially resolved disk. In [sec:Bdisk]Section 4, we present the first 11-21 μ m spectrum for the brown dwarf companion, η Tel B, finding that the object does not possess a mid-IR excess. In [sec:Borb]Section 5, we discuss (1) our new epoch of astrometry for η Tel B from MIRI MRS, which extends the baseline of measurements to 25 years and (2) our new orbital derivation for the companion. In [sec:discussion]Section 6, we consider how dynamical interactions with η Tel B are expected to impact the radial extent and symmetry of η Tel A disk; we suggest that a yet-undetected planetary mass may explain the disagreement between our observations and the expected effects from the companion. We summarise our results and state our conclusions in [sec:concs]Section 7. § OBSERVATIONS AND DATA PROCESSING §.§ Data Acquisition The JWST data presented in this article are obtained from the Mikulski Archive for Space Telescopes (MAST) at the Space Telescope Science Institute. The specific observations analysed can be accessed via [doi: 10.17909/0js8-gs60]https://doi.org/10.17909/0js8-gs60. As part of GTO Program 1294 (PI: Chen), we use the Mid-Infrared Instrument (MIRI) Medium Resolution Spectrograph (MRS) <cit.> to observe η Tel A (A0V, K=5.01) <cit.> on May 13, 2023. The MIRI MRS is comprised of four IFU channels, with a wavelength-dependent FOV that increases in size per channel; i.e. the FOV is 3.2×3.7" for Channel 1, 4.0×4.8" for Channel 2, 5.2×6.2" for Channel 3, and 6.6×7.7" for Channel 4. Each channel is further divided into three gratings, which cover the short (A), medium (B), and long (C) wavelength ranges of the channel. The total wavelength coverage of the instrument is from 4.9–28 μ m. Since our observation uses all four IFU channels, we also observe η Tel B in Channels 3 and 4 due to their larger FOVs. To avoid saturation by the primary, we use the FASTR1 readout pattern. For the MRSSHORT detector (comprised of Channels 1 and 2), we use 5 groups per integration with 17 total integrations. For the MRSLONG detector (comprised of Channels 3 and 4), we use 17 groups per integration with 6 total integrations. From the JWST Exposure Time Calculator (ETC, Pontopiddan et al. 2016), we expect the SNR for η Tel A using such an observing set-up to be ∼650 at 5.35 μ m (ETC Workbook 171617). As the MRS is Nyquist-sampled only at the central wavelengths of the detector, we employ a 4-point point-source dither pattern for each exposure to achieve Nyquist sampling across the detector <cit.>; this also mitigates effects from bad pixels and cosmic rays. The resulting total exposure time is 1121.116 s for channels 1 and 2, and 1187.717 s for channels 3 and 4. At the time of observation, the position angle of the aperture used was 283^∘. 
The recommended observing sequence (following <cit.>) for high-contrast imaging with MIRI MRS is to take a background observation, immediately followed by a science observation, and then a calibration star observation. Such an observing sequence enables background and classical reference PSF subtraction, which are necessary to eliminate background noise and to recover the spatially resolved disk and the brown dwarf companion. It is known that the MIRI MRS receives significant background emission across its wavelength range, with contributions from zodiacal light and the Milky Way dominating at λ<12.5 μ m, and contributions from thermal self-emission of the telescope itself dominating at λ>12.5 μ m <cit.>. The behaviour of the thermal background shows a time-dependency that is currently not well-modelled; thus, it is preferable to take background and reference observations close in time to the science observation to perform background subtraction. Unfortunately, due to constraints as a GTO program with a fixed amount of telescope time, we took only the science observation for η Tel. Since no dedicated background observation was taken, we search MAST for a publicly available background observed as close in time as possible to our η Tel observation. We elect to use the SMP-LMC-058 background observation from Program 1532, taken three days before our data on May 10, 2023. The SMP-LMC-058 background consists of a single exposure with 45 groups per 1 integration for each channel, giving a total exposure time of 124.88 s per channel. This is a much larger number of groups per integration than our η Tel observation, which has 5 groups for Channels 1 and 2, and 17 groups for Channels 3 and 4. Likewise, no dedicated PSF reference observation was taken for η Tel. To optimise PSF subtraction, it is important to use a reference source that is similar in spectral type and brightness to the science target, so that the PSF is measured with a similar SNR. A PSF reference observation of N Car (A0II, K=4.218) <cit.> was already taken for this program several months earlier, to enable PSF subtraction for observations of β Pic <cit.>. The N Car observation consists of 4 exposures in a 4-point point-source dither pattern, each with 5 groups per integration for MRSSHORT and 15 groups per integration for MRSLONG. The total exposure time for N Car is 1853.73 s for Channels 1 and 2, and 1764.93 s for Channels 3 and 4. Since N Car is similar in spectral type and magnitude to η Tel A (an A0V star with K=5.01), we elect to use the observation of N Car as a reference for PSF subtraction. For N Car's background, we use the dedicated β Pic background observation taken as part of the same observing sequence in order to maintain contemporaneity. This background observation consists of two exposures in a 2-point dither pattern optimised for extended sources, with the same number of groups per integration as for N Car. This gives a total exposure time of 263.63 s for each of the four channels. Both of these observations were taken on January 11, 2023. A more detailed description of this observing sequence is provided in <cit.>. Target acquisition is performed for both η Tel A and N Car using the stars themselves, so that the target is well centred within the FOV. This is done to minimise the difference between the two pointings, since effects like fringing can be corrected with varying degrees of effectiveness depending on the offset <cit.>. 
§.§ Data Reduction We reduce the raw data using version of the JWST Spectroscopic Pipeline, with CRDS context . We use the same pipeline set-up for the η Tel science, N Car reference, and both background observations. The pipeline comprises of three key stages: , , and . The stage applies detector-level corrections to the raw data for each individual exposure by fitting accumulating counts (`ramps') into count-rates ('slopes'). Since the background estimates are different based on the number of groups per integration, with the threshold being at around 20 groups/integration, it is necessary to ensure at this stage that the number of groups/integration being included is the same between our science and science-background observations (D. Law, private communication). The SMP-LMC-058 background has 45 groups/integration, which is much higher than both the 5 groups/integration for Channels 1 and 2, and the 15 groups/integration for Channels 3 and 4, of our η Tel science observation. As such, we customise the script from the pipeline so that only the first 5 groups for the MRSSHORT detector (Channels 1 and 2), and the first 15 for MRSLONG (Channels 3 and 4), are used when running on the raw SMP-LMC-058 background data. We also set the jump detection threshold step from 3-σ to 100-σ to prevent the introduction of artefacts into the calibrated data, which occurs due to an over-flagging of jumps in the raw data when using the default pipeline settings. At the stage, specific instrument calibrations are applied to the individual exposure outputs from , in order to calibrate the data into physical astrometric and brightness units. Additionally, for the background observations, a 1D spectrum is extracted for each exposure. We do not make any changes to the default pipeline settings for this stage. The stage takes the corrected exposures from and combines the 4 dither positions per exposure into a single 3D spectral cube, consisting of one wavelength axis and two spatial axes. We build cubes separately for each of the 12 MIRI MRS sub-bands to avoid averaging different measurements from each of the three wavelength gratings across the four IFU channels. Master background subtraction from the background spectra extracted in is also applied at this stage. In the step, we set the coordinate system to `' in order to avoid interpolation of the cubes from the instrument to sky frames, as well as to facilitate subsequent PSF subtraction using the science and reference cubes. We build our spectral cubes using the algorithm <cit.>, retaining the default pipeline pixel sizes for each channel (i.e. 0.13" for Channel 1, 0.17" for Channel 2, 0.20" for Channel 3, and 0.35" for Channel 4). §.§ PSF Subtraction The resolved disk and the companion are both ∼10^-4 times of magnitude fainter than the primary star <cit.>. Thus, to recover the spatial extent of the debris disk around η Tel A, as well as improve the S/N at which the brown dwarf companion is detected, we perform classical reference PSF subtraction on the calibrated data cubes output by the pipeline. To do this, we calculate the centroids for both η Tel A and N Car by fitting a 2D Gaussian to each wavelength slice in the cubes. Averaging over the centroids for all slices in the cube gives us the final centroid positions for each cube. We then interpolate the N Car cubes to the nearest value so that the location of the N Car centroid in each slice aligns with that of the η Tel centroid. 
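A schematic version of the centroid-and-align step just described might look as follows; this is a sketch only, assuming photutils' 2D-Gaussian centroiding and a nearest-pixel shift, with small synthetic cubes standing in for the real sub-band data and all variable names purely illustrative.

# Sketch of the centroid-and-align step; synthetic cubes stand in for the
# real (nwave, ny, nx) sub-band cubes.
import numpy as np
from photutils.centroids import centroid_2dg
from scipy.ndimage import shift

yy, xx = np.mgrid[0:31, 0:31]
def fake_cube(x0, y0, nwave=5):
    psf = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * 2.0 ** 2))
    return np.repeat(psf[None, :, :], nwave, axis=0)

sci_cube = fake_cube(15.2, 16.1)    # stands in for the eta Tel cube
ref_cube = fake_cube(14.6, 15.4)    # stands in for the N Car cube

def mean_centroid(cube):
    """Fit a 2D Gaussian centroid to every wavelength slice and average."""
    cents = np.array([centroid_2dg(plane) for plane in cube])   # (nwave, 2) as (x, y)
    return cents.mean(axis=0)

sci_xy, ref_xy = mean_centroid(sci_cube), mean_centroid(ref_cube)

# Shift the reference cube so its centroid lands on the science centroid;
# order=0 gives the nearest-pixel behaviour described in the text.
dy, dx = sci_xy[1] - ref_xy[1], sci_xy[0] - ref_xy[0]
aligned_ref = np.array([shift(plane, (dy, dx), order=0) for plane in ref_cube])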
After scaling the flux of the N Car slices to an η Tel A photosphere model from <cit.>, we finally perform a slice-by-slice subtraction of the N Car cube from the η Tel cube. We scale the PSF to the η Tel photosphere to obtain the total flux contribution of the disk; however, in using this scaling, we do not see the double-lobed structure reported in <cit.>. We then apply a second PSF scaling, following the method used by <cit.>, in which we scale the flux of the N Car slices to the peak flux of the observed η Tel slices, before subtracting N Car from η Tel. With this scaling, we are able to recover the spatially resolved flux component of the debris disk, and identify the two lobed structure expected from a compact, edge-on disk. [fig:1]Figures 1 and [fig:2]2 show an example slice of the calibrated MIRI MRS η Tel data before and after the peak-flux scaled PSF subtraction, with the latter indicating the location of all astronomical objects within the FOV. In both scaling methods, the brown dwarf companion can be seen in the top-left corner of the IFU-aligned cubes across sub-bands 3A to 4A (∼11–21 μ m). Additionally, using the second scaling method, we detect the presence of background galaxy 2CX0 J192251.5-542530 within the FOV in sub-bands 1A to 3C, and a second, previously undetected, extended source that is likely a background galaxy in sub-bands 1A to 3A. The calibrated, PSF-subtracted spectral cubes are the final data products which we use for our following analysis. § THE Η TEL A DEBRIS DISK §.§ A New Epoch of Mid-Infrared Spectroscopy We extract the η Tel A spectrum over 5–29 μ m by performing point-source aperture photometry with the function of the pipeline. We set our aperture radius to be 2.0 × FWHM. As the pipeline-produced spectrum shows slight vertical offsets between the MRS sub-bands, particularly at longer wavelengths (i.e. Channels 3 and 4), we also perform absolute flux calibration by applying a Relative Spectral Response Function (RSRF), as described by: RSRF=reference model spectrum/reference extracted spectrum For the RSRF, we use N Car to calibrate our observations. We use an N Car photosphere model from <cit.> as the model spectrum, and extract a spectrum from the MRS observations of N Car across Channels 1–4 using the same aperture size as for our η Tel A extraction. We truncate the calibrated spectrum at 25.63 μ m, beyond which noise significantly worsens the SNR. Our calibrated 5–26 μ m MIRI MRS spectrum for η Tel A is shown in <ref>. We find that the MIRI MRS spectrum is consistent with the updated reduction of the 2004 Spitzer IRS spectrum <cit.>, indicating that the disk has not noticeably evolved over time. Although the lower angular resolution of the IRS means that the flux of the brown dwarf companion is included in its aperture, we do not consider the IRS spectrum of η Tel A to be significantly impacted by flux from η Tel B since the primary is at minimum ∼10^3 brighter than the companion over the IRS wavelength range, as indicated by comparison with our atmosphere model for the brown dwarf (see [ssec:Batmomod]Section 4.1 for modelling details). Following photosphere subtraction from the MIRI MRS spectrum, we clearly recover the 10 μ m silicate feature suggested by <cit.>, and identify a broad 20 μ m feature for the first time in this disk. Broad spectral features at 10 and 20 μ m have been observed in many debris disks and T Tauri stars, with the 20 μ m feature being fairly common in cases where a 10 μ m feature is present (e.g. <cit.>, <cit.>). 
These two features are indicative of the presence of amorphous silicates, which are known to show broad spectral bands at 10 μ m due to Si-O stretching and 20 μ m due to O-Si-O vibrations <cit.>. Indeed, previous modelling of the Spitzer IRS data predicted a 20 μ m dust component contribution to the overall spectrum for a composition of large amorphous olivine grains <cit.>, although no such feature was evident in the IRS data, likely due to its lower SNR. §.§ Dust Modelling To better understand the disk's spectral features, we perform detailed modelling of the new MIRI MRS spectrum over 7.5–26.9 μ m using code originally developed by <cit.> to model silicate and silica features in the Spitzer IRS spectra of T Tauri stars. We truncate the MRS spectrum shortwards of 7.5 μ m, as the fitting code attempts to reproduce wiggles in the spectrum at shorter wavelengths created by the incomplete correction of stellar absorption features. We also truncate the spectrum longwards of 26.9 μ m as the spectrum becomes substantially noisier at longer wavelengths. The dusty disk around η Tel is believed to contain dust grains that radiate as a featureless continuum <cit.>. We assume a simplified case of two `bands' of dust populations, and set uniform priors on the temperatures of the cold and warm dust populations to T_c=80–200 K and T_w=201–800 K respectively. These temperature ranges are divided into 7 steps (i.e. 17 K and 86K increments), which are explored over to determine the best fit. We obtain a best-fit warm black body dust temperature of T_w = 319±59 K and a cool black body dust temperature of T_c = 127 ± 25 K; here, the uncertainties do not represent the 1-σ confidence level, but rather the temperature fitting precision given the prior range on dust grain temperatures and the number of temperature bins used for the fit. <ref> shows the warm and cool black body components overlaid onto our photosphere subtracted MRS spectrum to show the contribution of the black body continuum to the overall spectrum. We find that the best-fit warm and cool dust temperatures are broadly consistent with those found for the 2004 Spitzer IRS spectrum (T_w = 370 K, T_c =115 K; <cit.>). To more clearly view the spectral features, we then subtract the continuum. Previous modelling of the Spitzer IRS η Tel spectrum suggested that the 10 μ m emission was due to the presence of amorphous silicates in the disk's warm dust population <cit.>. Modelling also suggested that the same warm amorphous silicates could give rise to emission at 20 μ m, although such a feature was not detected in the Spitzer IRS data. We model the 10 and 20 μ m spectral features and find that large warm amorphous olivine is the primary contributor to the 10 μ m feature, consistent with the literature. The broad 20 μ m feature, however, appears to be best fit by primarily a combination of large warm and cool amorphous olivine, large cool amorphous pyroxene, with some contribution from warm and cool silica at longer wavelengths. Our best-fit model for the 10 and 20 μ m spectral features is shown in <ref>. The χ^2_red for the entire spectral fit is 3.8. We note that the 20 μ m feature appears shifted to a slightly longer wavelength compared to the model. This, along with the presence of some small peaky structures in the 10 μ m feature, suggests that there may be additional dust components contributing to the data that we have not presently accounted for in our modelling. 
More detailed modelling, which is outside the scope of this work, may help to resolve this mismatch between the model and the data. §.§ The Spatial Distribution of Dust in the Disk Following PSF subtraction of the science images, we are able to obtain information about the spatial distribution of dust in the disk from 8.67–27.89 μ m (MRS sub-bands 2B to 4C). By scaling the PSF to the photosphere of the star before subtraction, we can recover the total flux contribution from the dust in the disk. In these images, unresolved excess flux dominates. As a result, we perform a different scaling of the PSF to the peak flux of the science images before subtraction, as in <cit.>, to highlight the spatially extended emission. <ref> and [fig:7]7 show the collapsed cube images of the disk resulting from these two PSF scalings. We sum up all the wavelength slices in each cube in order to obtain the highest possible SNR. We assess which features in the cubes are due to real disk morphology, and which are due to artefacts. At short wavelengths, we find inconsistent structures close to the location of the star in both scalings; these are likely due to PSF residuals. The ellipsoidal feature in the upper left corner of the Channel 1 & 2 sub-bands is background galaxy 2CX0 J192251.5-542530. We note the 3-σ detection of two small unknown features: one to the immediate right of the primary across across sub-bands 1A–2A, and one in the lower right corner of 1A–1C and 3A. The detection of these features across several sub-bands makes them unlikely to be due to warm pixels. However, interpreting their spectra has proved challenging due to their low S/N and significant discontinuities between sub-bands; as such, the sources of the two features remain inconclusive. Emission from the disk itself starts to pick up from ∼9 μ m (sub-band 2B) onwards. We observe an apparent increase in radial extent of the disk with wavelength, which has two potential explanations. If real, this could be due to the increased sensitivity at longer wavelengths to cooler dust populations farther out from the star. This may indicate that the η Tel disk possesses a more continuous structure, contrary to the 2-component structure suggested by <cit.>, which consists of a narrow ring of material at a fixed distance of ∼24 au from the star, along with an unresolved flux component at inwards of ∼4 au. Alternatively, the apparent radial increase could be an artefact introduced by the increase in pixel and PSF size with wavelength `smearing' the flux from the dust, thus causing it to appear farther out from the star at longer wavelengths. As resolving this degeneracy will require modelling that is beyond the scope of this work, we focus our considerations on the case for the disk with a 2-component structure, as held in the literature <cit.>. In the second scaling, we also note an apparent increase in size of an inner cavity between the lobes with wavelength. Since this apparent increase is not observed in both scalings, we conclude it is likely an artefact introduced by the PSF subtraction, due to the increasing pixel and PSF size with wavelength. § DOES Η TEL B HAVE AN INFRARED EXCESS? Since η Tel B is young, and both η Tel A and HD 181327 are known to host debris disks, it is natural to wonder if η Tel B likewise possesses a debris disk. Motivated by this question, we seek to determine if the companion possesses an infrared excess at longer wavelengths indicative of the presence of warmed circumstellar dust. 
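As a rough sense of scale for the wavelength-dependent smearing invoked in the previous subsection, a diffraction-limited estimate of the beam (a lower bound on the true MRS PSF width) already spans a sizeable fraction of the disk at the longest wavelengths. The short sketch below assumes a 6.5 m aperture and the ∼49.5 pc distance quoted in the introduction.

# Diffraction-limited beam estimate (the real MRS PSF is somewhat broader);
# illustrates how much apparent radial growth the PSF alone could produce.
import numpy as np

D = 6.5           # JWST primary aperture diameter [m]
dist_pc = 49.5    # distance quoted in the introduction [pc]
for lam_um in (9.0, 18.0, 26.0):
    fwhm_arcsec = np.degrees(1.22 * lam_um * 1e-6 / D) * 3600.0
    print(f"{lam_um:5.1f} um  {fwhm_arcsec:4.2f} arcsec  ~{fwhm_arcsec * dist_pc:4.0f} au")
# roughly 0.35" (17 au) at 9 um, 0.70" (35 au) at 18 um and 1.0" (50 au) at 26 um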
§.§ η Tel B Atmosphere Modelling To understand the extracted spectrum of η Tel B from the MIRI MRS data, we must first understand the companion's expected atmosphere. We model the atmosphere of η Tel B with the package <cit.> by fitting existing spectra and photometry for the brown dwarf companion to a BT-SETTL (CIFIST) model grid <cit.>. Our model fit uses spectroscopic measurements from HST/STIS <cit.> and VLT/SINFONI <cit.>, as well as photometric measurements from HST/NICMOS H band <cit.> and Gaia G band <cit.>. Additionally, we use photometry derived from the observed magnitude difference of η Tel A and B in the following instrument filters: HST/NICMOS F110W <cit.>, Paranal/ISAAC K band <cit.>, Paranal/NACO H, K, and L_p bands <cit.>, and VLT/VISIR PAH <cit.>. We vary parameters for T_eff,B, log(g), radius, and parallax, setting uniform priors for the first three (<ref>), and Gaussian priors on the parallax and the companion's mass. We assume that the parallax is the same as that for the primary, given as 20.6028±0.09 mas in Gaia EDR3 <cit.>. We set a companion mass prior of M_B= 35±15 M_Jup, following <cit.>. The data is weighted such that each dataset, spectroscopic and photometric, is equal. This prevents each point in the spectroscopic datasets from being weighted equally to each photometric point. We use the nested sampling algorithm <cit.> to sample 300 live points from the prior. <ref> summarises the priors used in our model fitting, and the best-fit parameters for η Tel B. The resulting atmospheric model, calculated at the native resolution of MIRI MRS (R∼2700) is shown in <ref> at R=1000. We note that our derived companion mass of M_B=29^+16_-13 M_J is lower than the M_B=47^+5_-6 M_J value obtained by <cit.> using AMES-COND models <cit.>, although it is consistent when considering both sets of error bars. This discrepancy in mass may be due to the fact that we do not account for the age of the system in our atmospheric modelling. §.§ η Tel B Spectrum The detection of warm circumstellar dust around brown dwarfs is dependent on the mass and temperature of the dust, with submillimetre and millimetre observations being most suited to identifying the presence of traditional 100 au sized debris disks <cit.>. However, if η Tel B does possess a compact debris disk, we may also be able to detect it via an infrared excess across the MRS wavelength range. To extract an MRS spectrum of η Tel B, we perform aperture photometry at each wavelength slice of the cubes for sub-bands 3A, 3B, 3C, and 4A. We omit sub-bands 4B and 4C, as the increase in background noise and lower instrument throughput renders the companion irrecoverable at these longer wavelengths (<ref>). Since η Tel B is a faint source located at the edge of the MRS FOV, the pipeline does not do a satisfactory job of extracting the spectrum. As such, we manually employ a tapered-column extraction technique for the η Tel B unresolved point source. This involves increasing the aperture size over wavelength to account for the diffraction limit being proportional to wavelength (θ=1.22λ/d). Due to the relative faintness of η Tel B, it is difficult to empirically obtain its FWHM. As such, we use the FWHM calculated for the reference star N Car, since the FWHM of the instrument should behave similarly irrespective of observing target. Additionally, if the aperture is too large, it could include additional noise into our extraction. 
Thus we restrict the radius of our aperture to be 0.87 × FWHM in channel 3 and 0.30 × FWHM in channel 4 (or ∼1" on-sky for both channels) multiplied by a factor of λ/λ_0, in order to reduce flux contribution from the background and to improve the SNR of the extraction. We again perform absolute flux calibration and align discontinuities between subbands in the spectra by applying an N Car RSRF; in this case, however, we extract our N Car spectrum across sub-bands 3A to 4A using the same aperture sizes and tapered-column method as for our η Tel B extraction. As the calibrated spectrum remains fairly noisy, particularly for 4A, we bin the spectra for sub-bands 3A–3C by a factor of 10, and collapse 4A into a single photometric point. We present the final η Tel B spectrum in <ref>, overplotted onto our atmosphere model (see [ssec:Batmomod]Section 4.1). To calculate the error-bars, we perform an injection-recovery test of N Car; this involves scaling N Car to the model flux of η Tel B and injecting it into the η Tel Stage 3 cubes on the opposite side of the primary to η Tel B, before applying PSF subtraction and extracting its spectra using the exact same methods detailed above. Taking the average residuals between the injected and recovered spectrum for each sub-band gives us the error-bars for that sub-band. Our spectrum shows a good fit to the atmosphere model. This indicates that η Tel B does not possess an infrared excess between 11–24 μ m. We note that this does not necessarily rule out the presence of circumstellar dust around η Tel B; the companion's low luminosity may mean that there is very little dust at 100–260 K temperatures, which would make any excess in the 11–24 μ m range simply too faint to be identified by MIRI MRS. However, from our current observations, we conclude that we do not identify the presence of a debris disk around η Tel B. § THE ORBIT OF Η TEL B §.§ A New Epoch of Astrometry The high angular resolution of MIRI MRS allows us to obtain positional accuracy for η Tel A and B to 10 and 23 mas for Channels 3 and 4 respectively <cit.>. This enables us to derive a new epoch of relative astrometry, extending the baseline of astrometric measurements by 6 years since the most recent measurement with VLT SPHERE <cit.>, and by 25 years since the first measurement with HST NICMOS <cit.>. To do this, we fit a 2D Gaussian to the collapsed image of each sub-band cube from 3A–4A in order to first identify the pixel coordinates of the η Tel A and B centroids; we exclude the cubes for sub-bands 4B and 4C due to higher noise levels increasing the uncertainty in the precise location of the η Tel B centroid. Transforming the pixel coordinates to right ascension (RA) and declination (Dec) values using the World Coordinate System (WCS) then allows us to calculate the separation and position angle of η Tel B with respect to the primary star. To obtain the final angular separation and positional angle, we average over the results from sub-bands 3A–4A, estimating the uncertainties as the standard deviation between the measurements. We calculate a final separation of 4199±15 mas (∼200 au) and a position angle of 167.49±0.18^∘. As both the primary and companion are observed in Gaia DR3 <cit.>, we also use it calculate the Gaia relative astrometry: ρ=4197.3±3.7 mas and θ=167.44±0.09^∘. Our new astrometric measurements are shown in <ref>, right panel), alongside all previous relative astrometry reported in the literature <cit.> and our calculated relative astrometry from Gaia. 
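The separation and position angle quoted above follow from the standard spherical-offset calculation between the two fitted positions; a sketch using astropy is given below, with placeholder coordinates standing in for the measured per-sub-band WCS positions.

# Sketch of the separation / position-angle computation; the coordinates
# below are placeholders, not the measured positions of eta Tel A and B.
from astropy.coordinates import SkyCoord
import astropy.units as u

primary   = SkyCoord(290.7135 * u.deg, -54.4239 * u.deg)
companion = SkyCoord(290.7140 * u.deg, -54.4250 * u.deg)

rho   = primary.separation(companion).to(u.mas)        # angular separation
theta = primary.position_angle(companion).to(u.deg)    # PA, measured east of north
print(rho, theta)

The per-sub-band (3A–4A) values computed this way are then averaged, with the scatter between sub-bands taken as the uncertainty, as described above.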
We observe no significant change in separation or proper motion. The new MIRI MRS measurements for both separation and position angle are consistent with η Tel B being a common proper motion companion to η Tel A located at or near the apocentre of a long-period orbit. This confirms previous analysis <cit.>. §.§ Orbit Fitting To better characterise the orbital properties of η Tel B, and to understand any potential companion-disk interactions, it is necessary to first understand the orbit of η Tel B. We perform an orbital fit for the brown dwarf companion with the Python package <cit.>, using relative astrometry and stellar absolute astrometry from the DR3 Hipparcos-Gaia Catalogue of Accelerations (HGCA; <cit.>). Additionally, although HARPS radial velocity (RV) data exists for η Tel A <cit.>, the primary shows a high rms RV scatter of 12.805±0.007 km s^-1 due to its A0V spectral type, youth, and fast rotation. This makes it difficult to obtain meaningful constraints from the RV data. As such, we omit RV data from our orbital fit for the companion. We run a parallel-tempered Markov-Chain Monte Carlo (MCMC; <cit.> algorithm with 10 temperatures, 500 walkers and 10^6 steps, burning the first 100 steps and thinning every 1000 steps; we select these MCMC parameters to maintain consistency with those used by <cit.>. We set normal priors on the stellar mass (2.09±0.03 M_⊙, <cit.>, and parallax of the system (π=20.6028±0.0988 mas; <cit.>), as well as a uniform prior on the companion mass (0.019–0.048 M_⊙, i.e. 20–50 M_J, <cit.>). Uninformative priors are adopted for all other orbital elements; we use the defaults given in <ref>. We calculate posteriors for 9 parameters: semi-major axis a_B, eccentricity e_B, inclination i_B, argument of pericentre ω_B, longitude of ascending node Ω_B, and epoch of pericentre τ_B. <ref> gives the full list of our derived orbital parameters to 1-σ uncertainties. We also do not specify initial positions for the MCMC chains, instead using the default, which randomly determines the initial position of the walkers such that they are uniformly distributed across the prior phase space. To test for convergence, we check trace plots and posterior histograms for each parameter. We obtain best-fit median values of semi-major axis a_B=142^+18_-11 au, eccentricity e_B=0.50^+0.1_-0.1, and inclination i_B=79^+5_-6 degrees. This gives an apocentre distance of r_max,B=213 au, a pericentre distance of r_min,B=71 au, and an orbital period t_B∼1100 years. A lack of significant change in orbital motion across twenty-five years of observations is therefore reasonable, as we have only observed ∼2% of the companion's total orbit. <ref> shows a sample of 100 potential orbits, as well as the corresponding projected change in separation and position angle for each of these 100 orbits. Due to the long-period nature of η Tel B's orbit, it will be difficult to observe any significant changes within the next decade; placing more robust constraints on the companion's orbital parameters may not be possible until several decades from now. We note that, while our values for a, e, and i are in agreement with the orbital parameters inferred by <cit.>, our values for a and e differ considerably from those derived by <cit.> using <cit.>. To investigate the potential reasons for this discrepancy, we perform several additional fits using (1) their Gaussian prior on the companion mass of 47±15 M_J, (2) only their relative astrometry data, and (3) their initial distribution values. 
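Schematically, the baseline fit described at the start of this subsection corresponds to an orbitize!-style driver call along the following lines (orbitize! is listed among the software used at the end of the paper). The input file name is a placeholder, and both the companion-mass prior and the HGCA absolute-astrometry handling are omitted for brevity, so this should be read as an outline of the configuration rather than the exact fit.

# Outline only: relative-astrometry MCMC fit with the settings quoted above.
# "eta_tel_b_astrometry.csv" is a placeholder data file.
import orbitize.driver

driver = orbitize.driver.Driver(
    "eta_tel_b_astrometry.csv",   # epochs, separations/PAs and uncertainties
    "MCMC",
    1,                            # one secondary body (eta Tel B)
    2.09,                         # stellar mass prior centre [M_sun]
    20.6028,                      # parallax prior centre [mas]
    mass_err=0.03,
    plx_err=0.0988,
    mcmc_kwargs={"num_temps": 10, "num_walkers": 500},
)
samples = driver.sampler.run_sampler(int(1e6), burn_steps=100, thin=1000)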
Corner plots for these additional fits are shown in <ref>. We find that the original derived parameters for a and e remain robust to the change in companion mass prior and the additional Gaia and MIRI MRS relative astrometry points. The fit using the <cit.> initial distribution values returns a bimodal posterior distribution for a and, to a lesser extent, e. The taller peaks (a∼149 au and e=0.5) are consistent with our posteriors, but the shorter peaks (a∼230 au and e∼0.3) are consistent with their results. This suggests that the choice of initial position may have some effect on the posteriors. In particular, instead of using a uniform distribution to set the initial position of the walkers for each parameter, <cit.> use a log-normal distribution for the semi-major axis and a normal distribution for all other orbital values. In difficult cases, such as determining a and e for long-period orbits, this may preferentially concentrate exploration of values to those near the chosen initial values. However, the fact that our fit does not exactly reproduce the posteriors from <cit.> despite using the same initial distributions, and also consistently returns smaller uncertainties, suggests that some more fundamental difference between the two fitting packages may also be contributing to the different fit outcomes. For example, one of the two packages parametrises the companion eccentricity in terms of e_BsinΩ_B and e_BcosΩ_B, whereas the other does not. Further investigation may prove illuminating, but a deep dive into the workings of both fitting packages is outside the scope of this work, so we leave it to future study. We also note that, since observations of η Tel B to date only cover a small fraction of its total orbital period, its astrometric acceleration between Hipparcos and Gaia in the HGCA has a low significance of 1.96-σ for two degrees of freedom <cit.>. As such, while fitting for the stellar absolute astrometry is able to constrain the direction of orbital motion, it is unlikely to provide strong constraints on the dynamical mass of the companion. This is reflected in our median companion mass posterior of M=42±14 M_J, which appears to be largely prior-driven. In our following analysis, we assume the best-fit orbital parameters described in Table 2; however, we acknowledge that these parameters may be in part due to the fitting package used, and as such also reproduce the analysis using the best-fit parameters from <cit.>. § DISCUSSION An isolated, nearly edge-on debris disk comprised of planetesimals on circular orbits will feature a symmetric, double-lobed structure. However, dynamical interactions due to the presence of a massive second body in the system can sculpt the structure of the disk, leading to asymmetries <cit.>. In the case of the η Tel AB system, we find that the eccentricity of the companion's orbit (e_B=0.50) corresponds to a pericentre distance of ∼71 au from the primary star (see <ref>). Given the outer disk's radial extent between r_in∼22 and r_out∼26 au <cit.>, we expect that the companion passes close enough at its pericentre to gravitationally perturb the material within the disk over secular timescales. For low eccentricity orbits, we can expect secular precession to act on a timescale given by (<cit.>, see Eqs. 7 and 8 therein), t_sec=6.15α^-2.5α̅^2/b^(1)_3/2(α_B)0.651t_B/μ For higher eccentricities, the above expression should still give a reasonable estimate. In the case of η Tel A and B, the ratio of perturber to disk semi-major axes is α=a_d/a_B≈24/142≈0.17, with α̅=α since a_B>a_d. 
The Laplace coefficient is b^(1)_3/2(α_B)≈3α≈0.56, and the perturber's orbital period in years is t_B≈1100 yrs (see <ref>). We set the ratio of perturber to star masses as μ≡ M_B/M_*=35M_J/2.09 M_⊙≈0.02, using the companion mass derived in [ssec:Borbfit]Section 5.2. This gives t_sec≈1 Myr. Performing the same calculation using the companion orbital parameters derived by <cit.> provides a similar result of t_sec≈2 Myr. Placed into context with the β Pic moving group age of ∼23 Myr, we should expect that the observed properties of the η Tel A disk are consistent with the stable end-product of secular interactions with η Tel B, regardless of which fit parameters are used. We next discuss the predicted disk properties due to dynamical interaction with η Tel B, and compare these predictions to the observed MRS data. For our analysis, we consider both our best-fit orbital parameters as well as those derived by <cit.>. §.§ Radial Extent of the Disk In the case of material orbiting a primary star, with a secondary binary companion acting as a perturber on the material, there should be a critical semi-major axis at which the orbit of the material is stable against gravitational perturbations from the companion. <cit.> empirically derive an expression for this critical semi-major axis, a_c, as a function of the primary-secondary mass ratio, μ, and the semi-major axis, a_B, and eccentricity, e_B, of the secondary perturber's orbit: a_c = [(0.464±0.006) + (-0.38±0.01)μ + (-0.631±0.034)e_B + (0.586±0.061)μ e_B + (0.15±0.041)e_B^2 + (-0.198±0.074)μ e_B^2]a_B For η Tel A and B where μ∼0.02, we obtain a critial semi-major axis of a_c=26.2 au. This is comparable to the inferred radial extent of the disk (r_out=26 au, <cit.>), suggesting that the observed structure of the disk is consistent with truncation due to the orbit of the brown dwarf companion. For the <cit.> values of a_B=218 au and e_B=0.34, we obtain a_c=57 au. This is greater than the outer radial extent of the disk reported in the literature. However, we note that, if the apparent increase in radial extent of the disk seen in the MIRI MRS data is real and not an artefact of PSF subtraction (see <ref>, <ref> and [fig:7]7), then at greatest extent the disk does not seem to exceed past ∼60 au. This could be consistent with truncation by an η Tel B with a larger semi-major axis of ∼220 au. §.§ Symmetry of the Disk Although an axisymmetric, double-lobed structure is expected for an isolated debris disk, secular perturbations due to the gravitational influence of a second, eccentric body in the system can force the orbit of dust within the disk to become likewise eccentric. This shifts the symmetry of the disk away from the star, resulting in an observable `pericentre glow' as dust at the forced pericentre of the disk is heated by increased proximity to the stellar host <cit.>. Since a particle's forced eccentricity e_f depends only on the eccentricity of the perturber's orbit along with the ratio of its semimajor axis to that of the perturber (<cit.>, Eq. 39), we estimate the forced eccentricity due to η Tel B as follows: e_f ≃5/4a_d/a_B× e_B = 5/424 au/143 au× 0.50 ≈ 0.1 Here, the perturber semi-major axis, a_B, and orbital eccentricity, e_B, come from our derived orbital parameters for η Tel B. We again take the mean planetesimal belt distance from the star, a_d, to be 24 au <cit.>. This gives us planetesimal belt apocentre and pericentre distances of 26.5 au and 21.5 au respectively. 
We then calculate the grain temperature of the dust at both these distances using the following <cit.>: T_gr = 0.707T_*√(R_*/D_gr) where D_gr is the distance of the grains from the star, T_*=9700 K, and R_*=1.7R_⊙ <cit.>. This gives a grain temperature of 118 K at the disk's apocentre, and 132 K at the disk's pericentre. Taking the ratio of blackbody flux densities across λ_c for each MRS sub-band, we estimate an expected brightness asymmetry of 96% at 18 μ m, which we then divide by a factor of 1+e_f to account for particle bunching at apocentre <cit.>. This gives us a final expected brightness asymmetry of 77% at 18 μ m, with the pericentre lobe being brighter. It is worth noting that this may be an overestimation of the brightness asymmetry due to the assumption that the dust is a blackbody; in reality, the temperature of the dust may be hotter. However, given the cool dust component temperature of 127 K from dust modelling (see [ssec:Adustmod]Section 3.2), this would not be a large correction. Likewise, a higher T_eff for the primary would give rise to hotter grain temperatures and a smaller brightness asymmetry. Repeating the above calculations using the best-fit orbital parameters from <cit.> produces a forced eccentricity of e_f=0.04, which should produce a 18 μ m pericentre brightness asymmetry of ∼30%. Both scalings of the PSF-subtracted MRS data cubes, however, appear largely axisymmetric (<ref> and [fig:7]7), which is inconsistent with expectations of an observable brightness asymmetry. <ref> compares an 18 μ m slice of the MIRI MRS data (after peak-scaled PSF subtraction) to a model of the disk with e_f=0.1. Subtracting the MIRI MRS data from the model image reveals residuals that indicate the model is brighter in the pericentre lobe than the data; i.e. the data is less asymmetric than expected from consideration of the system's dynamics. To perform a more rigorous check for potentially fainter asymmetries, we apply angular differential imaging (ADI) to each collapsed sub-band image of the disk. This is done by rotating the image 180^∘ and subtracting it from the unrotated image. We also perform the same ADI on the N Car data in order to check whether any potential structures are due to instrument effects. Although we find some asymmetric structure, it is inconsistent across wavelengths and, more critically, appears in both sets of observations. This indicates that these structures are likely caused by the instrument rather than any real physical asymmetry in the η Tel disk. Thus, we find that the η Tel A disk is essentially axisymmetric, contrary to our expectation of an observable disk asymmetry due to gravitational perturbation by η Tel B. §.§ Mutual Inclination of the Disk and Companion Secular precession induced by the orbit of η Tel B should cause the orbital planes of the disk and the companion to become aligned; i.e., for a 23 Myr system, we should expect to observe an aligned mutual inclination between the disk and the companion, even if the two were initially misaligned. The mutual inclination of the disk and η Tel B can be calculated using, cosi_m=cosi_dcosi_B+sini_dsini_Bcos(Ω_B-Ω_d) where the i terms are inclinations relative to the sky plane and the Ω terms are the longitudes of ascending node. For disk and companion parameters of i_d=90±20 deg and Ω_d=172±1 deg <cit.>, and i_B=79^+5_-6 deg and Ω_B=169^+3_-2 deg (see <ref>, [ssec:Borbfit]Section 5.2), we obtain a mutual inclination of i_m∼11^+15_-14 deg. 
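The chain of estimates above can be reproduced with a few lines of arithmetic; the sketch below assumes pure blackbody grains and (our own rounding of) the median orbital parameters from Table 2, and essentially recovers the ∼26 au truncation radius, the forced eccentricity, the ∼77% pericentre-glow prediction and the ∼11 deg mutual inclination quoted in the text.

# Rough numerical check of the estimates above (blackbody grains assumed).
import numpy as np

mu, a_B, e_B, a_d = 0.02, 142.0, 0.50, 24.0        # mass ratio, companion orbit, belt radius

# Critical semi-major axis for stability (central values of the expression above)
a_c = (0.464 - 0.380 * mu - 0.631 * e_B + 0.586 * mu * e_B
       + 0.150 * e_B**2 - 0.198 * mu * e_B**2) * a_B
print("a_c ~", round(a_c, 1), "au")                 # ~26 au, cf. 26.2 au in the text

# Forced eccentricity and the resulting belt pericentre / apocentre
e_f = 1.25 * (a_d / a_B) * e_B
r_apo, r_per = a_d * (1 + e_f), a_d * (1 - e_f)     # ~26.5 au and ~21.5 au

# Blackbody grain temperatures and the 18-micron flux ratio
T_star, R_star_au = 9700.0, 1.7 * 0.00465           # stellar radius converted to au
T_gr = lambda r: 0.707 * T_star * np.sqrt(R_star_au / r)
planck = lambda T, lam_um: 1.0 / np.expm1(14387.8 / (lam_um * T))
ratio = planck(T_gr(r_per), 18.0) / planck(T_gr(r_apo), 18.0)
print("asymmetry ~", round(100 * (ratio / (1 + e_f) - 1)), "%")   # close to the ~77% quoted

# Mutual inclination between the disk and companion orbital planes
i_d, O_d, i_B, O_B = np.radians([90.0, 172.0, 79.0, 169.0])
cos_im = np.cos(i_d) * np.cos(i_B) + np.sin(i_d) * np.sin(i_B) * np.cos(O_B - O_d)
print("i_m ~", round(np.degrees(np.arccos(cos_im)), 1), "deg")    # ~11 deg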
Thus, we find that the disk and the companion may potentially be misaligned, contrary to expectation. However, further modelling, particularly of the disk's parameters, is needed to improve uncertainties before we can determine whether or not the disk is truly misaligned with the companion's orbit. §.§ An Additional Interior Planet? The absence of compelling evidence for asymmetry in the debris disk, and its potential misalignment with the companion, presents an intriguing puzzle that deserves explanation. Motivated by the work of <cit.> concerning the HD 106906 debris disk, which is perturbed by both an exterior companion and an inner stellar binary, we propose that our observations could be explained by the presence of additional perturbing masses in the η Tel system. In principle, these masses may include either single or multiple planets interior to the disk, and/or the self-gravitational effects of the disk itself, if massive enough. The reasoning behind this is that the presence of such additional masses could counteract the gravitational effects of η Tel B on the debris disk, explaining the identified discrepancies ([ssec:symm]Section 6.2, [ssec:mutinc]6.3). We consider what may be the simplest scenario: an additional, single, yet-undetected planet on a circular orbit completely interior to, and coplanar with, the debris disk (assumed to be massless). It is then possible to constrain the mass m_pl and semimajor axis a_pl of such a planet using the so-called “Laplace radius” <cit.>. The Laplace radius, denoted by r_L, describes the location where the gravitational perturbations experienced by a planetesimal due to both the inner and outer companions are equal and cancel out. Thus, for a given system, planetesimal dynamics interior (exterior) to r_L will be dominated by the inner (outer) companion, with planetesimals lying in the dominant companion’s orbital plane. The Laplace radius, in the limit of m_pl<<M_*, can be written as follows (see equation (1) in <cit.>): r_L^5 = a_B^3 a_pl^2 m_pl/M_B(1-e_B^2)^3/2 Since the observed disk structure seems to be inconsistent with that expected based on secular perturbations due to η Tel B alone ([ssec:symm]Sections 6.2 and [ssec:mutinc]6.3), in <ref> we set the minimum Laplace radius to be the disk’s outermost radius (i.e., r_L≥ r_out≈ 26 au, <cit.>), and solve for m_pl as a function of a_pl. The results are shown in <ref>; a planet whose parameters lie above the light pink line (i.e. values of m_pl and a_pl for which r_L ≥ r_out) could maintain the disk’s axisymmetry and its misalignment with the outer companion. We note that using the <cit.> orbital parameters to solve for m_pl as a function of a_pl (shown in dark pink in <ref>, where again we have set r_L=r_out=26 au) results in a larger possible parameter space for an undetected interior planet. We further constrain the possible parameter space as follows. First, we use MIRI MRS 3- and 5-σ contrast curves to determine instrument detection limits for potential companions. This allows us to rule out the region of the parameter space as shown using the grey lines in <ref>. For comparison, we also calculate the expected contrast for a β Pic b-like planet around η Tel using the following equation: f_1 = (d_2/d_1)^2 × f_2 We use the β Pic b atmosphere model from <cit.> to obtain its flux f_2 at λ_c=5.3, 6.2 and 7.1 μ m (the central wavelengths of sub-bands 1A–1C respectively). The distances of η Tel and β Pic are d_1=47.7 and d_2=19.44 pc respectively. 
We then divide f_2 by the stellar flux at each wavelength to obtain the final contrasts, finding that MIRI MRS should be able to detect a β Pic b-like planet of m_pl∼12 M_J at ≥18 au (within 5-σ, and at ≥ 12 au within 3-σ). Second, assuming that the inner edge of the disk is carved by the overlap of first-order mean motion resonances due to the planet (e.g. <cit.> and references therein), the planet’s semimajor axis cannot exceed a_p=a_d - Δ a_p. Here, Δ a_p is the half-width of the chaotic zone around the planetary orbit given by the following expression <cit.>: Δ a_p ≈ 1.3(m_p/M_*+m_p)^2/7 a_p This is shown in <ref> using a purple curve. Given these bounds, we are able to rule out certain areas of the planet’s possible mass and semimajor axis, as summarised in <ref>. The central white region therein represents the allowed parameter space for an undetected planet interior to the disk. Looking at <ref> it is evident that a planet of mass ∼0.7–30 M_J and semimajor axis ∼3–19 au may be responsible for the observed disk structure (alternatively, 0.15–40 M_J between 1.5–20 au if using companion orbital parameters from <cit.>). That being said, however, we stress that our aim here is not to offer a quantitative prediction but rather to highlight that the observed disk structure is a plausible consequence of the presence of an additional planet. This is because several additional factors may influence our predictions, which we discuss below. First, the Laplace radius of Equation (7) does not account for potential interactions between the inner and outer perturbers, and is instead derived assuming a_pl << a_B. Second, in our calculations, we do not account for non-gravitational forces. This is fairly reasonable for mm-sized or larger grains; however, the MIRI MRS data traces μ m-sized grains which are subject to non-gravitational forces such as radiation pressure and gas drag. Given the system’s relatively young age, it is possible for the disk to contain significant amount of gas (see, however, <cit.>) which can affect the dust dynamics, such as through migration and the damping of orbital eccentricities/inclinations <cit.>. If this is the case, then a less massive planet than that identified in <ref> would instead be required to produce the same observed structure. Third, we assume that the debris disk is massless, neglecting its (self-)gravitational effects. However, if the disk is massive enough, it may suppress planetesimal eccentricities forced by the eccentric companion <cit.>, affecting our inferences. In the extreme case, the disk self-gravity alone, without an additional interior planet, can potentially explain the observed disk structure <cit.>. Regardless, the disk self-gravity, even if not dominant, may well affect our planetary inferences by forcing an inward shift in the Laplace radius (see <cit.>). Finally, it is important to acknowledge that the inferred parameter space is contingent upon accurate knowledge of the outer companion’s orbital parameters. Any updates or improvements to these orbital parameters may necessitate revisions to <ref>. § CONCLUSIONS As part of GTO Program 1294, we present MIRI MRS observations of the η Telescopii system. Our main findings are: * We detect an infrared excess in the spectrum of η Tel A, indicating the presence of thermal emission from circumstellar dust. We recover the 10 μ m silicate feature and discover a new broad 20 μ m silicate feature. 
Dust modelling suggests the continuum is best-fit by two different grain populations at 319 K and 127 K, with the 10 and 20 μ m silicate features arising due to the presence of large amorphous grains. * We detect the brown-dwarf companion η Tel B at a separation of 4" in MRS sub-bands 3A to 4A. We calculate a new epoch of astrometry for η Tel B, with ρ =4199±15 mas and PA = 167.36±0.19^∘. Our measurements extend the baseline of astrometric measurements to 25 years. We detect no significant change in orbital motion. * We derive the orbit of η Tel B using relative astrometry and obtain the orbital parameters a_B=142^+18_-11 au, e_B=0.50±0.1, and i_B=79^+5_-6 degrees. This gives an orbital period of t_B∼1100 years. We find that, for our apocentre distance of 214 au, the companion's current location at 209 au validates previous literature suggesting the companion is located at or near apocentre of a long-period orbit. * We present the first 11–21 μ m spectrum of η Tel B. We do not detect an infrared-excess for the object. We perform atmospheric grid model fitting to obtain the following parameters for η Tel B: T_eff,B=2830^+20_-30 K, log(g)=4.3^+0.1_-0.2, R=2.28±0.03 R_J, log L_B/L_⊙=-2.48±0.01, M_B=42±14 M_J. * Using PSF subtraction, we spatially resolve the debris disk around η Tel A from 9.4–26.05 μ m. We find that the disk has an axisymmetric double-lobed structure across the MRS wavelength range. This is inconsistent with the expected 77% brightness asymmetry at 18 μ m due to secular perturbations from η Tel B, assuming our median orbital parameters for the companion. * The disk's axisymmetric structure and potential misalignment with the companion may be due to the presence of another mass in the system that is large enough to dominate over secular precessional effects induced by η Tel B. For the case of a single, yet-undetected planet, we constrain its mass to be between ∼0.7–30 M_J with a semi-major axis within the ∼3–19 au range (<ref>). YC and CC acknowledge that this work is based [in part] on observations made with the NASA/ESA/CSA James Webb Space Telescope. The data were obtained from the Mikulski Archive for Space Telescopes at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127 for JWST. These observations are associated with program #1294. Support for program #1294 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-03127. A.A.S. is supported by the Alexander von Humboldt Foundation through a Humboldt Research Fellowship for postdoctoral researchers. This research has made use of the following software projects: https://astropy.org/Astropy <cit.>, https://matplotlib.org/Matplotlib <cit.>, http://www.numpy.org/NumPy and https://scipy.org/SciPy <cit.>, https://jwst-pipeline.readthedocs.io/en/latest/JWST DataPipeline (Bushouse et al. 2023), https://github.com/farhanferoz/MultiNestMultiNest <cit.>, https://orbitize.readthedocs.io/en/latest/orbitize! <cit.>, http://johannesbuchner.github.io/PyMultiNest/pymultinest.htmlPyMultiNest <cit.>, https://species.readthedocs.io/en/latest/species <cit.>, and the NASA Astrophysics Data System. aasjournal § CORNER PLOTS § ADDITIONAL FITS FOR Η TEL B
http://arxiv.org/abs/2408.12592v1
20240822175629
Exposing Shadow Branches
[ "Chrysanthos Pepi", "Bhargav Reddy Godala", "Krishnam Tibrewala", "Gino Chacon", "Paul V. Gratz", "Daniel A. Jiménez", "Gilles A. Pokam", "David I. August" ]
cs.AR
[ "cs.AR" ]
Texas A&M University cpepis@tamu.edu Princeton University bgodala@princeton.edu Texas A&M University krishnamtibrewala@tamu.edu Intel Corporation ginoachacon@intel.com Texas A&M University pgratz@gratz1.com Texas A&M University djimenez@tamu.edu Intel Corporation gilles.a.pokam@intel.com Princeton University august@princeton.edu 42 Authors have contributed equally. Exposing Shadow Branches David I. August August 26, 2024 ======================== plain § INTRODUCTION Modern data center and commercial workloads place enormous pressure on the processor core front-end and instruction cache <cit.>. Many recent works explore mechanisms to reduce front-end pressure and instruction cache misses <cit.>. Instruction prefetching has been explored extensively to improve the efficiency of the instruction cache <cit.>. In particular, Fetch Directed Instruction Prefetching (FDIP) <cit.> is a common front-end design that decouples the instruction fetch and branch prediction structures, allowing for an instruction prefetcher to runahead of the current instruction stream. A decoupled front-end alleviates stalls due to instruction cache misses however its predictions are solely dependent on the branch predictor to provide instruction addresses. Recent works demonstrate that the front-end is a considerable source of performance loss <cit.>, with upwards of 53% of performance <cit.> bounded by the front-end. Figure <ref> shows the average misses per kilo-instructions (MPKI) across a set of 16 L1-I bound commercial workloads. Each bar shows the total MPKI, and the MPKI of BTB misses where the branch is already in the L1-I cache is shown in orange. As the figure shows, the vast majority, 75%, for a nominally sized, 8K-entry BTB, of BTB-missing, unidentified branches are actually present in instruction cache lines that FDIP has previously fetched. To the best of our knowledge, we are the first to observe and recognize this behavior. Nevertheless, these BTB-missing branches have yet to be decoded and inserted into the BTB. This is because they lie either before the branch target that brought the line into the cache or after a taken branch that leaves the cache line. We call these branches in the shadow of the executed basic block "Shadow Branches" as pictured in Figure <ref>. These missing shadow branches are typically rarely encountered, or "cold" branches. Nevertheless, these cold shadow branches cause a significant portion of branch resteers. Even if these branches are previously accessed, they are so infrequently encountered they are not retained in the BTB. In such cases, the cold branches are likely present in the instruction cache but are not decoded and added to the BTB. This is because a different branch instruction, that appears earlier in the instruction's cache line, is encountered first and overshadows the cold branch. A good example of code that creates this sort of behavior would be frequently used functions being placed next to colder functions in the binary. Normally, the branches and returns in frequently used functions would be correctly retained in the BTB but the less used functions co-located on the same cache line would never be decoded until they are finally executed sometime later, leading to a BTB miss on an L1-I cache hitting line. A commonly proposed solution is to enhance FDIP to address both L1-I and BTB misses <cit.>. These prefetchers use the information from the BTB to runahead of the current instruction stream and proactively fill the L1-I and BTB. 
While these prefetchers can reduce the BTB miss rate, they are ultimately dependent on the contents of the BTB to generate predictions, with cold branches unlikely to be on their predicted path. Any mispredictions due to prefetching down the speculative path risks polluting the L1-I and BTB, further exacerbating the front-end bottleneck. Moreover, most of these approaches operate on fixed-length instruction sets since the predecoder relies on the perfect, 4-byte alignment of all instructions. Few proposals <cit.> address the challenges of variable-length instruction sets and those that do similarly seek to fill the BTB based on L1-I misses requiring virtualized metadata storage in the Last-Level Cache. We note that FDIP, in the process of fetching and forwarding cache lines containing code to be executed to the Fetch Engine to forward to decode, often also forwards the bytes containing shadow branches co-resident with true-path branches on those cache lines. Thus, these shadow branches are already fetched into the front-end of the machine. We will leverage this fact to decode these shadow branches in parallel with the true-path branches. §.§ Key Contributions To address the high BTB miss rate in contemporary commercial workloads, here we introduce Skeia [Skeia is Greek for "shadow", thus we use this term to signify our technique.], a new and novel technique to leverage the previously not-decoded bytes on FDIP fetched cache lines. Figure <ref> summarizes the performance benefit that Skeia can provide across a range of baseline BTB sizes. In the figure we show the percent speedup of several designs, normalized against a small, 4K entry BTB. The performance here is the geomean average across all benchmarks examined. In the plot there are three lines, the lowest, blue line represents the performance a stock BTB of the given number of entries. The red line shows the performance of a stock BTB with an extra 12.25KB of storage space, 12.25KB being the size of the default Skeia structures. The top, green line shows the performance of a system with Skeia added. In each case, for all sizes of BTB, we see that our proposed Skeia design produces approximately 2X the gain of giving the same state space to the BTB. This paper makes the following contributions: * The first work to observe and identify the phenomena of cold, not-decoded "Shadow Branches" on FDIP fetched cache lines. * Introduces Skeia, a new and novel mechanism designed for the speculative identification and decoding of shadow branches, applicable to both fixed and variable-length instructions. * Introduces an efficient and small structure, the Shadow Branch Buffer (SBB), for storing these shadow branches that can be filled off the critical path and accessed in parallel with the real BTB, achieving a Geomean ∼5.7% performance improvement with only 12.25KB of state. * The branches stored in the SBB are notably different compared to those in a similarly sized BTB. Our findings indicate that allocating the same amount of state storage space exclusively to the SBB, as opposed to BTB, results in higher performance gains across nearly all BTB sizes. § BACKGROUND AND MOTIVATION This section reviews the background of modern processor front-ends, the decoding of CISC variable-length instructions, and branch types. It also discusses the motivation for the work concerning BTB scaling and shadow branch placement. §.§ Fetch-Directed Instruction Prefetching Figure <ref> depicts the typical superscalar core front-end microarchitecture. 
Notably, it shows the decoupling of the Instruction Fetch Unit (IFU) from the Instruction Address Generator (IAG) by the presence of the Fetch Target Queue (FTQ). The Branch Prediction Unit (BPU), a critical component of the IAG, comprises the conditional branch predictor, the direct jump address predictor (commonly referred to as the Branch Target Buffer or BTB), the indirect jump predictor, and a return address stack. These elements collectively provide prediction data for the IAG, enabling it to speculatively compute the address of the next Basic Block and insert this in the form of predicted cache line addresses into the FTQ. The FTQ operates as a First-In-First-Out queue, capturing the targets computed by the IAG along the predicted execution path. Each entry in the FTQ corresponds to a Basic Block, and the cache lines associated with each Basic Block are identified and prefetched into the L1-I cache. This prefetching mechanism allows non-resident instruction blocks to be prefetched into the L1-I cache upon entry into the FTQ, rather than waiting for them to be fetched on demand when the address reaches the IFU. Typically, the IAG operates ahead of the back-end, ensuring that the FTQ remains consistently filled, except in cases of pipeline squashes. The effectiveness of FDIP hinges greatly on the BTB. When FDIP encounters a BTB miss, it risks failing to identify a branch, potentially leading to fetching incorrect I-cache lines for taken branches. These unused prefetches are placed in the cache regardless of the later branch resteer. §.§ CISC Variable Length Instruction Decoding Typical CISC (Complex Instruction Set Computer) instruction sets, such as x86, encode instructions using a variable number of bytes. As a result, decoding CISC instructions incurs some serialization, making parallel decoding of multiple instructions, as required in superscalar cores, more challenging since the beginning byte of an instruction is only known in two cases: one, when the prior instruction is decoded and its length is known; and two, when a branch target, i.e. entry point to the cache line, location is known. In CISC instruction decoding, these variable-length instructions must be parsed and translated into micro-operations (uops) that the processor can execute. Decoding begins when the cache line containing the to-be executed basic block is fetched from the I-Cache by the IFU. When the control flow shifts to a new cache line, such as through a branch, the decoding process advances from the first byte of the branch target address in that new line. That said, the taken branch leading to that line is typically not the final instruction in the cache line, nor is the branch target at the cache line's beginning. Consequently, the bytes following a taken branch and those preceding the branch target entry point in a cache line remain undecoded during branch execution. We note, however, that these shadow bytes are nevertheless fetched from the I-Cache along with the rest of the cache line, though they are ultimately unused or not decoded. These shadow bytes could contain a branch instruction that, if decoded and captured in a structure, could allow FDIP to continue running ahead of the current instruction stream. We categorize these shadow branches into Head (beginning to entry point) and Tail (exit point to end) shadow branches. The variability in instruction lengths within CISC architectures, makes predicting instruction boundaries difficult. 
Unlike RISC's (Reduced Instruction Set Computer) fixed-length instructions, such as ARM, CISC instructions can range from 1 to 15 bytes. This complexity requires sophisticated decoding mechanisms to identify the start of valid instruction sequences accurately. §.§ Head and Tail Shadow Branches Figure <ref> shows the byte-by-byte contents of segments of two instruction cache lines. The figure shows the boundaries between instructions consecutively (without border), and branch instructions are shown in red. Figure <ref>(a) shows a cache line segment with an entry offset of byte 24, implying that bytes 0 through 23 (shown as shaded in the figure) are fetched from the L1-I cache and fed into the decoder, only to be ignored as not part of the basic-block beginning at byte 24. We designate branches in bytes 0 through 23 as Head shadow branches, shown in red in the figure. Figure <ref>(b) presents a segment of a different cache line that exits at byte 12. Bytes 13 to 63 are also loaded into the L1-I cache but are not sent to the decoder as the control flow directs the front-end away from these instructions before they are reached. Any branches these byes contain are classified as Tail shadow branches, also shown with a red font in the figure. §.§ Branch Types Branch instructions are fundamental components of a processor's Instruction Set Architecture (ISA), allowing the flow of execution to jump to different parts of a program sometimes conditionally. There are several types of branch instructions, each serving a specific purpose. Here is a quick rundown of the relevant types of branches which we are concerned with in this work. IndirectUnCond: Jump to an address stored in a register or memory location. DirectCond: Jump to a specified address only if a certain condition is met, typically based on condition codes e.g. “jump if zero.” DirectUnCond: Jump to a specified address, changing the flow of execution unconditionally. Return: Returns from a subroutine to the address saved by a CALL instruction. Call: A form of DirectUncond Jump that saves the return address in a register or on the stack. Importantly, since we intend to opportunistically decode, and insert into the SBB, branches on the unused fragments of cache lines before and after the executed basic block, we will not have a history of previous execution to rely on to produce a predicted target. Thus, only those branches where the target is determined from the PC, potentially with an encoded offset (Direct Unconditional branches and Calls) or those where it can be determined from recent Calls (Returns) are viable for our technique. §.§ Motivation Figure <ref> shows the breakdown of BTB misses seen in an 8K-entry BTB for each of the 16 workloads examined, by the type of branch as defined in the previous subsection. One important, broad distinction between branches is whether they are direct or indirect. This distinction tells us whether there is enough information between the bytes encoded in the branch instruction itself and the branch's PC to determine the target of the branch (i.e. direct) or if more information is required, either from a register or memory (i.e. indirect). Thus, the target of a previously unseen direct (non-BTB resident) branch is generally available at decode time. In contrast, previously unseen indirect branches require completion of the instruction in the core before the target can be known. 
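To make the shadow byte ranges and the decode-time distinction concrete, the short sketch below marks the head and tail shadow regions of a 64-byte line for the offsets used in the figure, and records which of the branch kinds listed above are viable for shadow decoding. The offsets and the set of viable kinds simply restate the text; the code is an illustration, not the hardware mechanism.

```python
# Head/tail shadow byte ranges for a 64-byte I-cache line, using the example
# offsets from the figure: an entry point at byte 24 (head shadow) and a
# taken branch whose last byte is 12 (tail shadow).
LINE_SIZE = 64

def head_shadow(entry_offset):
    """Bytes before the branch target that brought the line into the cache."""
    return range(0, entry_offset)

def tail_shadow(exit_offset, line_size=LINE_SIZE):
    """Bytes after the taken branch that leaves the line."""
    return range(exit_offset + 1, line_size)

assert list(head_shadow(24)) == list(range(0, 24))    # Figure (a): bytes 0-23
assert list(tail_shadow(12)) == list(range(13, 64))   # Figure (b): bytes 13-63

# Branch kinds whose targets are resolvable at decode time (PC-relative
# immediate, or the return address stack) and hence viable for the technique:
VIABLE_SHADOW_BRANCHES = {"DirectUnCond", "Call", "Return"}
# Conditional branches are excluded because, with no execution history, their
# outcome cannot be predicted; indirect targets only resolve at execution.
```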
Looking at Figure <ref>, we see that, at least among BTB misses, indirect branches are a vanishingly small percentage for each workload examined. A lack of indirect branches implies that it should be possible to decode a majority of unseen branches, and we should be able to directly insert them into the BTB without requiring the actual execution of those branches to resolve first. Referring back to Figure <ref>, we see that the vast majority of BTB missing branches are either Head or Tail Shadow Branches as defined in Section <ref>. Thus, putting both observations together, it should be feasible to reduce BTB misses significantly by decoding the direct, unused, Head and Tail Shadow Branches on cache lines that FDIP already brings into the core's front-end. This is the goal of Skeia. § SKEIA DESIGN In the context of commercial and data center workloads, the issue of instruction stream resteers resulting from BTB misses on infrequently encountered, or 'cold', branches is a significant challenge that needs to be addressed. Previous solutions attempt to mitigate this issue by prefetching into the BTB, which necessitates substantial hardware modifications or software profiling but does not adequately address a significant portion of cold BTB misses. As discussed previously, we observe that the branches that miss in the BTB are consistently already in the L1-I cache, having been fetched to execute other basic blocks containing instructions in the same cache lines. However, the missed branches remain un-decoded because they are overshadowed by an executed branch that leaves the cache line or by being before the target of a branch into the cache line. Surprisingly, the bytes encoding these shadow branches are often already pulled into the processor front-end by FDIP but they are discarded as being off the current executed path. This section describes our approach to identifying and decoding shadow branch instructions in a variable-length ISA. As discussed in Section <ref>, each cache line can contain either unused Head or Tail bytes, requiring different approach for Head and Tail decoding. The following subsections discuss our approach to decoding those Head and Tail Shadow Branches and the microarchitectural modifications for storing them until their use. §.§ Discovering Shadow Branches Identifying and decoding shadow branches varies depending on whether they are Head or Tail Shadow Branches, with Head Shadow Branches being significantly more challenging to identify. Here, we will discuss the identification and decoding processes for each type of shadow branch in detail. By opportunistically decoding bytes from the beginning of the cache line up to the entry offset (the target of the jump instruction) within that cache line, we can broadly identify Head Shadow Branches. However, Tail branches are identified by decoding bytes starting from a taken branch and continuing to the end of the cache line. §.§ Head Shadow Branch Identifying Head Shadow Branches poses significant challenges in CISC ISAs. With variable-length instruction encoding, identifying instructions looking backward from the branch target may yield more than one possible set of instructions. Figure <ref> illustrates the problem. The figure shows the entry point at byte 7 in the line and the shadow region covering bytes 0-6 in the line. Critically, as the figure shows, this case has two possible valid decodings of instructions in the shadow region. 
This is possible because we do not know if byte 0 in the cache line is the beginning or somewhere in the middle of a variable-length instruction. In this particular case, two possible sets of instructions could be decoded out of bytes 0-6; the first starts with an instruction at byte 0 of the cache line, and the second starts with an instruction at byte 1 of the cache line. Of course, only one of these decodings is actually "true" (here the ret is a bogus branch). Interestingly, in this particular case both "paths" (i.e. starting decode from either byte 0 or 1), converge after the first instruction and the shadow branch (highlighted in red) will be correctly decoded. We call when different decode paths converge to one path, "merging path" and discuss it further below. For Head decoding, we specifically target cache lines related to the start of the FTQ entry. This targeting is strategic because the FTQ entry's beginning corresponds to the target of a branch, as each entry represents a continuous set of instructions. The observation discussed above that instructions may not start at the beginning of the cache line prompts us to target these specific cache lines for head shadow branch decoding. The Shadow Branch Decoding process initiates upon completing the prefetch request and confirming that the cache line is present in the L1-I cache[We emphasize that shadow branch decoding is far off the critical path of the front-end. It is done in parallel with the regular FDIP-to-Decode path, as the branches decoded are typically not used for some time after the initial executed path decode of the line. Thus, the process can take multiple cycles.]. This process consists of two main phases: Index Computation and Path Validation. The first phase focuses on annotating the instruction boundaries, while the second phase is concerned with identifying direct unconditional branches and returns. These stages are structured into distinct stages to optimize the Head Decoding process. These phases are illustrated in Figure <ref>. The first phase, identifies the instruction boundaries within the cache line to determine the beginning of the target segment for shadow branch decoding. This stage involves computations and analytical processes to pinpoint the potential byte offset within the cache line where shadow branch decoding should start from, as elaborated in the following section. Once the Index Computation stage identifies the start offset, it initiates a Path Validation phase, focusing on decoding bytes from this start index up to the branch's target. The target/end-byte offset is identified from the FTQ information of the cache line. §.§.§ Index Computation The Index Computation phase determines the byte length of potential instructions within a byte stream. This process begins by sequentially feeding bytes from position 0 to the decoder until it detects a complete instruction. We record the length of this instruction in a vector called Length. Once the length is recorded, the index is incremented, and the process repeats, continuing until the entry offset is reached. This iterative approach ensures that every potential instruction starting from each byte position is accurately measured and recorded. Referring to Figure <ref>, the decoding process reveals that byte 0, represented by the hexadecimal 45, is identified as a single-byte instruction. 
In a similar vein, the decoder requires a sequence of 5 bytes to successfully decode an instruction commencing at byte 3, indicating the start of a potential instruction spanning 5 bytes (E9 F9 03 00 00). Progressing to the subsequent byte (F9), the decoder is capable of generating an instruction consisting of just one byte. In this scenario, there is an overlap, making it impossible for both decodings to be correct; therefore, we need to validate them. Finally, the presence of a zero in the figure denotes the inability to decode a valid instruction from that specific byte, for instance, at byte 7. §.§.§ Path Validation In the Path Validation phase, the information derived from the Index Computation phase is employed to identify shadow branches within all potential instruction sequences. The process begins by constructing a path starting with the value at index 0 of the Length vector. Subsequently, the index is incremented by this retrieved length value, and the newly acquired length is added to the path. This iterative procedure continues until the path aligns with the entry point of the line. If the path aligns, indicating the accuracy of the Index Computation phase, we check for the presence of any supported branches (jmp, call, return) and insert them into the corresponding SBB structure. For example, in Figure <ref>, we start with path = Length[0] = 1, then path += Length[path] = 3, continuing this process until path is equal to the entry offset if possible. Path 0 and Path 1 lead to a correct path, but Path 2 does not as path = Length[2] = 2, then path += Length[path] = 4, leading to index 7 which is not a valid instruction. It's evident that Path 0 and Path 1 share the same path from a specific point onward, this creates a "merging path". Based on this observation, we introduce optimizations to enhance the accuracy of decoded shadow branches: Valid Encodings: During path generation, if a maximum of six valid paths is reached based on empirical selection criteria, the associated cache line is discarded. This method ensures thorough exploration of potential instruction chains while effectively managing computational resources. i.e. We have 3 valid paths (path: 0, 1, 3) in Figure <ref>. Valid Index: Observing that numerous valid paths converge into a merge path, we examined which path would yield the best performance. Empirical selection revealed that consistently using the First Index provides better results compared to using the Zero Index or the Merge Index. We define the First Index as the index where the first valid path is found, the Zero Index denotes the point where, upon finding a valid path, byte decoding begins starting from index zero, and the Merge Index as the most common recent index among all valid paths. In Figure <ref> for example, the First Valid Index is equal to Zero Index = 0 (Byte 45) and the Merge Index = 3 (byte E9). These optimizations contribute to the accurate decoding of shadow branches and improve the overall efficiency of the Path Validation phase. §.§ Tail Shadow Branch For Tail shadow branch decoding, we specifically target cache lines that mark the end of the FTQ entry. The end of the FTQ entry is denoted by a branch instruction that redirects the control flow away from that particular cache line. This provides the opportunity to decode the remaining shadow bytes. These bytes typically remain undecoded as the control flow exits the cache line, making them prime candidates for efficient shadow branch decoding. 
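The two phases described above can be summarised in a few dozen lines, as sketched below. The instruction decoder here is a deliberately simplified stand-in that recognises only a handful of illustrative encodings (a real x86 length decoder is considerably more involved); the cap on candidate paths and the first-valid-index choice follow the description above, and the tail routine at the end shows why the tail case needs no path search.

```python
# Sketch of Head shadow-branch discovery (Index Computation + Path Validation)
# and of the simpler Tail case.  `toy_decode` is a stand-in for a real
# variable-length decoder and recognises only a few illustrative encodings;
# everything else is treated as undecodable.
MAX_VALID_PATHS = 6                          # cap on candidate paths (per the text)

def toy_decode(buf, i):
    """Return (length, branch_kind_or_None) for byte offset i, or None."""
    op = buf[i]
    if op == 0x90:
        return 1, None                       # one-byte non-branch (e.g. nop)
    if op == 0xC3:
        return 1, "Return"
    if op == 0xE9 and i + 5 <= len(buf):
        return 5, "DirectUnCond"             # pc-relative jump with 4-byte offset
    if op == 0xE8 and i + 5 <= len(buf):
        return 5, "Call"                     # pc-relative call with 4-byte offset
    return None

def index_computation(buf, entry):
    """Length[i] = size of a decodable instruction starting at byte i, else 0."""
    return [(toy_decode(buf, i) or (0,))[0] for i in range(entry)]

def head_shadow_branches(buf, entry):
    """Path Validation: keep start indices whose instruction chain lands
    exactly on the entry offset; harvest branches from the first valid path."""
    length = index_computation(buf, entry)
    valid_paths = []
    for start in range(entry):
        i, path = start, []
        while i < entry and length[i]:
            path.append(i)
            i += length[i]
        if i == entry:
            valid_paths.append(path)
    if not valid_paths or len(valid_paths) > MAX_VALID_PATHS:
        return []                            # nothing decodable, or too ambiguous
    branches = []
    for i in valid_paths[0]:                 # "First Index" heuristic
        _, kind = toy_decode(buf, i)
        if kind in ("DirectUnCond", "Call", "Return"):
            branches.append((i, kind))
    return branches

def tail_shadow_branches(buf, exit_end):
    """Tail case: the first byte after the taken branch is known, so a single
    linear walk suffices (no path search)."""
    i, branches = exit_end, []
    while i < len(buf):
        dec = toy_decode(buf, i)
        if not dec:
            break
        size, kind = dec
        if kind in ("DirectUnCond", "Call", "Return"):
            branches.append((i, kind))
        i += size
    return branches

# Toy usage: a 64-byte line entered at byte 7, with a call hidden in its head
line = bytes([0x90, 0xE8, 0x10, 0x00, 0x00, 0x00, 0x90]) + bytes(57)
print(head_shadow_branches(line, entry=7))   # [(1, 'Call')]
```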
Figure <ref> illustrates Tail Shadow Branch Decoding. As the figure shows, because the branch ending the FTQ entry is known, the start byte of the first instruction in the shadow region is also known. Thus there is only one possible set of instructions in the shadow region making the decode process far more straight forward. §.§ Head v/s Tail Shadow Branch Decoding Discovering head shadow branches involves computational steps that can occasionally yield incorrect results, potentially leading to an incorrect start point for decoding. This can result in the decoding of instructions that do not actually exist in the program's flow. Furthermore, these incorrect instructions might contain nonexistent, or "bogus", branches that could adversely affect the BPU, leading to inaccuracies in branch prediction and thus, performance degradation. When it comes to discovering tail shadow branches, the challenge of determining a starting point for decoding is less pronounced. This is because we already know the start and end points of the taken branch instruction. Therefore, we can begin decoding from the end of the branch instruction until the end of the cache line without the ambiguity that arises with head shadow branches. This clarity in determining the decoding start point simplifies the process for tail shadow branches compared to head shadow branches, where the variability in instruction lengths and the presence of prefixes make identifying the beginning of a valid instruction sequence more complex. The design and implementation of discovering head and tail shadow branches are orthogonal, meaning they are independent and can be utilized separately or in combination based on power, area budget and performance requirements. This flexibility allows architects to choose between focusing on discovering head shadow branches, tail shadow branches, or both, depending on the specific needs of their system. In Section <ref>, the performance impact of discovering just head shadow branches, just tail shadow branches, and both combined is described and discussed in detail. § DESIGN IMPLEMENTATION Figure <ref> illustrates the integration of the Skeia's components into the BPU, including the Shadow Branch Decoder (SBD) and the Shadow Branch Buffer (SBB). When the SBD identifies a supported branch instruction, it inserts it into the corresponding SBB. During a BTB lookup, the SBB is accessed concurrently. If a BTB miss occurs, the SBB will supply a target if one is available. Each of these components is described in detail below. §.§ Shadow Branch Decoder (SBD) The Shadow Branch Decoder (SBD) is a highly simplified decoder focused solely on identifying instruction boundaries and decoding supported branch instructions. Upon fetching a new cache line, the SBD scans the bytes using the algorithms discussed in Section <ref> above for decoding head and tail shadow branches. When SBD identifies a branch, it is inserted into the SBB as shown in Figure <ref>. §.§ Shadow Branch Buffer (SBB) One might assume that once the SBD identifies shadow branches, they should be inserted directly into the BTB. However, the BTB is a critical path component of the front-end, which we want to avoid taking bandwidth away from and prevent inserting possibly incorrect branches into, causing pollution. We propose a novel parallel structure to the BTB that delivers significant performance enhancements despite its simplicity and compact size. 
Figure <ref> illustrates the division of the SBB into two distinct buffers: the DirectUncond SBB (U-SBB) and the Return SBB (R-SBB). The U-SBB exclusively stores Direct Unconditional branches, whereas the R-SBB is dedicated to Return instructions. This approach allocates specific roles to the U-SBB and R-SBB, optimizing both area utilization and performance. Figure <ref> depicts the structure of each entry. An entry consists of 10 bits designated for the tag, 1 bit to indicate validity, and 1 bit for the Least Recently Used (LRU) status per way. BTB entries allocate 2 bits for identifying the branch type and 64 bits for the branch target address. U-SBB entries utilize 1 bit to mark retire instructions and 64 bits for the target address. R-SBB entries use 6 bits for the offset and 1 bit to denote retirement. In total, an entry in the R-SBB requires only 20 bits compared to 78 bits for an entry in the U-SBB. This efficiency allows the SBB structure to retain more overall entries. §.§ Replacement policy In the implementation of the SBB structures, the Least Recently Used (LRU) replacement policy is utilized. Additionally, when a branch target provided by the SBB is committed, the "Retired" bit is set in the corresponding SBB entry. This ensures that "bogus" branches are evicted first, allowing the "useful" branches to remain longer. § SIMULATION METHODOLOGY The section outlines the simulation framework, the large instruction footprint workloads and different configurations to evaluate Skeia, our shadow branch decoding technique. §.§ Baseline Simulation Model Our baseline CPU setup mimics the Golden Cove  <cit.> (commercially known as Alder Lake) CPU core microarchitecture using the gem5 simulator <cit.>, as detailed in Table <ref>. The study employs simulating workloads through an out-of-order, execution-driven CPU model (O3CPU) within Full system simulation, which emulates a complete operating system (Ubuntu) and runs multi-threaded JAVA applications. The O3CPU has been extended to model branch-misprediction-based wrong path execution. Initially, the workloads undergo a warm-up phase of around 10 million instructions, during which the caches, branch predictor, and other structures are primed. Following this warm-up phase, the simulation transitions to detail mode (O3CPU) and continues for an additional 100 million instructions. Also, we used the cacti <cit.> tool to approximate the latency as the BTB scales. §.§ Core Front-End Modeling One notable contribution and unique aspect of this work is our faithful modeling of an exceptionally aggressive processor front-end. We achieve this by extending gem5's O3CPU model to incorporate FDIP, enabling support for a decoupled front-end. Given that FDIP's performance directly depends on the branch predictor's accuracy, we enhanced gem5's BPU by integrating an ITTAGE indirect predictor <cit.> and employing with an 8K-entry BTB. We also integrate support for the BPU indirect predictor and the BTB to queue predicted cache lines into the FTQ. The FTQ is capable of directly issuing prefetches into the L1-I cache. In the event of control flow resteers, the FTQ is flushed before resuming fetching from the correct path. Since gem5 operates in an execution-driven manner, it accurately models the effects of such wrong-path resteers. In our baseline setup, we employ a 24-entry FTQ, with each entry corresponding to a basic block. 
This configuration strikes a balance by providing enough depth to tolerate miss latency while avoiding excessive front-end aggressiveness that could lead to adverse effects. Commercial processor vendors have been using FDIP-based front-end designs for over a decade, as evidenced by recently disclosed commercial CPU designs <cit.>. Ishii et al. <cit.> raised similar concerns regarding the necessity of using FDIP for modern front-end research. Therefore, we use gem5 with the FDIP model used in PDIP <cit.>, with further enhancements, as the baseline in our work. §.§ Benchmarks We evaluated our approach using 16 widely used client-side and server-side multi-threaded workloads with substantial code footprints, thus stressing the CPU front-end. These workloads were selected from various benchmark suites, as listed in Table <ref> <cit.>. Benchmarks with an L1-I MPKI of over 10 are used in this work, as shown in Figure <ref>. BOLT <cit.> is a relatively recent software technique in which the binary is instrumented and then profiled, and this profiling data is used to improve instruction cache and BTB behavior. By its nature, it can only be applied to pre-compiled binaries; thus, of the applications we examined, it was only applied to Verilator (hence all results to this point are shown as "verilator-bolted"). For completeness, we also examined our Skeia technique on a non-bolted version of Verilator. §.§ OS and IO Bottlenecks: Full System In full-system simulation, the kernel being simulated is responsible for software context switches, which adds potential noise when multi-threaded workloads are used. In addition to the kernel scheduler noise, IO interrupts also trigger context switches, which are another source of noise. To address these concerns, we follow an approach similar to that proposed in PDIP <cit.>. We have ensured that the divergence between different configurations is within 0.2%. § EVALUATION This section discusses the results of our evaluation of Skeia using the described framework and methodology. We emphasize Skeia's impact on system performance and its effects on the L1-I and BTB. §.§ Performance Analysis Figure <ref> depicts the relative IPC gains observed across our benchmark suite under different configurations: head-only, tail-only, and combined (head and tail) opportunistic shadow decoding. Overall, shadow decoding demonstrates a significant geomean speedup of 5.72% compared to the baseline performance of FDIP. Interestingly, head shadow decoding yields a 3.74% geomean speedup, while tail shadow decoding alone achieves an even higher improvement of 4.45%. Given the complexity of implementing head shadow decoding, due to the non-determinism of the paths, designers may choose to implement only tail shadow decoding and achieve most of the performance benefit. §.§.§ Performance analysis with respect to BTB Misses Benchmarks , , and show a lower IPC gain relative to the others. We note that these benchmarks also have lower total BTB misses, as illustrated in Figure <ref>. The lack of shadow branches considerably narrows the potential and impact of the opportunistic decoding technique, leading to marginal gains. §.§.§ Performance with respect to L1I cache misses Figure <ref> provides insights into all BTB-miss branches. The stacked bar chart indicates a significant percentage of BTB-miss branches associated with cache lines that were present in the L1I. Notably, exhibits numerous branch cache lines experiencing L1I cache hits but with BTB misses, in contrast to and .
Despite this observation, the IPC gain remains minimal. This behavior can be attributed to the low occurrence of direct calls and returns, as illustrated in Figure <ref>. §.§.§ Skeia MPKI reduction Figure <ref> shows the MPKI rate for the baseline BTB versus the same BTB with an additional 12.25KB of storage space (equal to the size of the SBB) and versus Skeia with its SBB. The figure demonstrates that Skeia reduces the average BTB MPKI by ∼30 when compared to the baseline BTB configuration. In contrast, allocating the same hardware budget used for the SBB to the BTB results in only a ∼18 MPKI reduction. §.§.§ Verilator Bolted vs Pre-Bolt While these results have been elided for brevity and to prevent confusion, we can summarize them as follows: in general, the non-bolted Verilator exhibits significantly more BTB misses. As a result, Skeia improves performance significantly more than it does on the bolted Verilator shown above (10.27%). Given that Skeia achieves a significant performance gain when the application is bolted as well, this indicates that Skeia provides robust gains regardless of software techniques such as BOLT. §.§ SBB Sensitivity Analysis We investigate the performance impact of scaling the sizes of the U-SBB and R-SBB by varying the number of entries while maintaining a constant associativity of 4, relative to the FDIP baseline using an 8K-entry BTB. The top chart in Figure <ref> illustrates the effectiveness of combining both structures and identifies the optimal configuration while maintaining a constant state size of 12.25 KB. The preferred setup entails allocating 768 entries to the U-SBB and 2024 entries to the R-SBB. This distribution results in 7.3125 KB for the U-SBB and 4.9375 KB for the R-SBB, totaling 12.25 KB. In the bottom chart of Figure <ref>, we show how performance scales as more hardware budget is provided while keeping the ratio of entries between the U-SBB and R-SBB the same. § RELATED WORK Previous studies mitigate front-end stalls by minimizing instruction cache misses and improving BTB efficiency through hardware- or software-based approaches. §.§ Hardware-Based Approaches Confluence <cit.> introduces AirBTB, a structure that tracks the branches within cache blocks brought into the L1-I. This information is provided to the branch predictors when a cache line is fetched to allow the front-end to continue speculating the instruction stream. AirBTB's organization allows it to track branches on a cache-line basis; in particular, its design ensures that its contents are present in the L1-I, making it unlikely to retain or consistently identify cold branches present in those cache lines. Boomerang <cit.>, similar to Confluence, attempts to address BTB inefficiencies by extracting branch information from cache lines brought into the L1-I due to a cache miss. On a BTB miss, Boomerang accesses the cache line containing the missing branch information in the L1-I or prefetches it into the L1-I and predecodes it. Any branch information in the cache line is placed in a BTB prefetch buffer until the BTB entry receives a demand request. Boomerang enables aggressive instruction stream speculation but risks polluting both the L1-I and BTB with speculative entries, especially when encountering large instruction footprints. In contrast, Skeia leverages the cache lines already present in the L1-I, does not require additional accesses to the L1-I, and does not consume BTB bandwidth in its operation.
Shotgun <cit.> focuses on BTB-directed instruction fetch and BTB prefetching by dividing the BTB structure into an Unconditional BTB (U-BTB) that tracks conditional branches' targets and the spatial footprint surrounding each branch target, a Conditional BTB (C-BTB) that tracks conditional branch outcomes, and a Return Instruction Buffer (RIB) to track local control flow by recording return instructions. The authors proactively fill the BTB structures by inserting branch information predecoded from missed instruction cache lines into a BTB prefetch buffer, which migrates its entries into a corresponding BTB when it experiences a hit on one of its entries. Shotgun extends Boomerang and suffers similar challenges when faced with large instruction footprints, potentially inducing cache pollution through aggressive L1-I prefetching while not retaining cold branches in the BTB structures. Divide and Conquer <cit.> takes a different approach and divides its front-end prefetching mechanism to target different instruction stream behaviors. They incorporate a sequential prefetcher to identify sequential accesses and a discontinuity prefetcher to identify non-sequential accesses for prefetching into the L1-I. Cache lines prefetched by the discontinuity prefetcher are also sent to a predecoder to identify branches to place in a BTB prefetch buffer. To reduce the number of cache lookups, they track the last eight recently demanded, or prefetched, cache lines and compare incoming prefetches to them to reduce redundant lookups. The authors address variable-length ISAs and record the byte offsets of previous predecoded branches in the LLC. Again, this proposal requires prefetching into the L1-I and additional cache lookups, and while tracking recently accessed cache lines reduces redundant accesses, Skeia leverages cache lines already going to the front end to identify potential BTB misses, incurring no additional L1-I traffic. These approaches successfully reduce the overall miss rate of the BTB but are susceptible to cold branch behavior, suffering from similar hardware overhead constraints as the BTB. Identifying infrequent cold branches requires large metadata structures that can prioritize high-impact cold BTB misses. In contrast, Skeia is a low overhead mechanism that incurs no additional L1-I accesses or pollutes any structures on the critical path with speculative instructions, and leverages instruction cache lines already sent to the front-end to improve performance. §.§ Software-Based Approaches Alternatively to hardware based mechanisms, software-based profile-guided solutions have been proposed to reduce the number of BTB misses and reduce the impact of large instruction footprints. Twig <cit.> takes a software-based approach to improving BTB performance. The authors use software profiling to identify branches that result in a high number of BTB misses. Twig introduces a BTB prefetch instruction that inserts along paths with a high conditional probability of leading to a BTB miss based on a profile of the application's control flow. They further improve this approach by compressing the branch target to lower the BTB prefetch's overhead, storing key-value pairs in memory when the target cannot be compressed. The key-value pairs also encode a spatial footprint of nearby branches to be prefetched with a coalesced BTB prefetch instruction. 
Thermometer <cit.> uses software profiling to collect a trace of branch execution and then simulate an optimal replacement policy to identify a particular branch's temperature or hit-to-taken ratio. The temperature is injected as hints into the unused bits of x86 branch instructions to be passed to and stored in the BTB to inform its replacement policy, prioritizing cold branches for eviction. Software profiling techniques provide a low overhead solution to reduce BTB misses as the majority of collection and analysis is performed offline. However, profiling can be challenging to deploy in commercial systems as changing the underlying application may change the hints' accuracy, requiring re-profiling and re-analysis to regain performance benefits. § CONCLUSIONS Contemporary data center and cloud applications continue to become more complex, with increasing code footprints resulting in a high number of BTB misses. FDIP can help reduce L1-I cache misses, but it heavily depends on the contents of the BPU's tracking structures. When it encounters a BTB miss, the BPU may not identify the current instruction as a branch to FDIP, resulting in mis-speculation and decreased performance. We observe that the vast majority, 75%, of unidentified branches that cause BTB-misses are present in instruction cache lines that FDIP has previously fetched. We find that these branches are in the shadow of executed basic block, already present in the front-end, but are in the cache line before the branch target that brought the line into the cache or are present after a taken branch leaves the cache line. We propose Skeia, a novel shadow branch decoding technique that identifies and decodes unused bytes in cache lines already fetched by FDIP, inserting them into a Shadow Branch Buffer (SBB). We demonstrate that Skeia, with a minimal size of 12.25KB, delivers a geomean speedup of ∼5.7% over a 8K-entry BTB (78KB) and ∼2% versus adding an equal amount of state to the BTB, across 16 L1-I bound, commercial workloads. plain
http://arxiv.org/abs/2408.11939v1
20240821184421
Matmul or No Matmal in the Era of 1-bit LLMs
[ "Jinendra Malekar", "Mohammed E. Elbtity", "Ramtin Zand Co" ]
cs.AI
[ "cs.AI", "cs.LG" ]
Advances in Preference-based Reinforcement Learning: A Review 1st Youssef Abdelkareem Electrical and Computer Engineering University of Waterloo Waterloo, Canada yafathi@uwaterloo.ca 2nd Shady Shehata Electrical and Computer Engineering University of Waterloo Waterloo, Canada sshehata@uwaterloo.ca 3rd Fakhri Karray Electrical and Computer Engineering University of Waterloo Waterloo, Canada karray@uwaterloo.ca ================================================================================================================================================================================================================================================================================================================================================================================== § ABSTRACT The advent of 1-bit large language models (LLMs) has attracted considerable attention and opened up new research opportunities. However, 1-bit LLMs only improve a fraction of models by applying extreme quantization to the projection layers while leaving attention heads unchanged. Therefore, to avoid fundamentally wrong choices of goals in future research, it is crucial to understand the actual improvements in computation and memory usage that 1-bit LLMs can deliver. In this work, we present an adaptation of Amdahl's Law tailored for the 1-bit LLM context, which illustrates how partial improvements in 1-bit LLMs impact overall model performance. Through extensive experiments, we uncover key nuances across different model architectures and hardware configurations, offering a roadmap for future research in the era of 1-bit LLMs. § INTRODUCTION Generative Large language models (LLMs) such as GPT <cit.>, OPT <cit.> and LLaMA <cit.> have attracted significant attention in recent years because of their impressive performance across various tasks, including but not limited to machine translation <cit.>, conversational chatbots <cit.>, question answering <cit.>, and even code generation <cit.>. However, the remarkable performance of LLMs has come with increasing computational and energy costs. This necessitates expanding high-performance computing resources in data centers, potentially delaying green energy commitments and causing adverse environmental impacts <cit.>. Consequently, recent research has focused on optimizing the energy footprint of LLMs through various model optimization and compression techniques like pruning <cit.> and quantization <cit.>. In addition to enhancing the energy efficiency of running LLMs in data centers, model optimization and compression can facilitate the deployment of LLMs on mobile and edge computing devices for real-time processing <cit.>. This advancement could benefit various emerging applications, including social robotics <cit.>, and augmented and virtual reality <cit.>. Among LLM compression approaches, quantization has become a focal point, driven by works such as Smoothquant <cit.>, Omniquant <cit.>, and more recently ShiftAddLLM <cit.> which emphasize post-training quantization (PTQ) as a cost-effective approach avoiding the processes of retraining and fine-tuning which can be particularly costly for LLMs. A newly emerging approach to optimizing and compressing models through quantization-aware training (QAT) involves the extreme quantization of certain portions of LLMs, using binary {-1,1} and ternary {-1,0,1} weights <cit.>. This development has initiated the era of 1-bit LLMs. 
Besides the memory utilization advantages, extreme quantization transforms the costly matrix multiplication (MatMul) operations into more efficient addition and subtraction operations, and thus leading to MatMul-free operations. However, it is important to note that not all MatMul operations can undergo extreme quantization due to the resulting drop in accuracy. Specifically, MatMul operations in the attention heads still require higher precisions, such as 16-bit floating point (FP16) or 8-bit integer (INT8). A follow-up work <cit.> has built upon BitNet <cit.>, aiming to eliminate MatMul operations in attention heads by using Hadamard products. The advent of 1-bit LLMs has paved the way for various new research directions. However, since they currently address only a fraction of the model, a crucial question arises: [] What effects do partial enhancements in 1-bit LLMs have on the overall performance of the model? Answering this question is vital for guiding future research effectively and avoiding misguided or ineffective goals. For example, if the MatMul operations that are replaced with MatMul-free operations in current 1-bit LLMs account for the majority of the model's computation and memory usage, then focusing on optimizing the relatively minor MatMul operations in attention heads may be less impactful. In this case, prioritizing custom hardware development to fully leverage extreme quantization would be more sensible. Conversely, if the conversion of the fraction of MatMul operations to MatMul-free operations in current 1-bit LLMs does not significantly affect overall computation and memory usage, it would be more prudent to focus on optimizing the MatMul operations in the attention heads—currently not optimized in 1-bit LLMs—rather than investing in hardware development for current 1-bit LLMs. In this study, we address the highlighted research question through extensive experiments and analyses on various LLMs. We explore various model hyperparameters across two types of hardware designed for edge and cloud environments. In addition, we propose an adaptation of Amdahl's Law for LLMs to identify how partial improvements in LLM can translate into overall enhancements for the entire model. Our findings reveal important nuances that can guide future research in the 1-bit LLMs era. § RELATED WORK §.§ Transformer Quantization Transformer quantization can be categorized into weight-only and weight-and-activation quantization. Weight-only quantization reduces memory requirements, while weight-and-activation quantization also enhances computational efficiency. SmoothQuant <cit.> supports both activation and weight quantization, based on the principle that the difficulty of activation quantization can be mathematically transformed. This method demonstrates a 1.5× speedup and 2× memory reduction for LLMs with negligible loss, supporting W8A8 (8-bit weight, 8-bit activation) quantization. Omniquant <cit.> is another method supporting a broader spectrum of weight-activation quantization configurations, including W4A4, W4A16, W3A16, and W2A16. This is achieved through learnable weight clipping, which optimizes the clipping threshold, and learnable equivalent transformation, which mitigates activation outliers by shifting them to weights. LLM.int8() <cit.> loads the model in 8-bit format, with 99.9% of operations performed in 8-bit, except for emergent outliers. It uses vector-wise quantization with different normalization constants to preserve model performance. 
GPTQ <cit.> focuses solely on weight quantization, achieving 3-bit and 4-bit quantization with speedups of 3.24× and 4.53× on A6000 and A100 GPUs. QuIP# <cit.> is a weight-only quantization method that achieves state-of-the-art performance in sub-4-bit quantization. It employs a randomized Hadamard transform combined with vector quantization and then fine-tunes the model to enhance its fidelity. All the methods discussed above are related to PTQ. §.§ Era of 1-bit LLMs Before the LLM era, the concept of binary weight and activation quantization was explored in works such as Binarized Neural Networks (BNN) <cit.> and Ternary Neural Networks (TNN) <cit.>. While these studies did not focus on generative language tasks, they achieved significant performance improvements, with BNNs demonstrating up to 7× performance gains and TNNs showing up to 3.1× energy efficiency improvements. These findings underscore the remarkable ability of neural networks to operate effectively with just 1-bit precision. Recently, following the rise of LLMs and advancements in model performance, BitNet <cit.> introduced a 1-bit transformer quantization technique for LLMs. This method replaces the conventional `nn.Linear` layer in PyTorch with a new BitLinear layer, where weights are restricted to either 1 or -1, and activations are represented in 8-bit precision. Despite this quantization, other components like self-attention remain in 8-bit format. BitNet's design suggests that it can scale to even larger transformer models, following scaling laws similar to those used for full-precision transformers. A variant of BitNet, known as BitNet 1.58 <cit.>, employs ternary weights (-1, 0, 1) and achieves perplexity and end-task performance comparable to full-precision transformers (FP16 or BF16). In terms of computation, 1-bit LLMs transform MatMul operations into addition operations, due to the 1-bit nature of the weights, except for layers like attention, which need to remain in high precision to maintain performance. Additionally, 1-bit LLMs address the challenge of transferring model parameters from DRAM to the on-chip SRAM of accelerators, prompting the development of architectures optimized for the efficient operation of 1-bit LLMs. § APPROACH §.§ Demystifying the Underlying Operations of LLMs The overall architecture of generative LLMs is shown in Figure <ref>, which includes N decoder blocks, each consisting of self-attention and feedforward layers followed by add and normalization operations <cit.>. The core of the LLM is the self-attention mechanism with multiple heads (h). For an attention head, the computation begins with three linear projections of the token vector to form the Key (K), Query (Q), and Value (V) vector sequences: Q = W_Q*I, K = W_K*I, V = W_V*I where I is the input token vector and W_Q, W_K, and W_V are trainable weight matrices with d × d dimensions, where d is the embedding dimension. The operator * denotes the MatMul operation. The generated K, Q, and V vectors are then divided into h vectors with reduced dimensionality of d/h, where h is the number of attention heads. At this stage, the value and key vectors are concatenated with the previous l-1 value and key tokens that are cached from previous token generation iterations to form a d/h × l matrix in each head, where l is the sequence length. Next, the attention scores are computed using the scaled dot-product of the queries and keys multiplied with the generated value matrix <cit.>. 
This step includes two MatMul (*) operations: Attention(Q, K, V) = softmax(Q*K^T/√(d)) * V Subsequently, the outputs of the attention heads are concatenated and linearly transformed as follows: MultiHead(Q, K, V) = Concat(head_1, ..., head_h) * W_X where head_i = Attention(Q_i, K_i, V_i), and W_X is a d × d matrix with trainable elements. Finally, the attention output is followed by a feed-forward network (FFN) that involves two linear transformations and one nonlinear transformation. As shown in Figure <ref>, the Gaussian error linear unit (GELU) <cit.> activation function is frequently utilized in LLMs to introduce nonlinearity into the FFN. Therefore, the FFN computation can be described as follows, which involves two more MatMul operations: FFN(x) = W_O * GELU(W_I*x + b_I) + b_O where W_I and W_O are trainable matrices with d_FF× d and d × d_FF dimensions, respectively, d_FF is the feed-forward layer size, and x is the attention output MultiHead(Q, K, V). Both the self-attention and FFN layers are followed by layer normalization, which can also introduce some nonlinearity through the division by the standard deviation. In summary, the computation of decoder blocks in LLMs involves a combination of nonlinear operations (LayerNorm, softmax, GELU) and linear MatMul operations. In particular, there are a total of 2h+6 MatMul operations in a decoder block, including six weight-to-activation MatMuls (the W_Q, W_K, W_V, W_X, W_I, and W_O projections) plus 2h activation-to-activation MatMuls to compute Attention(Q, K, V) in each head (h). In decoder-only LLMs, these MatMuls are matrix-vector multiplications, since inference is performed iteratively, processing one input token per iteration while the keys and values from previous iterations are cached, as shown in Figure <ref>. Table <ref> lists the dimensions of each of these MatMul operations. Previous works <cit.> have shown that designing dedicated hardware to compute nonlinear operations in LLMs can make their computation overhead negligible compared to MatMul operations. Consequently, optimizing MatMul operations can lead to a significant speedup in LLM computation. 1-bit LLMs <cit.>, a recent approach that has attracted considerable attention, involve extreme quantization of MatMuls in LLMs, except for the attention heads, which require higher precision to maintain accuracy. Quantizing all the weight matrices (W_Q, W_K, W_V, W_X, W_I, and W_O) in the LLM to include only binary {-1, 1} or ternary {-1, 0, 1} elements transforms the weight-to-activation MatMul operations into simple addition and subtraction operations, dividing the model into two portions: one with MatMul operations and one without (MatMul-free), as shown in Figure <ref>. Due to variations in the model hyperparameters (d, l, h, and d_FF) across different types of LLMs, the proportion of the linear projections (weight-to-activation MatMuls) and attention head (activation-to-activation MatMuls) computation relative to the entire model can vary significantly. Therefore, performance analysis with layer-wise granularity, targeted in this work, can illuminate the effectiveness of 1-bit LLMs and help determine future research directions. §.§ Design of LLM-Specific Hardware For the performance analysis, we leverage tensor processing unit (TPU) architectures, which are specifically designed to accelerate the MatMul operations dominant in machine learning workloads. TPUs maximize data reuse while minimizing data transfer by utilizing systolic arrays at their core <cit.>. 
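Before looking at the hardware, the split between the two kinds of MatMuls can be made concrete with a back-of-the-envelope count of multiply-accumulate (MAC) operations over the 2h+6 matrix-vector products listed above. The sketch below uses illustrative, OPT-like hyperparameter values (placeholders, not the exact configurations of Table <ref>), and raw MAC ratios can differ noticeably from the cycle-based fractions reported later, since hardware utilization depends strongly on matrix shape.

def per_block_mac_split(d, h, d_ff, l):
    # Six weight-to-activation projections (the part 1-bit LLMs make MatMul-free):
    proj_macs = 4 * d * d + 2 * d * d_ff        # W_Q, W_K, W_V, W_X plus W_I, W_O
    # 2h activation-to-activation MatMuls inside the attention heads:
    attn_macs = 2 * h * l * (d // h)            # Q*K^T and scores*V in each head
    f = proj_macs / (proj_macs + attn_macs)     # MAC-level analogue of the fraction F
    return proj_macs, attn_macs, f

# Assumed OPT-1.3B-like settings (illustrative only): d=2048, h=32, d_ff=8192.
for l in (128, 2048, 4096):
    print(l, per_block_mac_split(2048, 32, 8192, l))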
A systolic array typically consists of two-dimensional arrays of processing elements (PEs). Each PE performs a multiply-and-accumulate (MAC) operation, multiplying weights and inputs using a multiplier circuit and adding the result to previously computed partial sums using an accumulator circuit. The MAC result is either retained within the same PE or broadcast to other PEs for further computations, depending on the dataflow architecture. This MAC operation is executed in every PE of the systolic array, enabling efficient MatMul operations by maximizing data reuse and minimizing additional data transfer overhead. The dataflow in a systolic array is a mapping scheme determined by the microarchitecture of the PEs, which dictates how input data is fed into the array and how partial results and outputs are generated and stored. Rather than repeatedly loading and storing data to and from memory, each PE typically follows one of these dataflow architectures: (1) Input Stationary (IS): The inputs (or activations) remain fixed in the PEs while the weights are sequentially fed into the PEs; (2) Output Stationary (OS): The outputs are attached with the MAC units, while the inputs and weights circulate among the PEs. Inputs and weights are loaded, multiplied, and the results are accumulated with partial sums held in the PE. (3) Weight Stationary (WS): Each weight is preloaded into a register within each PE. During each cycle, inputs are multiplied by the weights and broadcast across the PEs. Figure <ref> shows the architecture of TPU, consisting of weight, input, and output memories, and a systolic array of size S=N × N PEs surrounded by the first-in-first-out (FIFO) buffers. Additionally, our TPU includes a Nonlinear Functional Unit, featuring custom hardware to support nonlinear operations in the LLMs. The Dataflow Generator block generates the memory read/write addresses to store or retrieve the inputs, weights, and outputs according to the selected dataflow. The Main Controller manages the data transfer between memories, FIFOs, and the systolic array. As previously discussed, the MatMul operations in generative LLMs involve matrix-vector multiplications. Consequently, the sizes of the input and output vectors are always smaller than those of the weight matrices. For activation-to-activation MatMuls in the attention head, where there are no weight values (See Figure <ref>), we store the concatenated Value and Key matrices (with d/h × l and l × d/h dimensions, respectively) in the weights memory, while the Query and attention score vectors are stored in the input memory. Based on the pattern in the size of the input, weight, and output tensors in matrix-vector multiplications involved in LLMs (mentioned in Table <ref>), a TPU design with larger weight memory compared to input and output memories would be more efficient, as it reduces the need for costly accesses to the main DRAM memory to load the weights. For the dataflow architecture, we conducted comprehensive experiments utilizing IS, OS, and WS dataflows. Based on the results obtained (refer to Appendix A), the OS dataflow architecture demonstrated the best performance. The OS dataflow is particularly advantageous for accumulating results since partial sums remain stationary and do not need to be moved frequently. Additionally, once weight and input values are fetched from their respective memories, they are reused by passing from one PE to another, leveraging the spatial dataflow capabilities of TPUs. 
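The interaction between matrix shape and array size can be illustrated with a first-order cycle model of an output-stationary matrix-vector product. This is only a sketch, not the cycle-accurate SCALE-Sim model used in the experiments: it ignores memory stalls, double buffering, and FIFO behaviour, and the fill/drain overhead term is a rough approximation.

import math

def os_matmul_cycles(M, K, N, S):
    # Rough cycle estimate for an (M x K) by (K x N) product on an S x S
    # output-stationary systolic array: outputs are tiled over the PE grid,
    # and each tile streams the reduction dimension plus a fill/drain overhead.
    tiles = math.ceil(M / S) * math.ceil(N / S)
    per_tile = K + 2 * S
    return tiles * per_tile

# Single-token decoding examples with assumed dimensions d=4096, h=32, l=2048, S=256:
d, h, l, S = 4096, 32, 2048, 256
print(os_matmul_cycles(d, d, 1, S))       # one projection, e.g. the W_Q MatMul
print(os_matmul_cycles(l, d // h, 1, S))  # per-head score computation against cached keys
print(os_matmul_cycles(d // h, l, 1, S))  # per-head weighting of cached values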
§.§ Amdahl's Law of LLMs Since current 1-bit LLMs only improve a part of the model (the projections without enhancing the attention heads), we need a mechanism to determine how these partial improvements translate into overall enhancements for the entire model. This is crucial for addressing the main research question of the paper. Amdahl's Law provides a framework for such scenarios. Amdahl's Law is a formula used to find the maximum improvement in a system when only part of it is improved. It can be expressed as: S_total = 1/((1-F) + F/S_partial) where F is the fraction of the system that is improved, S_partial is the factor by which the part F is improved, and S_total is the overall improvement of the entire system. In the context of 1-bit LLMs, we define F as the fraction of the MatMul operations that can be replaced with MatMul-free operations by using extreme quantization, relative to all MatMul operations in the model. Here, our focus is on quantifying the value of F across various LLMs and hardware configurations. Enhancing S_partial is closely tied to the design of custom hardware accelerators for binary and ternary operations, which is beyond the scope of this paper. § EXPERIMENTS §.§ Simulation Setup For our experiments, we study 13 different LLMs including GPT, OPT, and LLaMA models. Table <ref> lists all models and their corresponding hyperparameters. To save space in the main body of the paper, we only provide the results for the seven OPT models, as they represent a diverse range of model hyperparameter combinations. The results for GPT and LLaMA models are provided in Appendix B. For the hardware, we designed two TPUs tailored for different applications: cloud and edge processing. The cloud TPU features a 256× 256 systolic array with 16MB of SRAM, while the edge TPU has a 32 × 32 systolic array with 8MB of SRAM. Both systolic arrays employ an OS dataflow. Also, in both designs, 2MB of memory is allocated for internal use, including storing control and configuration data, tracking computation states, managing data flow, and ensuring seamless data movement. All memories use double buffering to mask the latency associated with SRAM access. Table <ref> provides the memory distribution of both edge and cloud TPU designs. We utilize the cycle-accurate SCALE-Sim framework <cit.> to measure compute cycles and memory accesses in various LLMs. Our experiments investigate the distribution of computation and memory utilization between the attention heads, which require MatMul operations, and the projections, which can be binarized or ternarized and consequently do not involve MatMuls. In the results presented, the terms “MatMul” and “MatMul-Free” refer to these respective parts of the LLM computation. §.§ Performance Analysis on Cloud Setup In the first set of experiments, we examine various language models deployed on the cloud TPU setup to determine the fraction of the models that can become MatMul-Free in 1-bit LLMs, i.e., projection layers, through extreme quantization. To achieve this, we compare seven OPT models of various sizes (ranging from 350M to 66B parameters) with different sequence lengths (ranging from 128 to 4096). We vary the sequence length (l) because it only affects the computation in the attention heads and determines the size of the remaining MatMul operations in 1-bit LLMs (refer to Table <ref>). Figure <ref> exhibits the fraction of MatMul-Free operations in the OPT models deployed on the cloud setup, measured in terms of compute cycles and memory accesses. 
The fraction of MatMul-Free operations varies from roughly 23% to 98% for compute cycles and from 59% to 99.8% for memory access across different configurations. In general, MatMul-Free operations increase with model size and decrease with sequence length. §.§.§ Compute Cycle Analysis. Typically, the context length of the OPT models is equal to 2048. The MatMul-Free compute cycles for these OPT models can be observed in the 2048 row of Figure <ref> (a). For smaller language models like OPT 350M, approximately 63% of the computation occurs in the attention heads, involving MatMul operations, while only 37.1% of the computation benefits from MatMul-Free operations provided by 1-bit LLMs. Moreover, the OPT 1.3B model with a sequence length of 2048 is particularly noteworthy, as half of its computation can be MatMul-Free, while the other half involves MatMul operations. Models like LLaMA 7B and 13B generally use larger sequence lengths of 4096. In terms of model size, they are comparable to OPT 6.7B and 13B models. As illustrated in the 4096 row of Figure <ref> (a), MatMul-Free operations account for 64.5% and 69% of the computation for OPT 6.7B and OPT 13B models, respectively. A detailed analysis of the LLaMA models can be found in the Appendix B. §.§.§ Memory Access Analysis. 1-bit LLMs, such as BitNet <cit.> and BitNet 1.58 <cit.>, can achieve up to 16× and 8× reductions in weight memory utilization compared to FP16 LLMs because they use just 1 bit and 2 bits to represent binary and ternary weights, respectively. Besides memory capacity, another crucial factor affecting LLM throughput is the number of memory accesses required for inference. Therefore, we also analyzed the memory access of the MatMul-Free and MatMul components of the 1-bit LLM architecture. Figure <ref> (b) shows the ratio of memory reads and writes associated with the projection layers (MatMul-Free parts) to those of the entire model. The results reveal a trend similar to the compute cycle analysis. However, unlike compute cycles, for all cases, the majority of memory accesses are associated with the projection layers. For example, for OPT models with a sequence length of 2048, the fraction of memory access for the OPT 350M is 74%, increasing to 96% for the OPT 66B. The memory access analysis suggests opportunities for research into hybrid memory hierarchy designs, where the components of the model dominating memory accesses can be offloaded to faster memory technologies. For example, in OPT 6.7B with a sequence length of 4096, moving the projection layers' weight and activation data to faster memory can speed up more than 85% of all memory accesses. §.§.§ Amdahl's Law of LLMs. Here, we leverage Amdahl's Law of LLMs proposed in Equation (<ref>) to show how partial improvements in the LLMs can enhance overall performance. In particular, we vary S_partial from 1 to 100, and using the fractions (F) shown in Figure <ref>, calculate the overall improvement of model (S_total). Figure <ref> demonstrates an Amdahl's Law analysis when improvements are applied to either attention layers (MatMul parts) or the projections layers (MatMul-Free parts) of LLMs. The dashed lines show the effects of improving the projection layers, while the solid lines represent the impact of improvements to the attention layers. This analysis is based on OPT models with a typical sequence length of 2048. For Amdahl's Law analyses with other sequence lengths, please refer to the Appendix C. 
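The takeaways below follow directly from Equation (<ref>) once F is read off the figure. As a quick illustration, the snippet applies the formula to the compute-cycle fractions quoted above, comparing a hypothetical 100x speedup of the projection layers against the same speedup applied to the attention heads instead; the 100x factor is an arbitrary stand-in, not a measured number.

def amdahl_total_speedup(f, s_partial):
    # Overall speedup when only the fraction f of the model is sped up by s_partial.
    return 1.0 / ((1.0 - f) + f / s_partial)

# MatMul-free compute-cycle fractions quoted in the text (cloud setup).
fractions = {"OPT 350M (l=2048)": 0.371, "OPT 1.3B (l=2048)": 0.50,
             "OPT 6.7B (l=4096)": 0.645, "OPT 13B (l=4096)": 0.69}
for name, f in fractions.items():
    speedup_projections = amdahl_total_speedup(f, 100)       # accelerate the MatMul-free part
    speedup_attention = amdahl_total_speedup(1.0 - f, 100)   # accelerate the attention heads instead
    print(name, round(speedup_projections, 2), round(speedup_attention, 2))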
The Amdahl's Law analysis of LLMs deployed on the cloud setup reveals three key takeaways: * For smaller language models like OPT 350M, improving the attention heads has a greater impact than enhancing the projection layers. Therefore, using 1-bit LLM paradigms to improve these models may result in limited overall performance gains. * Medium-sized LLMs, such as OPT 1.3B and 2.7B, can benefit from combining 1-bit LLM improvements with enhancements to the attention heads, such as replacing MatMul operations with Hadamard products and additive operators as proposed in <cit.>. * For larger models like OPT 6.7B and above, improvements to the attention heads have a minimal effect on overall performance. In these cases, employing 1-bit LLM methods alone can lead to significant gains in performance and throughput. §.§ Performance Analysis on Edge Setup Figure <ref> presents the compute and memory analysis for the edge setup. In all cases, most of the computations occur in the MatMul-Free portion. The only instance where the computation is relatively evenly distributed between the MatMul and MatMul-Free parts is for OPT 350M with a 4096 sequence length. A similar pattern is observed in memory accesses. This happens because, in the edge setup, the smaller 32× 32 systolic array is less efficient at handling the larger matrices typically found in projection layers. Meanwhile, the MatMul operations in the attention heads are smaller due to the splitting of large matrices among multiple heads (refer to Figure <ref>), making these operations more manageable even with smaller systolic arrays. Consequently, this increases the ratio of computation in the projection layers compared to the attention heads. §.§.§ Amdahl's Law of LLMs. Figure <ref> illustrates the Amdahl's Law analysis for the OPT models with 2048 sequence length deployed on the edge TPU setup. The results indicate that, unlike in the cloud setup, across all cases, enhancing the projection layers leads to significantly greater overall improvements in the entire model. Conversely, improving the attention layers yields only marginal gains. This suggests that 1-bit LLM approaches targeting optimization of projection layers are significantly more efficient in the edge setup. Therefore, a promising research direction would be to focus on developing efficient custom hardware for implementing extremely quantized projection layers, rather than concentrating on algorithmic and hardware innovations to enhance computation in the attention heads. 
Key findings include: (i) 1-bit LLM paradigms have limited impact on smaller language models, particularly when the context length is large, (ii) for medium-sized LLMs, 1-bit LLM methods show benefits, but further algorithmic innovations are needed to enhance parts that 1-bit LLM approaches cannot improve, and (iii) for large-scale LLMs, extreme quantization from 1-bit LLMs alone can improve the majority of computations, in some cases by more than 99%. 
§ ACKNOWLEDGMENTS This work is supported in part by the National Science Foundation (NSF) under grant numbers 2340249 and 2409697. § REPRODUCIBILITY CHECKLIST This paper: * Includes a conceptual outline and/or pseudocode description of AI methods introduced (yes) * Clearly delineates statements that are opinions, hypothesis, and speculation from objective facts and results (yes) * Provides well marked pedagogical references for less-familiar readers to gain background necessary to replicate the paper (yes) §.§ Theoretical Contributions Does this paper make theoretical contributions? (yes) If yes, please complete the list below. * All assumptions and restrictions are stated clearly and formally. (yes) * All novel claims are stated formally (e.g., in theorem statements). (yes) * Proofs of all novel claims are included. (yes) * Proof sketches or intuitions are given for complex and/or novel results. (yes) * Appropriate citations to theoretical tools used are given. (yes) * All theoretical claims are demonstrated empirically to hold. 
(yes) * All experimental code used to eliminate or disprove claims is included. (yes) §.§ Datasets Does this paper rely on one or more datasets? (no) §.§ Computational Experiments Does this paper include computational experiments? (yes) If yes, please complete the list below. * Any code required for pre-processing data is included in the appendix. (yes) * All source code required for conducting and analyzing the experiments is included in a code appendix. (yes) * All source code required for conducting and analyzing the experiments will be made publicly available upon publication of the paper with a license that allows free usage for research purposes. (yes) * All source code implementing new methods have comments detailing the implementation, with references to the paper where each step comes from. (yes) * If an algorithm depends on randomness, then the method used for setting seeds is described in a way sufficient to allow replication of results. (NA) * This paper specifies the computing infrastructure used for running experiments (hardware and software), including GPU/CPU models; amount of memory; operating system; names and versions of relevant software libraries and frameworks. (yes) * This paper formally describes evaluation metrics used and explains the motivation for choosing these metrics. (yes) * This paper states the number of algorithm runs used to compute each reported result. (yes) * Analysis of experiments goes beyond single-dimensional summaries of performance (e.g., average; median) to include measures of variation, confidence, or other distributional information. (yes) * The significance of any improvement or decrease in performance is judged using appropriate statistical tests (e.g., Wilcoxon signed-rank). (yes) * This paper lists all final (hyper-)parameters used for each model/algorithm in the paper’s experiments. (yes) * This paper states the number and range of values tried per (hyper-)parameter during development of the paper, along with the criterion used for selecting the final parameter setting. (yes) § APPENDIX A. TPU DATAFLOW ANALYSIS [Figure: Total cycles for the different dataflows of the OPT models; panels: Cloud Setup and Edge Setup.] All experiments were conducted on a computing setup with a single node, 48 cores, and 200 GB of memory. The Scale-Sim V2 tool was installed, and experiments were executed using specific configurations for both cloud and edge setups. § APPENDIX B. GPT AND LLAMA RESULTS [Figure: Fraction of MatMul-free operations in the GPT and LLaMA models for the cloud setup; panels: Compute Cycles and Memory Access.] [Figure: Fraction of MatMul-free operations in the GPT and LLaMA models for the edge setup; panels: Compute Cycles and Memory Access.] All experiments were conducted on a computing setup with a single node, 48 cores, and 200 GB of memory. The Scale-Sim V2 tool was installed, and experiments were executed using specific configurations for both cloud and edge setups. § APPENDIX C. 
AMDAHL'S LAW ANALYSIS [Figure: Amdahl's law for different sequence lengths (128, 256, 512, 1024, 2048, 4096) in the OPT models for the cloud setup.] [Figure: Amdahl's law for different sequence lengths (128, 256, 512, 1024, 2048, 4096) in the OPT models for the edge setup.] All experiments were conducted on a computing setup with a single node, 48 cores, and 200 GB of memory. The Scale-Sim V2 tool was installed, and experiments were executed using specific configurations for both cloud and edge setups.
http://arxiv.org/abs/2408.11664v1
20240821143803
A Systematic Literature Review on the Use of Blockchain Technology in Transition to a Circular Economy
[ "Ishmam Abid", "S. M. Zuhayer Anzum Fuad", "Mohammad Jabed Morshed Chowdhury", "Mehruba Sharmin Chowdhury", "Md Sadek Ferdous" ]
cs.ET
[ "cs.ET" ]
Blockchain for circular economy (Abid et al.) A Systematic Literature Review on the Use of Blockchain Technology in Transition to a Circular Economy Ishmam Abid (Shahjalal University of Science and Technology, Kumargaon, Sylhet-3114, Bangladesh); S.M. Zuhayer Anzum Fuad (Shahjalal University of Science and Technology, Bangladesh); Mohammad Jabed Morshed Chowdhury (La Trobe University, Bundoora, VIC 3086, Australia; M.Chowdhury@latrobe.edu.au); Mehruba Sharmin Chowdhury (Shahjalal University of Science and Technology, Bangladesh); Md Sadek Ferdous (BRAC University, Dhaka 1206, Bangladesh, and Imperial College London, London SW7 2AZ, UK). Corresponding authors. § ABSTRACT The circular economy has the potential to increase resource efficiency and minimize waste through the 4R framework of reducing, reusing, recycling, and recovering. Blockchain technology is currently considered a valuable aid in the transition to a circular economy. Its decentralized and tamper-resistant nature enables the construction of transparent and secure supply chain management systems, thereby improving product accountability and traceability. However, the full potential of blockchain technology in circular economy models will not be realized until a number of concerns, including scalability, interoperability, data protection, and regulatory and legal issues, are addressed. More research and stakeholder participation are required to overcome these limitations and achieve the benefits of blockchain technology in promoting a circular economy. This article presents a systematic literature review (SLR) that identified industry use cases for blockchain-driven circular economy models and offered architectures to minimize resource consumption, prices, and inefficiencies while encouraging the reuse, recycling, and recovery of end-of-life products. Three main outcomes emerged from our review of 41 documents, which included scholarly publications, Twitter-linked information, and Google results: the relationship between blockchain and the 4R framework for the circular economy; a discussion of the terminology and various forms of blockchain and circular economy; and the identification of the challenges and obstacles that blockchain technology may face in enabling a circular economy. This research shows how blockchain technology can help with the transition to a circular economy. Yet, it emphasizes the importance of additional study and stakeholder participation to overcome potential hurdles and obstacles in implementing blockchain-driven circular economy models. Keywords: Blockchain Circular economy Sustainable Supply chain 4R framework § INTRODUCTION The linear economy paradigm is currently one of the largest problems on Earth. The linear “take, make, and dispose” economy is generating increased price volatility, supply chain risks, and growing pressures on resources <cit.>. Raw materials simply move from extraction to processing to assembly of the finished product <cit.>. By using only raw materials for value creation, the difficulties and detrimental effects of the existing economic paradigm are expected to double in the next 20 years <cit.>. As a result, researchers, policymakers, and business leaders are considering a new economic model in light of the economic losses, structural waste, supply and market risk, excessive resource use, and loss of natural systems that are reflected in this linear model <cit.>. 
Therefore, the concept of circular economy (CE) is gaining considerable attention. A circular economy is one that is designed with restoration and regeneration in mind, maintaining the usability and worth of goods, parts, and materials at all times while recognizing the differences between biological and technological cycles. Thus, CE can be essential for achieving environmental sustainability. In a recent literature review, authors in <cit.> identified 24 key barriers to achieving CE. With the advent of Industry 4.0, blockchain technology (BCT) may provide a solution to some of these obstacles. The use of blockchain to enable the circular economy has yet to be fully demonstrated; it remains an active area of study, and only a small number of industries have adopted it so far. Researchers proposed numerous "R" frameworks for constructing CE, including the 3R <cit.>, 4R <cit.>, 6R <cit.>, and 9R <cit.> (explained in Section <ref>). The approach that best illustrates how CE operates is the 4R (reduce, reuse, recycle, and recover) framework <cit.>. This paper's emphasis is on the 4R framework-based, blockchain-driven CE. Blockchain's decentralized, distributed ledger architecture and its traceable, tamper-proof characteristics can play a significant role in achieving the above 4R framework-based CE <cit.>. Reuse, reduce, recycle, and recovery have all been linked to the blockchain, according to a number of authors <cit.> <cit.>. Blockchain technology has been adopted by a number of industries, which have decreased carbon footprints, facilitated cyclical business models, enhanced performance, and simplified communication along the supply chain, thus supporting the circular economy <cit.>. Authors in <cit.> discovered that tracking the possible environmental and social factors that might create health, environmental, and safety hazards is a critical application focus for Blockchain Technology. Although blockchain has attracted a lot of interest for its potential to address CE issues, there is no recent survey, review, or systematic literature review for a 4R-based CE employing blockchain. Hence, we carried out a thorough systematic literature review on the potential uses of blockchain in the circular economy. We have studied and analyzed how researchers and businesses have used blockchain to enable reducing, reusing, recycling, and recovering for sustainable development, as well as the limitations of the technology in the corresponding aspects. Section 2 presents the background about blockchain and the circular economy. The research methodology is discussed in Section 3. We analyse our findings against the research questions in Section 4. A detailed discussion is presented in Section 5. Finally, we conclude in Section 6. § BACKGROUND In this segment, we discuss blockchain and its different aspects, the circular economy, the adaptation of the R framework, and the research perspectives on these subjects. §.§ Blockchain Initially, blockchain was proposed as a peer-to-peer electronic currency system rather than the wide variety of applications it now sees. Although the concepts of blocks connected by cryptographic chains and of tamper-evident, timestamped records had been introduced earlier, it was the bitcoin proposal that combined hash-based block creation with a decentralized electronic currency <cit.>. Coupled with industrial adoption and the invention of smart contracts, blockchain is now much more than just an electronic money system.
Blockchain has been defined as a technology that enables immutability and integrity of data, in which a record of transactions made in a system is maintained across several distributed nodes that are linked in a peer-to-peer network. To add a new block to the blockchain, different consensus algorithms are used. A consensus algorithm is a way for all peers in a blockchain network to agree on the current state of the distributed ledger. The most common consensus algorithms are PoW (Proof of Work), PoS (Proof of Stake), DPoS (Delegated Proof of Stake), PBFT (Practical Byzantine Fault Tolerance), and RAFT <cit.>. In a decentralized network, these algorithms improve network security and foster trust among untrusted parties. §.§.§ Key Features of Blockchain Blockchain has a number of features that make it useful in a wide range of fields. The features are discussed below. * Distributed consensus on the blockchain state: One of the most important aspects of blockchain is the different kinds of consensus algorithms. These consensus algorithms allow all of the peers to arrive at an agreement regarding the present state of the blockchain in a distributed manner. * Immutable and irreversible blockchain state: The chain state becomes immutable and irreversible when a large number of blockchain nodes participate in the distributed consensus process. In addition, the immutability of blockchain is ensured through the usage of DLT (Distributed Ledger Technology). * Data persistence: As long as there are nodes participating in peer-to-peer networking, data kept in the blockchain persists. * Data provenance: A transaction is the process of storing information on a blockchain. Every transaction on the blockchain is signed with a digital signature, based on public-key cryptography, in order to preserve the data's integrity and authenticity. * Distributed data control: Blockchain stores and retrieves data via a peer-to-peer distributed ledger. As a consequence, blockchain has no single point of failure. * Accountability and transparency: Because any authorized participant is able to view the current state of the blockchain as well as any transaction that has taken place between participants, it ensures accountability and transparency. §.§.§ Types of Blockchain According to a survey on blockchain, there are primarily two types of blockchain: * Permissionless Blockchain: A permissionless blockchain is decentralized and open by nature. This means that any peer can join in the process of determining what blocks are added to the chain without providing identifying information, and no one is responsible for controlling entry <cit.>. Bitcoin <cit.> and Ethereum <cit.> are instances of permissionless blockchains. * Permissioned Blockchain: A permissioned blockchain, as defined in <cit.>, is a blockchain that requires its participants' identity authentication and authorization of network access. A central authority gives each peer the right to take part in writing or reading operations on the blockchain. Among the most widely used permissioned blockchains are Corda <cit.> and Hyperledger Fabric <cit.>.
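The immutability and distributed-consensus features described above can be made concrete with a short, self-contained sketch. The following Python fragment is purely illustrative — it is not taken from any of the reviewed platforms — and uses a toy proof-of-work difficulty and invented transaction fields to show how hash-chaining makes recorded supply-chain events tamper-evident.

import hashlib
import json
import time
from dataclasses import dataclass

@dataclass
class Block:
    index: int
    timestamp: float
    transactions: list        # e.g. recorded material or product events
    previous_hash: str
    nonce: int = 0

    def hash(self) -> str:
        # The hash covers every field, so altering any past record changes
        # this value and breaks every later link in the chain.
        payload = json.dumps(self.__dict__, sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

class Blockchain:
    def __init__(self, difficulty: int = 3):
        self.difficulty = difficulty
        self.chain = [Block(0, time.time(), ["genesis"], "0" * 64)]

    def add_block(self, transactions: list) -> Block:
        block = Block(len(self.chain), time.time(), transactions,
                      self.chain[-1].hash())
        # Toy proof of work: search for a nonce whose hash starts with N zeros.
        while not block.hash().startswith("0" * self.difficulty):
            block.nonce += 1
        self.chain.append(block)
        return block

    def is_valid(self) -> bool:
        # Any tampering with an earlier block invalidates all later links.
        return all(self.chain[i].previous_hash == self.chain[i - 1].hash()
                   for i in range(1, len(self.chain)))

ledger = Blockchain()
ledger.add_block([{"item": "PET bottles", "event": "collected", "kg": 12.5}])
ledger.add_block([{"item": "PET bottles", "event": "recycled", "kg": 12.0}])
print(ledger.is_valid())                       # True
ledger.chain[1].transactions[0]["kg"] = 999    # tamper with history
print(ledger.is_valid())                       # False

Production systems such as Bitcoin or Hyperledger Fabric add networking, digital signatures, access control, and far more robust consensus on top of this basic hash-chained structure.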
§.§.§ Smart Contracts The idea of a smart contract has become more well-known as Blockchain 2.0 has been developed. While Szabo first introduced the idea behind smart contracts <cit.>, Vitalik Buterin later introduced Ethereum to realise the concept practically <cit.>, in which a Turing-complete programming language was introduced for writing code and executing smart contracts in decentralized applications (dApps) on top of the EVM (Ethereum Virtual Machine) and the Ethereum blockchain. A smart contract is a simple blockchain-based computer program that is executed when specific conditions are met. It is used to automate the process of putting an agreement into action so that all parties involved are aware of the outcome without the need for intermediaries or time-consuming delays <cit.>. Smart contracts store data on the blockchain as transactions. This makes it possible for the computer logic to be immutable. Figure <ref> illustrates the execution process for a smart contract. Smart contracts are particularly important for making private blockchains functional <cit.>. Industry and businesses are creating smart contract-based applications to utilize blockchain in the food industry, construction industry, supply chain management, document verification, e-voting, medical data storage, FinTech, and other areas. Figure <ref> depicts how smart contracts are widely used in industry. §.§ Circular Economy and R Framework Natural resource shortages will result from the current linear “take-make-dispose” economic approach. The current economic paradigm must be redesigned and shifted to a more sustainable model. Our surroundings are still being polluted by the disposal or dispersion of wastes produced by industry systems based on linear economies. A staggering 99 percent of consumer goods are thrown away within six months of purchase, which represents a dismal failure in the field of material recovery <cit.>. Since the late 1970s, the notion of Circular Economy (CE) has been gaining traction <cit.>. Several authors credit early work in environmental economics as the origin of the concept. CE has been a theoretical and practical alternative to neoclassical economics since its inception. It recognizes the critical importance of the environment, its functions, and the relationship between the environment and the economic system <cit.>. CE defines a new approach to sustainability and social responsibility by emphasizing the social components of sustainability <cit.>. A widely used definition describes CE as follows: “A circular economy is one that is restorative and regenerative by design and aims to keep products, components, and materials at their highest utility and value at all times, distinguishing between technical and biological cycles.” The concepts of cyclical closed-loop and cradle-to-cradle <cit.> systems are the common denominators across the several authors who have linked the term "Circular Economy" to a wide variety of themes <cit.>. According to the World Economic Forum <cit.>, the CE will bolster net material savings; mitigate volatility and supply concerns; drive innovation and job development; enhance land productivity and soil health; and provide long-term economic resilience. Circular business models may be a subset of “sustainable business models” <cit.>. The study of the CE is typically divided into three themes <cit.>: * Innovation in technology, organization, and society <cit.>. * Value chains, material flows, and applications specific to certain products <cit.>. * Tools and methods for making policy <cit.>. According to <cit.>, CE could help improve resource productivity and eco-efficiency, reform environmental management, and realize sustainable development.
As a result, businesses are increasingly willing to adopt the concept of a CE in an effort to use sustainable methods in the economy <cit.>. The R-framework relates to multiple techniques to embrace circularity, known as R-strategies. Authors in <cit.> mentioned complex material hierarchies, also known as R-hierarchies or R-frameworks, as one of the key components of a more transformative perspective and evaluated and incorporated R-frameworks into a unified systemic typology comprising 10 resource value retention options (Rs) or R-strategies. The 10Rs are Refuse, Rethink, Reduce, Reuse, Repair, Refurbish, Remanufacture, Repurpose, Recycle, and Recover. The majority of R-lists define a priority order for approaches to circularity, with the first R being more significant than the second R and so on <cit.>. Figure <ref> gives a brief introduction to the 10Rs. Researchers proposed different R-frameworks based on these R-strategies, such as: * 3R Framework: Ghisellini et al. proposed the 3R framework consisting of Reduce (R2), Reuse (R3), and Recycle (R8) <cit.>. The framework accounts for the circular system in which all materials are recycled and all energy is generated from renewable sources; activities support and restore the ecosystem and promote human health and a healthy society; and resources are used to create value <cit.>. It is one of the most popular frameworks for achieving circularity. * 4R Framework: Sihvonen et al. introduced the 4R framework, consisting of Reduce (R2), Reuse (R3), Recycle (R8) and Recover (R9) <cit.>. The order of the R-strategies in the 4R framework indicates the amount of resource value retained. The higher the strategy, the more the resource value is retained <cit.>. The European Union’s waste framework directive is based on the 4R framework <cit.>. The 4R Principle is one of the most prevalent principles in the field of solid waste management and sustainable development <cit.>. * 6R Framework: Yan et al. presented the 6R approach, which consists of Reduce (R2), Reuse (R3), Recycle (R8), Recover (R9), Redesign (R7), and Remanufacture (R6) <cit.>. This 6R framework provides a closed-loop, multi-product life-cycle system as the basis for sustainable manufacturing <cit.>. Life Cycle Assessment (LCA) can be made more comprehensive by adding the 6R elements, and it can be used to determine the impact or burden on the environment <cit.>. * 9R Framework: Potting et al. presented the 9R framework consisting of Refuse (R0), Rethink (R1), Reduce (R2), Reuse (R3), Repair (R4), Refurbish (R5), Remanufacture (R6), Repurpose (R7), Recycle (R8) and Recover (R9) <cit.>. Optimizing resource and product usage is the goal of the 9R framework, which aims to create a more sustainable manufacturing capability <cit.>. Using the 9R framework in advanced manufacturing, companies can achieve cleaner production and gain a competitive advantage <cit.>. These are, in conclusion, the most significant frameworks for describing CE. The aspects of these R-frameworks have been accepted by industry, which is currently seeking to devise more effective ways to implement them. There are further CE frameworks, such as ReSOLVE <cit.>. The 3R framework does not cover the entire material flow cycle, and the 6R and 9R are far too sophisticated for blockchain implementation. We utilized the 4R framework in this study due to its potential blockchain application. § RESEARCH METHODOLOGY As part of this SLR, we reviewed prior research on how blockchain can be utilized to establish a circular economy.
Using blockchain technology, the 4Rs (Reducing, Reusing, Recycling, and Recovering) have been studied as a means of constructing a circular economy. Additionally, we examined blockchain-based industries for various CE techniques. §.§ Research Questions After evaluating a significant number of research publications, we created five Research Questions (RQs). Table <ref> lists the RQs. §.§ Search Strategy The ultimate purpose of the search is to identify all related studies. We utilized the PRISMA framework for our research <cit.>. The inclusion-exclusion technique was adopted for archiving. We used search strings on multiple electronic databases for the primary search. We applied both forward and backward citation tracing for a secondary search. The primary selection procedure comprises relevant keywords, literature sources, and a screening procedure. §.§.§ Search terms and relevant keywords: We constructed several search strings and applied them to online databases to look for studies that met our criteria. Figure <ref> shows the PRISMA flow diagram, which depicts how different research papers were sorted for this systematic literature review. This explains how the search procedure works as a whole. Table <ref> shows the relevant keywords we used for the search. §.§.§ Literature Sources: During this process, we used seven different electronic databases and a number of different search strings. Google Scholar, IEEE, ACM DL, ScienceDirect, Springer, Wiley Online Library, and MDPI are the databases that were used. We also recorded the name of the journal, the year of publication, bibliographies, the paper's title, the number of citations, and the link to the paper. §.§.§ Search Process: An SLR requires an exhaustive search of all relevant sources; hence, we defined the search process by splitting it into the two phases listed below. * Search Term Based Searching: Seven independent web database searches were conducted. We used search phrases from Table <ref> with the logical operators "OR" and "AND," parentheses, and quote marks to narrow our search. After retrieval, the papers were included in a set of candidate papers. * Reference Based Searching: The reference lists of the relevant papers were searched for more relevant papers, and if any were discovered, they were included in the set. We used an Excel spreadsheet to archive our search results. We collected 81,422 articles from the original search (Table <ref>) and 8 papers from the reference search. §.§ Selection phase We found a large number of candidate papers after the search process; however, not all of them were relevant to our RQs. Therefore, we used an additional filtering selection process, which has the following two phases: * Inclusion-exclusion based selection: Using an inclusion-exclusion selection strategy, we were able to pick relevant papers from our pool of candidate papers. The materials in these papers may be beneficial to our RQs. The inclusion and exclusion criteria are presented in Table <ref> and Table <ref>. * Final Selection: We used a set of criteria to judge the quality of the relevant papers during this process; data were extracted only from papers that passed this quality check. Section <ref> defines the standards that were employed. §.§.§ Study Quality Assessment We defined a set of Quality Assessment Questions (QAQs) to ensure the quality of the selected papers. Table <ref> defines these QAQs. Each paper was assessed based on how many QAQs it met in total.
A paper was included in the final selection of studies if it met at least half of the QAQs. In the end, we chose 35 papers. § ANALYSIS The collected papers have been grouped into five categories: Reduce, Reuse, Recycle, Recover, and Others. * Reduce represents the studies that have discussed the use of blockchain in the transition to CE utilizing the “Reduce” component of the 4R-framework. This group focused primarily on the solutions provided by industries and researchers for reducing resource and material consumption using any blockchain system. * Reuse represents the collection of studies that have discussed blockchain for the “Reuse” component of the 4R-framework. Specifically, these publications have used blockchain as a solution for reusing materials and resources, thus prolonging their lifespan. * Recycle represents the research that has been conducted on blockchain for the “Recycle” component of the 4R-framework. Articles that also considered resource/waste recycling options with blockchain as the underlying technology have been placed under this category. * Recover represents the collection of studies that have discussed blockchain for the “Recover” component of the 4R-framework. These studies have demonstrated how blockchain might be utilized as a ledger for waste or material recovery. * Others represents the collection of articles that do not directly refer to any of the 4Rs but are beneficial to the community. Examples include resource circularity, market value development, and life cycle analysis from a blockchain perspective. §.§ RQ1: How blockchain technology can facilitate Reduce in CE? To answer the aforementioned question, we analyzed various blockchain-based solutions proposed by other scholars that could aid in establishing the concept of “reduce” in CE. Numerous industry use cases aimed at eliminating obstacles linked with the CE transition and the “reduce” idea were identified. These solutions are discussed below. The notion of reduce in the 4R framework attempts to optimize resource consumption and reduce waste output. This can be effectively implemented through blockchain. The distributed ledger of the blockchain eliminates the need for a third-party auditor of transactions. In addition, the ability of a blockchain ledger to maintain an immutable, permanent record of transactions makes the supply chain transparent and traceable. A transparent and traceable supply chain is more circular because it reduces waste and resource consumption. * To reduce waste: To reduce dangerous e-waste, authors in <cit.> have proposed analyzing the entire life cycle of electronic devices using a blockchain-based architecture. Chidepatil et al. <cit.> demonstrated that, using artificial intelligence and multi-sensor data fusion, blockchain smart contracts can help us reduce plastic waste. Using IBM's Hyperledger Fabric, Walmart tracked pork and mangoes along the supply chain to ensure complete traceability and reduce food waste associated with those products <cit.>. Kamilaris et al. <cit.> mentioned Plastic Bank <cit.>, a Canadian recycling company, which was aiming to reduce plastic waste in developing countries like Haiti, Peru, Colombia, Indonesia, and the Philippines. Blockchain-secured digital tokens are given to customers who bring plastic waste to bank recycling facilities. These tokens can be used by users of the Plastic Bank app to buy additional goods.
With 1 million participants and 2000 collector units, around 3 million kilograms of plastic waste have been collected since 2014. Authors in <cit.> suggested a blockchain-based system to reduce medical waste. They have developed a design for a blockchain-based medical and water waste management system. Users will receive digital tokens as rewards that can be exchanged for different benefits. There are Echchain, ElectricChain, Suncontract, and other platforms that use blockchain technology to reduce waste in the supply chain <cit.>. * To reduce intermediaries & costs: Blockchain technology has an impact on administrative control and digital regulations. Data is stored in shared databases in blockchain, where it is more transparent, less likely to be deleted or changed, and immutable <cit.>. The blockchain's transparent transaction system reduces the need for intermediaries like brokers, exchanges, and banks <cit.>. This mitigates the possibility of opportunistic behavior <cit.>. According to the analysis in <cit.>, the advantages of decentralization are increased by connecting buyers and sellers directly and reducing transaction costs, which can stimulate secondary market activities. Users can exchange their services and goods directly through a blockchain network. Blockchain technology improves capital flow by reducing transaction costs and investment risk. Energy systems built on the blockchain can use less electricity during long-distance transmissions. As a result, there would be less need for energy use, which would save resources and transaction costs on the network <cit.>. Tushar et al. highlighted the benefits of using a peer-to-peer approach to reduce the costs of energy expenditure. Small consumers can sell their excess energy units to those who do not have enough, saving money on both sides. Blockchain also reduces the cost of networking. Numerous businesses use blockchain-based crowdfunding to support the development of new platforms <cit.>. Figure <ref> illustrates the cost-reduction scenarios using blockchain. * To reduce fraud: Cole et al. <cit.> proposed blockchain as a solution to reduce illegal counterfeiting by disclosing a product's origin. They also noted that the technology can reduce process costs via automated systems and enable real-time inspection through time-stamping, thereby reducing the complexity of the supply chain. Kouhizadeh et al. <cit.> have analyzed the industries and businesses that have embraced blockchain for their products to achieve economic circularity. They cited Toyota, which utilized blockchain to reduce advertising fraud in its ad purchases. Authors in <cit.> proposed developing a system to track the distribution of drugs. In this system, Internet of Things (IoT) devices such as barcode readers, smartphones, and other devices scan serial numbers or RFID tags on drug packages. They created a GDP controller that employs blockchain technology to monitor transactions and reduce fraud. In spite of the widespread consumption of genuine Australian beef, there is a substantial amount of fake beef on the market. Data61 of the CSIRO (Commonwealth Scientific and Industrial Research Organisation, an Australian scientific research agency) employs blockchain technology to combat this fraud <cit.>. Blockchain improves information transparency throughout the supply chain and reduces the likelihood of data manipulation and vulnerability to crashes, fraud, and hacking <cit.>.
Reduced fraud can increase the supply chain's transparency and traceability, which benefits the transition to a circular economy. * To reduce overproduction: Blockchain technology can help in the reduction of overproduction by enabling a more effective supply chain, which will lower the consumption of raw materials and resources and speed up the transition to a circular economy. To address issues in the fast fashion supply chain, a blockchain-based system has been proposed. The three stages of a fashion item’s life — prior to production, during production, and following production — were examined. The suggested system architecture is accessible to everyone, including fast-fashion businesses, designers, merchants, and manufacturers. Everyone must share information, keep track of inventory, and collaborate on forecasting, planning, and gap-filling for the system to function. This can make the entire process circular by reducing inventories and overproduction. The authors in <cit.> presented Ethereum-based smart contracts as a means to address medical overproduction and underconsumption. The system employs four smart contracts to capture events automatically and safeguard the integrity and provenance of the data. It sets rules for the medical supply chain's phases of agreement, production, delivery, and use. Overproduction depletes resources and increases the risks for the circular economy. Xu et al. <cit.> suggested a blockchain-based system for keeping track of the electronics supply chain and identifying hardware based on certificate authorities (CAs), which would make the system less vulnerable and reduce overproduction. §.§ RQ2: How blockchain technology can facilitate the Reuse in CE? Reuse is the use of discarded products, components, or materials for the same purpose for which they were originally designed, with minimum modification. When reduction is not possible, reusing is the next best option. Blockchain technology could facilitate a decentralized marketplace for reusing goods. Increasing the transparency and verifiability of information enables secondary markets for old goods and materials. Using blockchain technology, everyone can determine the quality of second-hand items <cit.>. On a blockchain, real-time information about reused products and resources can aid the circular economy movement. Shojaei et al. <cit.> performed a life cycle analysis on HVAC (heating, ventilation, and air conditioning) products, such as air conditioners, package units, gas furnaces, and split system heat pumps, using Hyperledger Fabric and a web interface. They kept track of the life cycles of the products to assist in decision making and proactive planning for the maximum amount of material reuse. Nandi et al. <cit.> proposed blockchain for repairability and reuse of medical equipment. Locating devices will be used to store or deliver replacement parts. By using a public blockchain, designs for 3D-printable products will be shared for repair and reuse, also mitigating concerns over property rights. To maintain circular supply chain management in the fast fashion industry, a system architecture has been designed that implements blockchain for material reuse management at its application layer. The study in <cit.> investigated how blockchain technology can help create a secondary market for previously used leather handbags. The study delved deeper into how to keep track of used goods to facilitate the expansion of the secondary market.
The results indicate that used-goods trading platforms can be established if products and their life cycles can be monitored. This would boost secondary market performance and might even make primary production obsolete. Currently, construction and demolition debris is treated as a burden, although it is a consequence of the construction process. This debris can be reused and traded, and blockchain technology could be used to develop a universal waste management system that treats garbage as a resource <cit.>. According to <cit.>, blockchain technology can provide a decentralized used-goods market. With the support of information transparency and verifiability of used goods' quality and condition, the transition to a circular economy can move even more quickly and may produce new goods that utilize the secondary market. In another study, authors in <cit.> mentioned Cablenet, noting that the company resells its circular assets if their utility exceeds a certain threshold. This facilitates the economic reuse of products. Another study implemented a system to reuse wastewater utilizing IoT, machine learning, and IBM's Hyperledger Fabric blockchain. Industries receive tokens in the form of cryptocurrency based on how much waste they reuse. §.§ RQ3: How blockchain technology can facilitate the Recycle in CE? Recycling means disassembling products into their component parts and dissolving or reprocessing them into new forms. Figure <ref> illustrates the waste recycling procedure. A blockchain-based product analysis enables the transition toward a circular business model. Using blockchain, it is possible to track all products from their point of origin through their sale and recycling. One study implemented a circular supply chain using Hyperledger Fabric and a PoC (proof of concept) consensus mechanism. A circular supply network, including manufacturers, reverse logistics service providers, recycling centers, selection centers, and a landfill, was modeled. Each is a participant in the permissioned blockchain. Here, the selection center collects all recyclable garbage, and the recycling center recycles and distributes the items to the manufacturers. Recereum is a blockchain-based platform for profiting from trash and recyclables <cit.>. This blockchain facilitates direct communication between users and the trash collection provider. The Recereum network rewards users with Recereum coins, the blockchain's native currency, based on the value of their recyclables. Ethereum powers the Recereum network. Chidepatil et al. <cit.> introduced a blockchain-based, multi-sensor, AI-driven system for recycling plastic waste. Participants validate digital data recorded as a transaction; this data can then be used to facilitate the recycling or repurposing of plastic products. To incentivize participation in the validation process, participants receive cryptocurrency rewards. One of the major problems of the digital circular economy is motivating rivals to trade data while preserving property rights and privacy and fostering trust for recycled products <cit.>. Smart contracts can be used to protect intellectual property rights and designs from counterfeiting and unauthorized usage. Cobalt Blockchain (COBC) has been offered 40,000 tons of cobalt concentrate per annum, with a minimum grade of 1% cobalt, from DRC artisanal mines, and uses blockchain to trace cobalt from the mines to the point of consumption, hence enabling cobalt recycling <cit.>. Shojaei et al.
developed a blockchain-based system to monitor the product lifecycle throughout the supply chain. They noted that material traceability and performance records can be used to boost future output. The record of products and materials in each facility, as well as their current condition, could facilitate active recycling. The authors in <cit.> mentioned that businesses could share recyclable waste without the use of intermediaries using blockchain technology, increasing their profit margin. Furthermore, depending on aspects like the quantity, quality, and reusability of the waste, smart contracts can be used to exchange waste. Furthermore, to ensure ownership rights for waste, recyclable data can also be recorded on blockchain. Using blockchain, information about the supply chain and the recycling status of products can be stored, updated, and published. Users will then be able to identify eco-friendly products before making a purchase, promoting the circular economy <cit.>. Consumers' eco-conscious behaviors can provide a significant boost to the Circular economy transition. Numerous companies reward crypto-tokens or cryptocurrency to customers who purchase environmentally friendly products. Recycling, waste reduction, local consumption, etc. are examples of eco-friendly consumer practices <cit.>. §.§ RQ4: How blockchain technology can facilitate the Recover in CE? In the circular economy, recovery refers to the extraction of resources and compounds from waste, by-products, and residues. It is feasible to confirm the authenticity of the recovered components using blockchain technology. Once chemicals or residues have been recovered, the associated data can be recorded on a blockchain, which allows for circular economy incentives to be implemented when the recovered components are used in another product. Using smart contracts, blockchain can record the terms and conditions of waste management and related initiatives, enabling a digital waste recovery process in addition to enhancing the circular economy <cit.>. The authors in <cit.> have provided solutions and business models to enable a circular economy, stating that blockchain is anticipated to address the inefficiencies of the traditional Extended Producer Responsibility (EPR) system by establishing a link between the product's origin and its recovery. This can accelerate the recovery process. As part of the EU Circular Foam program, Electrolux is working with polymer manufacturer Covestro to recover polyurethane foam from refrigerators <cit.>. Using blockchain can enable producers to record information about proper refrigerator disassembly and the most efficient method for foam recovery. Integrating blockchain with the “Aitana” artificial intelligence platform and Telefónica Tech's Blockchain-based TrustOs, Telefónica Tech and Exxita Be Circular created the “Green passport” for circular management of device life cycles <cit.>. The Green passport uses consumer information and device tracing mechanisms to promote device recovery. Green passports have been distributed in roughly 500,000 devices that are recovered annually. Efficient “product return management” will require data-driven decision-making in e-commerce reverse logistics, and blockchain application in logistics can play a significant role in value recovery <cit.>. Blockchain can record data about material wastage and copyright and recovery processes. Consequently, any recovery facility or remanufacturer can trace products and implement recovery strategies <cit.>. 
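To connect the recovery-oriented use cases above with the smart-contract mechanics introduced in Section 2, the sketch below mimics in plain Python the logic such a contract might encode: lifecycle events are appended to a shared log, the product history can be traced by any recovery facility, and a token reward is released automatically once a recovery event is recorded. The product identifiers, actors, and reward rate are invented for illustration and do not correspond to any of the reviewed systems.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RecoveryContract:
    """Toy stand-in for a smart contract governing recovery incentives."""
    reward_per_kg: float = 2.0                       # hypothetical token rate
    events: List[Dict] = field(default_factory=list)
    balances: Dict[str, float] = field(default_factory=dict)

    def record_event(self, product_id: str, actor: str, event: str, kg: float) -> None:
        # On a real blockchain this append would be a signed transaction
        # validated by the network rather than a local method call.
        self.events.append({"product": product_id, "actor": actor,
                            "event": event, "kg": kg})
        if event == "recovered":
            # Conditional execution: the reward is released automatically
            # once the agreed condition (a recorded recovery) is met.
            self.balances[actor] = self.balances.get(actor, 0.0) + kg * self.reward_per_kg

    def trace(self, product_id: str) -> List[Dict]:
        # Any recovery facility or remanufacturer can reconstruct the
        # product history from the shared event log.
        return [e for e in self.events if e["product"] == product_id]

contract = RecoveryContract()
contract.record_event("fridge-042", "manufacturer", "produced", 45.0)
contract.record_event("fridge-042", "collector", "collected", 45.0)
contract.record_event("fridge-042", "recycler", "recovered", 6.5)  # e.g. PU foam
print(contract.trace("fridge-042"))
print(contract.balances)                              # {'recycler': 13.0}

In a deployed system the event log would live on the ledger itself, so the tamper-evidence illustrated earlier applies directly to the recovery records and the associated incentives.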
§.§ RQ5: What potential barriers could a blockchain-based Circular economy face? Blockchain has the potential to support a circular economy, but it must first overcome obstacles such as scalability, the need for sophisticated software development tools, consumer behavior, and complex systems <cit.>. Also, organizational, financial, technological, environmental and social barriers may prevail <cit.>. We have researched relevant literature on the constraints of blockchain-based business models for the circular economy and have identified eight significant barriers for shifting toward a circular economy. They are: i. lack of consumers' understanding and motivation; ii. the existing linear system; iii. an expensive process of 4R products; iv. scalability and slow transactions per second (TPS); v. very few experts in blockchain; vi. inter/intra organizational obstacles; vii. government regulations and policies and viii. high resource requirements for blockchain. We have provided a brief description of each barrier in Table <ref> and referred to related studies that mentioned it. § DISCUSSION The traditional “take-make-dispose” economic model is rapidly degrading the environment around us. Therefore, the transition to a circular economy is vital for everyone. Though blockchain is in its early stages, it has already showcased enormous potential. However, there are several obstacles that prevent it from being used to its fullest capacity. To overcome the constraints of this technology, several forces must collaborate and a massive research effort is necessary. This study presents a comprehensive analysis of the many blockchain-based solutions that serve as foundations or drivers in the creation and implementation of CE models. In this paper, we have used research questions to investigate the potential role of blockchain technology in facilitating the transition to a circular economy within the context of the 4R framework, as well as the challenges inherent to a blockchain-based CE. We summarise our findings in this section. From the standpoint of our research questions, we first present the following summary. We used five categories to determine how relevant each study was to the 4Rs: (1) Reduce, (2) Reuse, (3) Recycle, (4) Recover, and (5) Applicability to all four ideas, as shown in Table <ref>. The most striking finding of Table <ref> is that most of the research is primarily focused on the “reduce” CE notion. This fits well with the prioritized ordering of the CE frameworks. Putting “reduce” ahead of “reuse”, “recycle” and “recover” is crucial because it helps avoid problems such as quality degradation that can occur during recycling and reuse, the consumption of resources required by recycling and restoration, and so on. With “reduce”, potential residual asset flows are cut off well before the product even goes into circulation. On the other hand, only a few solutions were pertinent to the “recover” idea by itself. And unfortunately, only one study demonstrated applicability to all four concepts. Not only are there few theoretical examples of how CE might profit from blockchain technology, but there are also few actual case studies that deal with sustainability challenges. Since all of the 4R concepts are closely related to each other, finding a solution for one CE concept does not mean that it cannot also be used for another CE concept. That is why it is important to study this 4R approach in greater depth.
Table <ref> also shows that the most frequently cited barriers in the transition to a circular economy are the lack of customer knowledge and motivation, inter/intra-organizational obstacles, and high resource requirements for blockchain. Figure <ref> presents the percentage of frequently cited barriers in the reviewed studies. A further significant observation is that several of the solutions utilized technology from many categories. This is because IoT, blockchain, machine learning, and AI are intertwined. Integrating blockchain with IoT in a supply chain, for instance, might boost business performance, as IoT devices are not only more productive than people but also make fewer mistakes in inventory management. In addition, it might be helpful to provide real-time traceability of goods within storage facilities, warehouses, or other places, which can help reduce product damage and maximize their utility. § CONCLUSION AND FURTHER RESEARCH SCOPES There is no doubt that blockchain technology can bring benefits to the notion of circular economy. By altering the current state of recordkeeping and the value proposition, blockchain will accelerate the entire process. However, blockchain has obstacles to overcome. This article summarizes previous research in this field and concludes that the 4R-framework of circular economy (Reduce, Reuse, Recycle, Recover) can be successfully implemented with blockchain serving as a key enabler. This article classifies blockchain's role as a CE enabler into four distinct categories: (1) promoting a circular economy by rewarding NFTs and cryptocurrencies; (2) enhancing the transparency of the product life cycle; (3) reducing operational costs and enabling efficient systems; and (4) enhancing organizational performance through data sharing. These four categories act as catalysts for the implementation of a circular economy by extending product life, decreasing resource consumption, and providing transparency for reused and recovered products. In addition, we identified eight barriers to a blockchain-based circular economy, including a lack of consumer understanding and motivation, the existing linear system, an expensive process for 4R products, scalability and slow transactions per second (TPS), very few blockchain experts, inter/intra-organizational obstacles, government regulations and policies, and high resource requirements for blockchain. Simply put, blockchain is a technology, but the circularity of the economy is contingent on the vision and strategies selected by businesses to govern their processes. Blockchain can be an effective option, but additional research and optimization are necessary to expand its applications.
http://arxiv.org/abs/2408.12203v1
20240822082335
Ultra-broadband non-degenerate guided-wave bi-photon source in the near and mid-infrared
[ "Franz Roeder", "Abira Gnanavel", "René Pollmann", "Olga Brecht", "Michael Stefszky", "Laura Padberg", "Christof Eigner", "Christine Silberhorn", "Benjamin Brecht" ]
quant-ph
[ "quant-ph", "physics.optics" ]
1 Paderborn University, Integrated Quantum Optics, Warburger Str. 100, 33098 Paderborn, Germany 2 Paderborn University, Institute for Photonic Quantum Systems (PhoQS), Warburger Str. 100, 33098 Paderborn, Germany franz.roeder@uni-paderborn.de § ABSTRACT The latest applications in ultrafast quantum metrology require bright, broadband bi-photon sources with one of the photons in the mid-infrared and the other in the visible to near infrared. However, existing sources based on bulk crystals are limited in brightness due to the short interaction length and only allow for limited dispersion engineering. Here, we present an integrated PDC source based on a Ti:LiNbO_3 waveguide that generates broadband bi-photons with central wavelengths at 860 nm and 2800 nm. Their spectral bandwidth exceeds 25 THz and is achieved by simultaneous matching of the group velocities and cancellation of group velocity dispersion for the signal and idler field. We provide an intuitive understanding of the process by studying our source's behaviour at different temperatures and pump wavelengths, which agrees well with simulations. § INTRODUCTION Broadband sources of non-degenerate bi-photons are required for quantum metrology applications, especially measurements with undetected photons. Here, such a source can be used as the active optical element within a so-called SU(1,1)-interferometer or nonlinear interferometer <cit.>. At the same time, sources with strong time-frequency entanglement and high spectral bandwidths provide short correlation times which are key for ultrafast quantum spectroscopy applications, such as quantum optical coherence tomography, Fourier transform infrared spectroscopy with undetected photons, or entangled two-photon absorption <cit.>. Recent sources of non-degenerate broadband quantum light are based on bulk nonlinear materials that employ group-velocity matching <cit.>, are poled aperiodically <cit.> or employ ultra-thin crystals, thus relaxing the phase-matching condition <cit.>. However, these sources are limited in brightness and often do not provide collinear emission of the generated photons. The use of long periodically poled nonlinear waveguides provides a means to overcome these limitations <cit.>. Long waveguides, however, are usually associated with narrow spectra. To achieve ultra-broadband emission from waveguides, we must employ higher-order dispersion engineering techniques. As a first step in that direction we showed in previous works that group-velocity matching of signal and idler photons in a periodically poled Ti:LiNbO_3 waveguide can provide spectral bandwidths of more than 7 THz with correlation times below 100 fs and a spectral brightness exceeding 10^6pairs/s· mW · GHz <cit.>. Here, we present a PDC source that features correlation times around 25 fs, resulting from more than 25 THz of spectral bandwidth and that generates highly non-degenerate bi-photons at central wavelengths of 860 nm and 2800 nm. We achieve this via simultaneous matching of the signal and idler group velocities as well as operating at a point of zero total group velocity dispersion for these fields. Operating at this zero group velocity dispersion point is a technique often found in fiber-based sources that allow for flexible dispersion engineering, but exhibit lower nonlinear coefficients <cit.> due to the fact that these sources typically utilize the χ^(3) nonlinearity. 
Limitations in brightness can be overcome in integrated waveguides that employ the χ^(2) component and thereby inherently provide a higher brightness. First demonstrations in the novel platform of thin-film lithium niobate showed the potential to use dispersion engineering for achieving broadband single photon generation <cit.>. Alternatively, one can also use the long-established Ti:LiNbO_3 waveguide platform, which is more suitable for applications that couple to free-space or fibers, and where fabrication is both standardised and repeatable. In this paper, we first present our method for engineering the PDC process and the necessary conditions for ultra-broadband bi-photon generation. We then introduce the setup for the characterization of the generated PDC signal photons and compare the measured spectra to our simulations. We investigate the dependence of the emission on the waveguide temperature as well as the pump wavelength. Thereafter, we evaluate the change in maximally achievable bandwidth from the source when operating at pump wavelengths away from the working point. § PDC PROCESS ENGINEERING The spectral characteristics of a PDC waveguide source pumped by a continuous wave (cw) laser are given by the so-called joint spectral amplitude (JSA) <cit.>: f(Δω) = sinc(Δβ (Δω)L/2)e^iΔβ(Δω)L/2. Here, Δω = ω_s - Ω_s = -(ω_i-Ω_i) is the signal (idler) frequency detuning from the central frequency Ω_s (Ω_i). The sign change reflects the fact that signal and idler energies must add up to the pump energy such that ħΩ_p=ħω_s + ħω_i, with Ω_p being the frequency of the pump laser. L is the length of the waveguide. The exponential term in this expression contains the phase caused by the dispersion of the waveguide and Δβ is the phase mismatch between the pump and the generated signal and idler fields. This phase mismatch Δβ can be expanded as <cit.>: Δβ(ω_s,ω_i) = β_p(ω_s+ω_i) - β_s(ω_s)-β_i(ω_i)-2π/Λ ≈Δβ^(0) + (κ_s - κ_i)Δω + 1/2 (η_s + η_i)Δω^2 - η_p Δω^2 + O(Δω^3) Here, the 0-th order phase mismatch Δβ^(0) = β_p(Ω_s+Ω_i)-β_s(Ω_s)-β_i(Ω_i)-2π/Λ is set to zero by an appropriate choice of the poling period Λ to ensure phase matching for the central frequencies. The terms κ_s,i=(∂β_p/∂ω|_Ω_s + Ω_i- ∂β_s,i/∂ω|_Ω_s,i) are related to the group velocities (GV) of the signal and idler photons, respectively, while the terms η_s,i = (∂^2β_p/∂ω^2|_Ω_s+Ω_i - ∂^2 β_s,i/∂ω^2|_Ω_s,i) are related to their group velocity dispersion (GVD), in both cases relative to the pump field, for which η_p = ∂^2β_p/∂ω^2|_Ω_p. In the case of a cw pump, the contributions with derivatives containing β_p cancel each other. Since the bandwidth of the generated bi-photon state is set by the width of the phase matching function, it is crucial to set the phase mismatch to zero in all these higher orders for a broad range of frequencies. To this end, we expand the phase mismatch up to second order in the frequency detuning Δω: Δβ (Δω) ≈( - ∂β_s/∂ω|_Ω_s + ∂β_i/∂ω|_Ω_i) Δω - 1/2( ∂^2 β_s/∂ω^2|_Ω_s + ∂^2 β_i/∂ω^2|_Ω_i) Δω^2 The first order of the phase mismatch can be cancelled, if the signal and idler GVs are matched, i.e. (-∂β_s/∂ω|_Ω_s+∂β_i/∂ω|_Ω_i=0). This condition is called GV matching which has already been exploited for dispersion engineering in our group <cit.>. In addition, by ensuring that the GVD of signal and idler has equal magnitude but opposite sign, the second order of the phase mismatch can be cancelled, i.e. (∂^2 β_s/∂ω^2|_Ω_s+∂^2 β_i/∂ω^2|_Ω_i=0). We refer to this as GVD cancellation. 
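To make the role of the individual Taylor orders tangible, the short numerical sketch below evaluates the phase-matching intensity |sinc(ΔβL/2)|² for a 40 mm long waveguide using the truncated expansion above. The dispersion coefficients are placeholder values chosen only to show the qualitative trend — they are not the modelled Ti:LiNbO_3 waveguide parameters — so the printed bandwidths are purely illustrative. Successively switching off the first- and second-order terms mirrors the progression from an unengineered source to GV matching and finally to simultaneous GV matching and GVD cancellation.

import numpy as np

L = 0.04                                                  # waveguide length in m (40 mm)
dw = np.linspace(-2*np.pi*40e12, 2*np.pi*40e12, 20001)    # detuning (rad/s)

def spectrum(kappa_diff, eta_sum, beta3):
    """|sinc(dbeta*L/2)|^2 with dbeta truncated after third order in the detuning."""
    dbeta = kappa_diff*dw - 0.5*eta_sum*dw**2 + beta3*dw**3/6
    return np.sinc(dbeta*L/(2*np.pi))**2                  # np.sinc(x) = sin(pi x)/(pi x)

def fwhm_THz(s):
    above = dw[s > 0.5]
    return (above.max() - above.min())/(2*np.pi*1e12)

# Placeholder coefficients (s/m, s^2/m, s^3/m); only their relative size matters here.
print(fwhm_THz(spectrum(3e-10, 2e-25, 1e-40)))   # GV mismatch dominates: ~0.1 THz
print(fwhm_THz(spectrum(0.0,   2e-25, 1e-40)))   # GV matched: ~10 THz
print(fwhm_THz(spectrum(0.0,   0.0,   1e-40)))   # + GVD cancelled: several tens of THz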
This GVD cancellation condition is key to supporting the extreme bandwidths that are required for future applications in quantum metrology. The consequences of GV matching and GVD cancellation on the generated signal and idler fields are depicted in Fig. <ref>. Here, we illustrate the resulting spectral shape and phase of the generated non-degenerate bi-photons and the joint spectral amplitude from samples with optimized periodic poling and: no dispersion engineering (a), GV matching (b), and simultaneous GV matching and GVD cancellation (c). In the case of no dispersion engineering, signal and idler experience a walk-off due to different group velocities at the generated wavelengths, which leads to a different linear phase for the two colors. This walk-off can be prevented by matching the group velocities such that signal and idler emerge at the same time, as illustrated in Fig. <ref> b). In that case, the photons obtain, to leading order, a quadratic phase and the marginal output spectra are broadened due to the larger overlap of phase matching and pump function. Finally, engineering the source to achieve both GV matching and GVD cancellation (Fig. <ref> c)) results in a bi-photon state whose phase profile is dominated by third order effects. In the resulting JSA a further broadened overlap of the phase matching and pump function is observed, and thus the generation of ultra-broadband bi-photons. § EXPERIMENTAL SETUP The source is based on waveguides produced in z-cut LiNbO_3 fabricated in-house by in-diffusion of photo-lithographically patterned titanium strips with widths of 18 µm, 20 µm, and 22 µm. To allow for maximum flexibility, the sample features poling periods ranging from 5.8 µm to 6.3 µm. We realize a type-II phase-matching that allows us to exploit the birefringence of the material in order to achieve simultaneous GV matching and GVD cancellation. The sample has a length of 40 mm and comprises 15 groups of three poled waveguides each. The mask layout is shown in Appendix 7.1. We characterize the propagation losses of the samples by using a low-finesse Fabry-Pérot method <cit.>. We measured average losses of around 0.2 dB/cm for TM-polarized light at 3000 nm; TE polarization has also been measured and experiences lower losses due to the waveguide geometry. A typical measurement for a whole waveguide chip is presented in Fig. <ref> in Appendix 7.2. The setup to measure the generated signal spectrum is depicted in Fig. <ref>. We use one of two different external cavity diode lasers (TOPTICA DL pro) as pump lasers to address working points at different poling periods and waveguide temperatures across the sample. These lasers can be tuned from 652 nm to 655 nm and from 642 nm to 646 nm, respectively. We monitor the wavelength of the lasers with the same spectrometer that we use for measuring the PDC spectra. For the first laser the waveguide sample is heated to about 230 ^∘C to achieve phase matching, while a temperature of around 200 ^∘C is sufficient for the second laser. We designed the sample to operate at these temperatures to mitigate photorefractive effects <cit.>. We set the sample temperature with a home-built copper oven via a resistive heating cartridge which is driven by a temperature controller with integrated PID loop (Oxford Instruments MercuryiTC).
The generated signal photons at a central wavelength of 860 nm are separated from the pump field by a 735 nm long-pass filter, coupled into a single mode fibre and detected using a single-photon sensitive spectrometer (Andor Shamrock SR-500i spectrograph with Newton 970P EMCCD-camera). § SOURCE CHARACTERIZATION In order to achieve the desired broadband emission, the system needs to be precisely tuned to the working point, i.e. good dispersion engineering requires a reliable model that also captures higher order effects in the phase mismatch. We are therefore comparing our developed simulations against the measurements not only at the ideal working point but also at detuned pump wavelengths. To this end, we characterize our source in terms of changes in the spectral emission when varying the temperature of the waveguide and thus its refractive index at various pump wavelengths. The change in temperature primarily leads to a shift of the phase matching function with respect to the pump function in the JSA, c.f. Fig. <ref>. We first start to investigate the temperature tuning with the pump laser set to the design wavelength of 652.3 nm. In Fig. <ref>, the simulated spectra at different temperatures around the working point are compared to the measured ones. The simulation in Fig. <ref> a) shows the resulting spectrum of the signal photons for varying temperatures on the y-axis. The corresponding idler wavelengths are indicated on the top x-axis. It can be seen that the signal emission shifts from longer towards shorter wavelengths for increasing temperatures. Furthermore, a broadband emission is observed only for a specific temperature of 230 ^∘C. At temperatures below and above this working point, the bandwidth of the emitted PDC decreases drastically. The measurements shown in Fig. <ref> b) clearly confirm this behaviour. In this figure, the measured, normalized signal spectra are plotted together with the simulated spectra along the corresponding cuts on the temperature axis in the simulations, indicated via dashed lines of the corresponding colors. Both, the relative shift and the change in bandwidth are in good agreement. However, a systematic offset of around 40 K in the temperature and 2.4 nm in the pump wavelength had to be introduced in this and the following simulations. This deviation can be attributed to inaccuracies in our Sellmeier equations that are used to model the process. These Sellmeier equations are obtained by modelling the waveguide with the ideal fabrication parameters and are therefore also influenced by fabrication tolerances. However, these deviations only affect the first order terms, while the higher order terms are captured accurately. At this optimal operation point the source covers wavelengths from 2400 nm to 3000 nm, corresponding to a spectral bandwidth of 25 THz. Based on considerations from earlier work <cit.>, we extract a Fourier limited correlation time of less than 25 fs from this spectrum, which enables high resolution ultra-fast spectroscopic applications. In contrast to sources with only GV matching, for which a suitable operation point can be found for many pump wavelengths <cit.>, in our case there is only one single operation pump wavelength for a given poling period due to the added constraint of GVD cancellation. Pump wavelengths that are offset from the design wavelength show a distinctive temperature tuning behaviour that has to be taken into account when operating such a source. This can be seen in Fig. 
<ref> in the Appendix where the pump wavelengths are set below, at, and above the design wavelength. For a pump wavelength that is lower than the design wavelength, only a limited increase in emission bandwidth can be reached via temperature tuning. This behaviour is illustrated in Fig. <ref>. The simulation (Fig. <ref> b)) uses the temperature and pump wavelength offset that was identified in Fig. <ref>. From these simulations, the bandwidth of the signal spectrum was extracted using a full width at 80 % of the maximum. This criterion has been chosen to avoid the influence of side lobes in the non-perfect experimental phase matching. Indeed, two data points in Fig. <ref> c) show a more narrow bandwidth than expected for a pump wavelength of 653 nm which is caused by phase matching side lobes being evaluated instead of the main peak. The error bars consider an uncertainty in the set temperature of 0.2 K and a 5 % interval of the signal height for the width estimation. It can be seen that the temperature required to achieve the maximum bandwidth is lower for higher pump wavelengths. As the pump wavelength is reduced and ultimately matches with the design wavelength, the maximal bandwidth increases while the temperature range over which the increase in bandwidth happens decreases. The same behaviour can be observed for the experimental results. An implication of these measurements is that the accepted temperature range can be extended at the cost of maximal bandwidth for a detuned pump wavelength, which might be useful for applications where a reduced bandwidth is suitable with less critical temperature stability. Finally, we investigate the behaviour of our source at a pump wavelength that is lower than the design wavelength. As depicted in Fig. <ref> a), increasing the waveguide temperature first leads to a broadband emission at low wavelength, followed by one at longer wavelength at higher temperatures. This peculiar behaviour can be reproduced by the simulations that also show two regions of more broadband emission and are depicted by the dotted lines in the figure. Although there are differences in the structure between the measured results and the simulations, the two qualitatively match in their general behaviour. Deviations in the waveguide temperature are the most likely cause for this. The temperatures in Fig. <ref> b) are lower than the previous ones as the second pump laser with a smaller wavelength of 644.25 nm has been used to generate these signal spectra. The full picture of the temperature tuning behaviours for different pump wavelengths is presented in Fig. <ref> in the Appendix. Thus, characterization of the temperature tuning behaviour of our source for different pump wavelengths shows that careful source design is required to reach the desired broadband emission. The observed behaviour has been captured by our bespoke model and agrees very well with the presented experimental data. A high precision in modelling and experimentally reaching the working point is needed due to the fact that for a given poling period, the correct pump wavelength and waveguide temperature have to be chosen in order to fulfill both GV matching and the cancellation of GVD for the signal and idler fields. This understanding is crucial for future experiments that involve this or similar sources. 
Despite expecting a high brightness comparable to the one reported in <cit.>, the actual brightness of the source could not be measured directly, because the lack of single-photon detectors in the mid-infrared prohibits coincidence detection: typical detectors in this range show a high thermal noise level and are not sensitive to single photons. However, a lower bound of 5·10^3 counts/s·mW·GHz can be estimated from measurements of the signal rate. This value makes the source at least comparable to, or even brighter than, bulk PDC sources. For more details, see Appendix 7.4. § DISCUSSION AND OUTLOOK We have presented the design and characterization of a broadband PDC source that generates bi-photons at non-degenerate wavelengths, one in the NIR at a central wavelength of 860 nm and the other in the MIR at 2800 nm. For this purpose, we fabricated a 40 mm long waveguide chip with waveguides of widths 18 µm, 20 µm, and 22 µm and verified guiding of light at 3000 nm with low losses (< 0.2 dB/cm). The spectral bandwidth of the generated bi-photons reaches more than 25 THz for both signal and idler, which is achieved via simultaneous GV matching and GVD cancellation. The theoretically predicted dependence of the output spectrum on the temperature and pump wavelength was verified. Furthermore, we estimated the achieved brightness to exceed 5·10^3 counts/s·mW·GHz, which is at least comparable to similar PDC sources. Our source offers great potential for applications in ultrafast quantum spectroscopy and sensing. Due to the broad spectral coverage as well as the bright emission resulting from the confinement over the 40 mm waveguide length, this source overcomes the typical trade-off between brightness and bandwidth. Furthermore, non-degenerate emission allows for utilizing the source in the context of nonlinear interferometers to perform measurements with undetected photons, where an object under test can be probed by the photons in the mid IR while detection happens in the near IR. § ACKNOWLEDGEMENTS F.R. is part of the Max Planck School of Photonics supported by the German Federal Ministry of Education and Research (BMBF), the Max Planck Society, and the Fraunhofer Society. We acknowledge financial support from the Federal Ministry of Education and Research (BMBF) via the grant agreement no. 13N16352 (E2TPA). This project has received funding from the European Union’s Horizon Europe research and innovation programme under grant agreement no. 101070700 (MIRAQLS). § APPENDIX §.§ Sample Layout The photo-mask containing the waveguides and poling structure is presented in Fig. <ref>. §.§ Linear optical losses The linear optical losses measured with the Fabry-Pérot method, cf. <cit.>, at 3000 nm in TM polarization for the waveguide chip used above, containing 45 waveguides, are shown in Fig. <ref>. The average losses are below 0.2 dB/cm. From these measurements, the best waveguides are chosen for further experiments. The error bars are associated with the thermal noise of the mid IR detector used. §.§ Temperature characteristics An overview of the three possible temperature tuning behaviours for varying pump wavelengths around the design wavelength is shown in Fig. <ref>. When the pump wavelength is larger than the optimal value, no broadband emission, or only a less broadband one, is reached, as no region parallel to the x-axis forms during temperature tuning. For the design wavelength, this region is present and spans more than 50 nm in the given example plot.
If the pump wavelength is chosen too low, the formation of up to three peaks in the signal spectrum is possible for a specific temperature. Furthermore, two regions of less broadband emission form while increasing the temperature, first at shorter wavelengths around 800 nm and later at longer wavelengths around 875 nm. §.§ Brightness estimation A lower bound for the brightness of the source is estimated from the counts measured when detecting the signal field using an avalanche photodiode (APD). Although coincidences between signal and idler photons cannot be measured, which would allow one to distinguish counts originating from the bi-photons from background and noise, the simultaneous measurement of photon counts and spectra makes it possible to calculate the bi-photon rate, normalized to the pump power and spectral width. This estimate does not include coupling efficiencies into the detection path and therefore only provides a lower bound. We calibrate the counts on the spectrograph against the ones on the APD by measuring the same signal with sufficient attenuation on the APD. This allows us to subtract background and fluorescence counts from the spectrum and calculate a lower bound for the number of generated pairs. The brightness measured in this way, corrected for fluorescence and background noise, is 5 · 10^3 counts/s·mW·GHz. If one takes into account the estimated coupling efficiency of 20 %, this leads to a number that is brighter than most bulk sources and comparable to other waveguide sources, cf. <cit.>, while achieving a large bandwidth.
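For clarity, the normalization underlying this lower bound can be written as a one-line estimate. The sketch below uses our own variable names, and coupling and detection efficiencies are deliberately omitted, as in the text:

```python
def brightness_lower_bound(signal_counts_per_s, background_counts_per_s,
                           pump_power_mW, bandwidth_GHz):
    """Pair-rate lower bound in counts/(s mW GHz): background-corrected
    signal counts normalized to pump power and spectral width."""
    return ((signal_counts_per_s - background_counts_per_s)
            / (pump_power_mW * bandwidth_GHz))
```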
http://arxiv.org/abs/2408.12506v1
20240822160252
Effect of Frequency-Dependent Viscosity on Molecular Friction in Liquids
[ "Henrik Kiefer", "Domenico Vitali", "Benjamin A. Dalton", "Laura Scalfi", "Roland R. Netz" ]
cond-mat.soft
[ "cond-mat.soft", "physics.flu-dyn" ]
APS/123-QED rnetz@fu-berlin.de Fachbereich Physik, Freie Universität Berlin, Arnimallee 14, 14195 Berlin, Germany § ABSTRACT The relation between the frequency-dependent friction of a molecule in a liquid and the hydrodynamic properties of the liquid is fundamental for molecular dynamics. We investigate this connection for a water molecule moving in liquid water using all-atomistic molecular dynamics simulations and linear hydrodynamic theory. We analytically calculate the frequency-dependent friction of a sphere with finite surface slip moving in a viscoelastic compressible fluid by solving the linear transient Stokes equation, including frequency-dependent shear and volume viscosities, both determined from MD simulations of bulk liquid water. We also determine the frequency-dependent friction of a single water molecule moving in liquid water, as defined by the generalized Langevin equation from MD simulation trajectories. By fitting the effective sphere radius and the slip length, the frequency-dependent friction and velocity autocorrelation function from the transient Stokes equation and simulations quantitatively agree. This shows that the transient Stokes equation accurately describes the important features of the frequency-dependent friction of a single water molecule in liquid water and thus applies down to molecular length and time scales, provided accurate frequency-dependent viscosities are used. The frequency dependence of the shear viscosity of liquid water requires careful consideration of hydrodynamic finite-size effects to observe the asymptotic hydrodynamic power-law tail. In contrast, for a methane molecule moving in water, the frequency-dependent friction cannot be predicted based on a homogeneous model, which suggests, supported by the extraction of a frequency-dependent surface-slip profile, that a methane molecule is surrounded by a finite-thickness hydration layer with viscoelastic properties that are significantly different from bulk water. Subject Areas Soft Matter, Statistical Physics, Fluid Dynamics, Complex Systems Effect of Frequency-Dependent Viscosity on Molecular Friction in Liquids Roland R. Netz August 26, 2024 ======================================================================== § INTRODUCTION The friction force acting on a solute molecule in a liquid environment exhibits a delayed non-Markovian response due to the finite relaxation time of the solvating liquid degrees of freedom <cit.>. Such memory effects occur on time scales that range between sub-picoseconds up to microseconds and even seconds, depending on the type and complexity of the system <cit.>. Including time- or frequency-dependent friction in an appropriate theoretical framework allows for the accurate modeling of macromolecular dynamics in liquid environments <cit.> and for the proper viscoelastic description of soft matter <cit.>. A fundamental question concerns the connection between the macroscopic hydrodynamic equations and the molecular friction acting on a particle or a molecule in a fluid. Numerous studies investigated this connection by comparing the frequency-dependent friction acting on a particle, as described by the generalized Langevin equation, with the friction obtained by solving the hydrodynamic equations for the fluid flow around a spherical particle <cit.>. Pioneering work in this direction was done by Zwanzig et al. <cit.> and later by Metiu et al. 
<cit.>, who obtained the time- or frequency-dependent friction by solving the linearized Stokes equation for a spherical particle in the presence of a frequency-dependent shear viscosity, described by a Maxwell model with a single relaxation time. Since a considerable discrepancy between the friction obtained from the solution of the Stokes equation and the friction derived from the velocity autocorrelations obtained in molecular dynamics simulations was found, especially at high frequencies, it was concluded that hydrodynamic theory does not work on molecular time and length scales <cit.>. Such a breakdown of hydrodynamics would most plausibly be explained by spatial non-locality in the fluidic viscous response, which in principle could be treated in reciprocal space but would render the Stokes solution for the frequency-dependent friction around a sphere invalid. However, a critical limitation in the comparison in <cit.> is that a Maxwell model with a single relaxation time was used for the shear viscosity in the solution of the Stokes equation. In fact, viscosity spectra measured in experiments <cit.> and extracted from molecular dynamics (MD) simulations of water <cit.>, indicate pronounced deviations from a simple Maxwell model, especially at high frequencies in the THz regime. This is the frequency range where deviations between the friction from hydrodynamic predictions and molecular simulation were found, so it is unclear whether spatially homogeneous hydrodynamic theory breaks down or whether an inappropriate model for the shear viscosity was used. In the present work, we reconsider the connection between macroscopic hydrodynamics and molecular friction; for this, we consider a single water molecule in a liquid water environment. We first analytically calculate the frequency-dependent friction acting on a sphere using the linearized Stokes equation in the presence of frequency-dependent shear and volume viscosities, finite compressibility, and a finite surface slip <cit.>. In contrast to previous work <cit.> we do not use a phenomenological Maxwell model for the viscosities but rather employ frequency-dependent shear and volume viscosities extracted from MD simulations. We, in detail, investigate the influence of compressibility and the frequency dependence of the volume viscosity on the friction function at high frequencies. We finally compare the friction calculated from the transient Stokes equation with the friction extracted from MD simulations using the framework of the generalized Langevin equation. Using the surface-slip parameter and the sphere radius that appear in the hydrodynamic prediction of the friction as free fit parameters, we find that the frequency-dependent friction of a water molecule extracted directly from MD simulations is in good agreement with the hydrodynamic predictions for frequencies up to 10 THz. This establishes the long-sought link between macroscopic hydrodynamics and the friction acting on a molecule in a fluid and shows that the continuum hydrodynamic equations work for water down to the scale of a single water molecule. It turns out that the macroscopic shear viscosity of water shows pronounced multi-modal behavior as a function of frequency and thus cannot be described by a Maxwell model with a single relaxation time, which explains why previous attempts to derive the frequency-dependent molecular friction from hydrodynamic theory failed. 
The fitted hydrodynamic radius and slip length obtained from our comparison agree with recent results extracted from the translational and rotational diffusivities of a water molecule in liquid water<cit.>, which demonstrates that our approach is physically sound. The friction calculated from the Stokes equation exhibits a power-law tail for long times, attributed to the so-called Basset-Boussinesq force <cit.>. However, on the time scales reachable with MD simulations of water, the long time tail, which corresponds to a negative force, is completely dominated by the frequency-dependent shear viscosity, which produces positive friction up to times of a few picoseconds, and is for longer times masked by finite-size effects <cit.>, in perfect agreement between our hydrodynamic predictions and the MD simulation results. Our findings are supported by a comparison of the velocity autocorrelation function computed from the MD simulation and from the hydrodynamic friction including frequency-dependent viscosities and finite compressibility. We thus find that the macroscopic hydrodynamic equations work surprisingly well down to molecular time and spatial scales for homogeneous water, i.e., if one considers the motion of a single water molecule embedded in liquid water, if and only if the frequency-dependent shear viscosity of water is used. This shows that a wave-vector-dependent viscosity, which would appear in a formally exact formulation of the linear-response stress-strain relation, is not necessary. In contrast, the friction of a single methane molecule in liquid water is not well described by hydrodynamic theory using liquid water shear and volume viscosities. A reasonable extension of the theory is the introduction of a frequency-dependent slip coefficient. We, therefore, calculate the slip profile which leads to a perfect accordance between the friction of the MD simulation and hydrodynamic theory. The surface-slip profiles of the water and methane simulation include frequency ranges where they are negative, indicating that the local viscosity around the sphere must deviate from the macroscopic shear viscosity. It is known that methane is in the water surrounded by a clathrate-like highly-ordered structure. We thus conclude that for inhomogeneous liquids, i.e., for the motion of a molecule that differs from the surrounding liquid, a homogeneous hydrodynamic model has to be generalized to account for the possibly modified viscosity of the solvation layer around a moving host molecule, as recently found for nanoscopic tracer beads moving in hydrogels <cit.>. § THEORY §.§ Frequency-dependent friction of a sphere from hydrodynamic theory The frequency-dependent friction of a sphere, Γ^hyd(ω), is a complex-valued function that describes the fluid stress response to a small, oscillatory sphere motion and is defined as F_i(ω) = δ_ijΓ^hyd(ω) v_j(ω), where v_j(ω) is the frequency-dependent velocity of the sphere and F_i(ω) is an external force acting on the sphere with radius a, and the indices i,j ∈{x,y,z}. In our work, we define the spatial and temporal Fourier transformation (FT) of a function as f(k,ω) = ∫ dt d^3r f(r,t) e^- i(k_ir_i + ω t). The friction of a sphere in a liquid can be derived from the Navier-Stokes equation, which originates from local momentum conservation <cit.> and follows as Γ^hyd(ω) = 4πη(ω) a/3W^-1{(1+λ̂)(9+9α̂+α̂^2)(1+2b̂) + (1+α̂)[2λ̂^2(1+2b̂)+b̂α̂^2(1+λ̂)]}, where W is given by W = (2+2λ̂+λ̂^2)(1+b̂(3+α̂))+(1+α̂)(1+2b̂)λ̂^2/α̂^2. 
Finite slip at the spherical surface is described by the dimensionless slip length, b̂ = b/a. The dimensionless decay constants α̂ = aα and λ̂ = aλ are defined by α^2(ω) = i ωρ_0/η(ω) and λ^2(ω) = i ωρ_0/4η(ω)/3 + ζ(ω) - iρ_0c^2/ω, where c is the speed of sound and ρ_0 is the mean fluid mass density. The full derivation of Eq. (<ref>) is presented in Appendix <ref>. The shear viscosity η(ω) and volume viscosity ζ(ω) are defined by the stress tensor in the Navier-Stokes equation (Appendix <ref>). If the viscous response decays on length- and time scales that are small compared to those on which the velocity gradients of the fluid ∇_j v_i(r,t) varies, one can approximate the viscous response as frequency- and momentum-independent, which defines a Newtonian fluid. However, we will explicitly consider frequency-dependent viscosities in this work. The shear viscosity η(t) is calculated from the autocorrelation of the trace-less stress tensor. On the other hand, the volume viscosity ζ(t) quantifies a fluid's dissipative response to compression <cit.> and is crucial for describing processes such as sound propagation or shock waves <cit.>, it can be calculated from the autocorrelation of instantaneous pressure fluctuations. Often, the volume viscosity is neglected, which corresponds to the Stokes hypothesis <cit.>. However, previous simulations and experiments found that the volume viscosity for water is non-negligible and can even be larger than the shear viscosity <cit.>. Thus, we explicitly consider a non-vanishing volume viscosity and will carefully examine its influence on the frequency-dependent friction. §.§ Frequency-dependent particle friction from the generalized Langevin equation For a particle with mass m, the dynamics can be described by the generalized Langevin equation (GLE) <cit.> mr(t) = -∇ U[r(t)] - ∫_-∞^t dt' Γ(t-t') r(t') + F^R(t), where -∇ U[r(t)] is the force due to a potential, Γ(t) the friction function, often called the memory kernel, and F_R(t) the random force, which has zero mean and a variance of ⟨ F^R_i (t) F^R_j(t')⟩ = k_B T δ_ij Γ(|t-t'|), where k_BT is the thermal energy. The stationary friction coefficient of the particle γ_0 is determined by the integral over the memory kernel, i.e., γ_0 = ∫_0^∞Γ(t) dt. We assume isotropic fluids and thus consider only the x-component of the particle position. In the absence of a potential, U = 0, the solution of the GLE in Eq. (<ref>) in Fourier space reads for the particle velocity along x as v_x (ω) = F_R(ω)/Γ_+ (ω) + iω m, where we use the single-sided memory function Γ_+(t) = Γ(t) for t≥0 and Γ_+(t) = 0 for t< 0. In Appendix <ref>, we show by calculating the fluid momentum outside a moving sphere from the transient Stokes equation that the hydrodynamic friction Γ^hyd(ω) in Eq. (<ref>) does not include inertial effects inside the sphere. Thus, by comparing Eqs. (<ref>) and Eq. (<ref>) we conclude that Γ^hyd(ω) and Γ_+(ω) describe the same quantity, namely the frequency-dependent friction of a particle due to the surrounding fluid. § RESULTS AND DISCUSSION §.§ Frequency-dependent shear and volume viscosities We analyze MD simulations at temperature T=300 K for the SPC/E and TIP4P/2005 water models (see Appendix <ref> for simulation details), and compute the frequency-dependent shear η(ω) and volume viscosity ζ(ω) (see Appendix <ref>). The Newtonian fluid, as defined in Eq. (<ref>) in Appendix <ref>, is a standard model to describe large-scale and long-time hydrodynamics of liquid water <cit.>. 
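Before turning to the extracted spectra, we note that the friction formula in Eq. (<ref>), together with the definitions of α(ω), λ(ω) and b̂ above, is straightforward to evaluate numerically once η(ω) and ζ(ω) are known. The following minimal sketch is our own illustration; the density, speed of sound, radius and slip length in the example call are water-like placeholder values, not the fitted quantities discussed later:

```python
import numpy as np

def gamma_hyd(omega, a, b, eta, zeta, rho0, c):
    """Frequency-dependent friction of a sphere with slip in a compressible
    viscoelastic fluid (Eq. above); `eta` and `zeta` are the complex
    viscosities eta(omega) and zeta(omega) at the angular frequency `omega`."""
    alpha_h = a * np.sqrt(1j * omega * rho0 / eta)                        # a*alpha(omega)
    lam_h = a * np.sqrt(1j * omega * rho0 /
                        (4 * eta / 3 + zeta - 1j * rho0 * c**2 / omega))  # a*lambda(omega)
    b_h = b / a
    W = ((2 + 2 * lam_h + lam_h**2) * (1 + b_h * (3 + alpha_h))
         + (1 + alpha_h) * (1 + 2 * b_h) * lam_h**2 / alpha_h**2)
    num = ((1 + lam_h) * (9 + 9 * alpha_h + alpha_h**2) * (1 + 2 * b_h)
           + (1 + alpha_h) * (2 * lam_h**2 * (1 + 2 * b_h)
                              + b_h * alpha_h**2 * (1 + lam_h)))
    return 4 * np.pi * eta * a / 3 * num / W

# example call: constant (Newtonian) viscosities and water-like placeholder values
omega = 2 * np.pi * 1e12   # 1 THz
print(gamma_hyd(omega, a=1.5e-10, b=1.0e-10,
                eta=0.70e-3 + 0j, zeta=1.69e-3 + 0j, rho0=997.0, c=1500.0))
```

The example call uses constant (Newtonian) viscosities; as shown in the remainder of this section, for liquid water the frequency dependence of η(ω) and ζ(ω) must be retained in the THz range.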
However, in earlier experimental investigations and MD simulations, it was found that at high frequencies, typically in the THz regime, liquid water deviates from the Newtonian fluid model <cit.> and that the shear viscosity decreases at high frequencies <cit.>. In Fig. <ref>(a, b, c), we show the extracted shear viscosity in the time and frequency domain from both water models. The TIP4P/2005 model spectra are very similar to the SPC/E model; both exhibit a pronounced peak in the real and imaginary parts around 7-8 THz. The value of the steady-state shear viscosity for the SPC/E model of η_0 = ∑_jη_0,j = 0.70 mPa s is lower than the value η_0 = 0.84 mPa s for the TIP4P/2005 model <cit.>. We fit the shear viscosity with a sum of six exponential-oscillating functions <cit.> η(t) = Θ(t){∑_j=I^VIη_0,jτ_n,j/τ_o,j^2e^-t/2τ_n,j[1/κ_jsin(κ_j/2τ_n,jt) + cos(κ_j/2τ_n,jt)]}, where κ_j = √(4(τ_n,j/τ_o,j)^2-1), which in the frequency domain reads as η(ω) = ∑_j=I^VIη_0,j1+iωτ_n,j/1+iωτ_o,j^2/τ_n,j-ω^2τ_o,j^2, as described in Appendix <ref>. Depending on the value of κ, a viscosity component displays a single-exponential decay with oscillations (finite real part of κ), or is a sum of two non-oscillating exponentials (imaginary κ). We find the fit function for both water models to perfectly describe the MD data in Fig. <ref>(a, b, c). We provide the fit parameters in Table <ref> in Appendix <ref>, and the individual components for the SPC/E water shear viscosity in Appendix <ref>. Our shear viscosity model decomposes the viscosity spectrum into modes that describe distinct dynamical processes <cit.>. The oscillation component I is due to hydrogen-bond network topology changes, i.e., changes of nearest-neighbor water pairs, while component II is due to hydrogen-bond stretch vibrations of water pairs, they are both overdamped <cit.>. The large peak around 7 THz due to the exponential-oscillatory component III describes vibrations of hydrogen-bonded water pairs and agrees in position with infrared spectroscopy simulation studies <cit.>. We identify the remaining high-frequency modes IV, V, and VI by comparison with absorption spectra of simulated water <cit.> as librational modes, i.e., rotational vibrations of individual water molecules, the splitting is related to the fact that rotations between the three main water axes are not equivalent. In Fig. <ref>(d, e, f), we show the volume viscosity extracted from the MD data, which we determine from the autocorrelation of system pressure fluctuations. The real part of these spectra exhibits no distinct peak in the THz regime, contrary to the shear viscosity, and in agreement with previous simulation results <cit.>. We fit the volume viscosity data with a sum of five exponential-oscillatory components ζ(t) = Θ(t){∑_j=I^Vζ_0,jτ_v,j/τ_w,j^2e^-t/2τ_v,j[1/κ_jsin(κ_j/2τ_v,jt) + cos(κ_j/2τ_v,jt)]}, where κ_j = √(4(τ_v,j/τ_w,j)^2-1). The total complex volume viscosity in the frequency domain is given by ζ(ω) = ∑_j=I^Vζ_0,j1+iωτ_v,j/1+iωτ_w,j^2/τ_v,j-ω^2τ_w,j^2. We find a steady-state value of ζ_0 = ∑_jζ_0,j = 1.69 mPa s for the SPC/E, and 2.04 mPa s for the TIP4P/2005 model, slightly lower than the experimental value (2.4 mPa s for 298 K <cit.>) and comparable to results from previous MD simulations <cit.>, which yielded for TIP4P/2005 water ζ_0 = 2.07 mPa s at 298 K and ζ_0 = 2.01 mPa s at 303 K, and for SPC/E water ζ_0 = 1.57 mPa s at 298 K and ζ_0 = 1.45 mPa s at 303 K. 
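For reference, the frequency-domain fit models in Eqs. (<ref>) and (<ref>) can be evaluated with a few lines of code. The sketch below is our own illustration; the two-mode parameter values are placeholders and not the fitted values listed in Table <ref> of Appendix <ref>:

```python
import numpy as np

def multimode_viscosity(omega, amp, tau_n, tau_o):
    """Frequency-domain viscosity model: sum over modes j of
    amp_j (1 + i w tau_n_j) / (1 + i w tau_o_j^2 / tau_n_j - w^2 tau_o_j^2).
    The same functional form applies to the volume viscosity zeta(omega)."""
    w = np.asarray(omega, dtype=complex)[..., None]
    num = 1.0 + 1j * w * tau_n
    den = 1.0 + 1j * w * tau_o**2 / tau_n - w**2 * tau_o**2
    return np.sum(amp * num / den, axis=-1)

# placeholder parameters for two modes (not the fitted values of Table <ref>)
amp = np.array([0.4e-3, 0.3e-3])       # Pa s
tau_n = np.array([1.0e-12, 0.1e-12])   # s
tau_o = np.array([0.5e-12, 0.05e-12])  # s
omega = 2 * np.pi * np.logspace(10, 14, 200)   # roughly 0.01 - 100 THz
eta_fit = multimode_viscosity(omega, amp, tau_n, tau_o)
```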
For high-frequencies around 100 THz, the fitting functions' real part of both viscosity spectra deviate from the MD data, where according to the models in Eq. (<ref>) and (<ref>) the real part scales with ∼ω^-4 (see dashed black lines). These discrepancies are not a simulation time step issue (Appendix <ref>), but due to short comings of the fitting function <cit.> and occur at frequencies where molecular vibration of real water happens <cit.> which are not included in our rigid water models and are not of concern in this work. §.§ Frequency-dependent friction of a sphere from hydrodynamic theory including frequency-dependent viscosities We insert the shear and volume viscosity spectra from the SPC/E water model MD simulations in Fig. <ref> into Eq. (<ref>) to compute the hydrodynamic friction Γ^hyd(ω) of a sphere. To obtain a feeling for the influence of slip and sphere size, we in Fig. <ref> show Γ^hyd(ω) for different values of the sphere radius a and slip coefficient b̂=b/a (solid lines). Note that the results are normalized by 6π a η_0 with η_0 = ∑_jη_0,j = 0.70 mPa s, which is the steady-state friction for zero slip. For small radii ∼ a = 10^-10 m, the friction Γ^hyd(ω) in Fig. <ref> exhibits similar features as the frequency-dependent shear viscosity in Fig. <ref> and in particular shows a peak around 7 THz, indicated by vertical dashed lines in Fig. <ref>. This behavior is abscent if a constant shear viscosity is assumed, as shown in Appendix <ref>. The slip length b has a rather mild effect on the low-frequency friction, as it mostly modulates the absolute value. As ω→ 0, the real part goes to 6πη_0 a for b̂→ 0 and to 4πη_0 a for b̂→∞ <cit.>, as indicated by horizontal red and green bars to the left in Fig. <ref>. The real part of the friction functions does not decay to zero as ω→∞ but instead reaches a plateau, the value of which depends on whether the slip parameter is zero or not. For radii a > 10^-10, the imaginary part interestingly changes its sign from negative to positive values. In Appendix <ref>, we discuss the asymptotic behavior of the friction function for low and high frequencies. We observe that the imaginary part switches its sign again, which is not visible in the frequency range shown in Fig. <ref>. We compare the results with different approximations of the full hydrodynamic theory. Short-dashed lines in Fig. <ref> denote the limiting scenario of infinity sound velocity c→∞, which represents the friction in an incompressible fluid. We observe distinct deviations from the full theory in the real and imaginary parts, which increase with increasing frequency. The high-frequency scaling depends strongly on whether we have a finite or a vanishing slip coefficient. In the incompressible case, the imaginary part diverges with ≃ρ_0 a^2 ω/(9η_0), corresponding to the so-called added mass term in the force acting on the sphere (see Appendix <ref> for details). This term has been a long-standing issue in literature, since it causes a discontinuity in the initial value of the velocity autocorrelation function C^vv(t) from the equipartition theorem, i.e., C^vv(0)=k_BT/m, to k_BT/(m + m_0), where m_0 = 2/3πρ_0a^3 is the added mass of the half of the displaced fluid. The added mass term vanishes in the full hydrodynamic theory with compressibility, in agreement with previous works <cit.>. Dotted lines in Fig. <ref> denote the incompressible case with a constant shear viscosity, i.e., η = η_0. 
The friction, in this case, significantly differs from the case of frequency-dependent shear viscosity, in particular the distinct peak around 7 THz is absent. The long-dashed lines represent the case c→∞ and α→ 0, where the hydrodynamic friction converges to the so-called generalized Stokes-Einstein relation (GSER) Γ^hyd(ω) = 6πη(ω) a 1+2b̂/1+3b̂, which is, in general with the additional approximation b → 0, widely used in the context of rheological theory <cit.>. As seen from Eq. (<ref>), the GSER prediction is proportional to the shear viscosity spectra but differs significantly and from the full hydrodynamic theory, in particular for larger radii and in the high-frequency regime. In Fig. <ref>, we analyze the dependence of the hydrodynamic friction on the volume viscosity ζ(ω). We compare the hydrodynamic friction for a finite frequency-independent volume viscosity ζ_0 = ∑_jζ_0,j = 1.69 mPa s (dashed lines), for vanishing volume viscosity ζ_0 = 0, corresponding to the Stokes hypothesis (dotted lines) and for the frequency-dependent volume viscosity extracted from MD simulations in Fig. <ref> (solid lines). For large frequencies, we see that the real part of the friction diverges for constant volume viscosity but goes to a constant when using the full frequency-dependent volume viscosity from MD simulations or a vanishing volume viscosity (Appendix <ref>). This shows that compression effects dominate the friction at high frequencies if an erroneous constant volume viscosity is assumed. Remarkably, the predicted friction function in the low-frequency regime changes only marginally if, instead of the full frequency-dependent volume viscosity, a vanishing volume viscosity is assumed, demonstrating that the Stokes assumption <cit.>, i.e., the neglect of volume viscosities, is a very accurate approximation. However, the deviation between both cases increases for higher frequencies. §.§ Comparison of the hydrodynamic and GLE friction for water and the hydrodynamic tail In the following, we compare the friction Γ(t), defined by the GLE in Eq. (<ref>) and extracted from the MD simulations for SPC/E water (Appendix <ref>) with the hydrodynamic prediction. In Fig. <ref>(a, b) we compare the real and imaginary parts of Γ_+(ω) from the GLE (circles) with predictions from the hydrodynamic Eq. (<ref>) using different approximations: the green line shows Γ^hyd(ω) using the frequency-dependent shear and volume viscosities extracted from MD simulations, the yellow line shows Γ^hyd(ω) using the frequency-dependent shear viscosity η(ω) but neglecting compressibility, i.e., c →∞, and shear-wave effects, i.e., α→ 0, which corresponds to the GSER in Eq. (<ref>), the blue lines show the hydrodynamic friction using a Maxwell model for the shear viscosity, i.e., η(ω) = η_0/(1+iωτ) and vanishing volume viscosity ζ = 0. We compare the bare MD result for the memory kernel (red circles) with the result obtained by subtracting the effect of periodic boundary conditions (PBC) <cit.> (black circles, Appendix <ref>). Note that here we show results from an extended MD simulation of SPC/E water compared to the results shown in Fig. <ref> (Appendix <ref>), for the purpose of reducing statistical noise in the long-time behavior. The friction Γ_+(ω) exhibits the same features as the hydrodynamic prediction Γ^hyd(ω) from Eq. (<ref>) (green). Especially a peak around 6-8 THz is visible in both with a slight shift in frequency space. 
Here, we use a = 141.36 pm and b̂ = 0.72, which we obtain from fitting the MD results in the frequency domain, as described in Appendix <ref>. The prediction using the full frequency-dependent shear and volume viscosities (green) agrees rather nicely with the friction directly extracted from the GLE, though. Deviations between the MD data and the green lines become noticeable for frequencies above 10 THz, where the hydrodynamic prediction converges to a plateau in the real part and the MD data decay to zero. These deviations are not unexpected <cit.> since standard hydrodynamic theory can not correctly describe the local interaction of the sphere with its environment for high frequencies which are mediated by smooth intermolecular forces <cit.> and that are not described accurately by an abrupt boundary condition at the spherical surface. Thus, we find that the friction can be predicted very well, including complex shear and volume viscosity models, below a frequency of 10 THz. The shear viscosity spectrum gives rise to the resonant feature in the friction kernel Γ_+(ω) around 7 THz, which is missed by the Maxwell model used in previous models (blue line) <cit.>. The residual discrepencies below 10 THz could be caused by a modified viscosity near the particle surface since hydration shells may contribute to deviations of the macroscopic shear viscosity near the molecule. In Appendix <ref>, we show that the agreement between hydrodynamic theory and MD is even better for a simpler system such as a Lennard-Jones fluid. Interestingly, the good agreement for the low-frequency behavior in the friction function assumes non-negligible slip b ≠ 0. In Appendix <ref>, we discuss the influence of the radius a and the slip length b on the hydrodynamic friction. The fitted values we obtain for the sphere radius a = 141.36 pm and the slip length b = 102.1 pm (Appendix <ref>) are in a realistic range around 1 Å. We obtain a water molecule mass of m ≈ 3·10^-26 kg from the equipartition theorem, i.e., m C^vv(0) = k_BT. For a density of ρ_0 ≈ 10^3 kg/m^3, the estimate of the hydrodynamic radius accounting to m = 4/3πρ_0 a^3 is a ≈ 192.76 pm, in rough agreement with the estimated radius. The estimated radius and slip length are comparable with results of a ≈ 0.15 nm and b ≈ 0.10 nm obtained from determining the translational and rotational diffusion constants for SPC/E water <cit.>. In an unbounded fluid, hydrodynamic backflow effects lead for long times to a power-law decay of the memory kernel as lim_t →∞Γ^hyd(t) ≈ - 3a^2√(πη_0ρ_0)t^-3/2 <cit.>, which follows from our expression of the hydrodynamic friction in Eq. (<ref>) by assuming negligible slip, i.e., b→ 0, and vanishing compressibility, i.e., c →∞ (Appendix <ref>). However, previous MD simulations of solute particles in liquid environments <cit.> did not observe the predicted tail, rather showing a positive sign or a different power law behavior. Here we use hydrodynamic theory and finite-size effects to explain the absence of the tail, the influence of compressibility, slip, and frequency-dependent viscosity on the hydrodynamic tail has already been discussed in <cit.>. Since, as we show in Appendix <ref>, compressibility does not influence the hydrodynamic tail but is only relevant on intermediate time scales, we assume λ→ 0. As shown in Appendix <ref>, for low frequencies the hydrodynamic memory kernel in Eq. (<ref>) can be expanded as Γ^hyd(ω)/6πη_0a≃ 1+2b̂/1+3b̂ + a(1+2b̂)^2√(iωρ_0/η_0)/√(2)(1+3b̂)^2 + 𝒪(ω^3/2). 
We see that the ω^1/2-term, which is responsible for the t^-3/2 power-law decay, is rescaled by the slip length b̂. Higher-order terms are influenced by the slip length b̂ and the model we choose for the shear viscosity η(ω). By inverse Fourier transformation, one obtains for long times lim_t →∞Γ^hyd(t) ≈ - 3a^2 (1+2b̂)^2/(1+3b̂)^2√(πη_0ρ_0)t^-3/2, shown as a black dashed line in Fig. <ref>(c). The long-time behavior of the water memory kernel is governed by a competition between the hydrodynamic tail and the long-time behavior of the frequency-dependent shear viscosity we use. The power-law tail in the MD data only agrees with Eq. (<ref>) if we subtract the masking finite-size effects (black circles). The uncorrected MD data (red circles) are dominated by PBC effects and do not exhibit the long-time tail in Eq. (<ref>) at time scales readable with MD simulations <cit.>. §.§ Long-time tail of velocity autocorrelation function The velocity autocorrelation function C^vv(t) (VACF) of a solute particle was used in various studies to compare hydrodynamic and stochastic theories <cit.>. The GLE in Eq. (<ref>) relates the memory kernel and the VACF by a Volterra equation (Appendix <ref>), which can be written as (derived in Appendix <ref>) C_+^vv(ω) = i ω k_BT/i ωΓ_+(ω) - m ω^2. In Fig. <ref>, we compare predictions according to Eq. (<ref>) using hydrodynamic theory with MD results for the VACF. As for the memory kernel, we achieve good agreement in the frequency domain up to frequencies of 10 THz with the full theory in Eq. (<ref>) (green line). The hydrodynamic tail of the VACF for long times (Appendix <ref>) lim_t →∞ C^vv(t) ≈k_BT √(ρ_0)/12 (πη_0 t )^-3/2, agrees with the prediction from hydrodynamic theory and the MD data with finite-size correction (black circles) for long times > 10 ps in Fig. <ref>(c). These findings in the VACF support that the best prediction of the solute particle dynamics is achieved by including the macroscopic viscosity spectra of the fluid into the hydrodynamic theory. §.§ Results for a methane molecule moving in water In Fig. <ref>, we show the memory kernel Γ_+(ω) for a methane molecule in SPC/E water extracted from MD simulations, again with and without PBC and finite-size correction. The data is taken from the work of Kowalik et al. <cit.>. The methane molecule in these simulations is modeled as a monoatomic Lennard-Jones sphere, which makes it possible to compute the friction using Eq. (<ref>) for a spherical particle. In Fig. <ref>(a, b), we see that the real and imaginary parts of the Fourier transformed friction Γ_+(ω) do not show the same features as the prediction from the hydrodynamic equations (green and yellow). In particular, the pronounced peak in the real part at around 7-8 THz arising from the shear viscosity of water is absent, and the memory kernel from MD in the time domain in Fig. <ref>(c) does not contain any noticeable oscillations, in contrast to the hydrodynamic prediction. Thus, we observe considerable discrepancies between the friction extracted directly from MD simulations and the hydrodynamic prediction for a hydrophobic molecule in a water environment. Constant slip effects on the methane surface cannot explain these differences. Rather, the local shear viscosity near the methane molecule seems to differ from the bulk shear viscosity. 
This is in line with a recent analysis of tracer-particle dynamics in hydrogels, where the assumption of a thin interfacial shell with viscoelastic properties different from bulk would reconcile the experimental measurements with hydrodynamic predictions <cit.>. §.§ Frequency-dependent surface slip from MD simulations The observation that the hydrodynamic theory does not accurately predict the friction kernel extracted directly from the MD data above 10 THz in Fig. <ref> and <ref>, and that a good agreement is rather achieved with the simplified GSER model, is a clear sign that the hydrodynamic theory used by us misses key aspects of molecular friction. A reasonable extension is to assume the slip coefficient b, which follows from the Navier boundary condition at the spherical surface (Appendix <ref>) to be frequency-dependent as well, i.e., b = b(ω). Note that a frequency-dependent surface friction coefficient, i.e., Λ(ω) = η(ω)/b(ω), has been assumed in many works <cit.>. To obtain the slip spectrum, we assume the frequency-dependent memory kernel Γ_+(ω) to equal the hydrodynamic expression in Eq. (<ref>) and solve for the slip length b(ω). The results we show in Fig. <ref> for different radii a are, therefore, the slip spectra that lead to a perfect agreement between the friction from the MD data we show in Fig. <ref> and <ref> and the hydrodynamic prediction, and exhibit interesting features. For example, we observe regions where the real part of the slip coefficient is negative, which would correspond to a negative real part of the surface friction coefficient Λ(ω), contrary to standard linear response theory. Negative slip coefficients indicate a local shear viscosity around the spherical particle different from the bulk viscosity <cit.>. For the water particle's friction, denoted as solid lines, a distinct positive peak in the real part around 0.2 THz is visible, which is in the region of the shear viscosity mode I of water (Appendix <ref>) and relates to an induced slip effect due to hydrogen network topology changes. The real part of the slip length of a water molecule at the prominent peak of shear viscosity mode III around 7-8 THz is negative, suggesting that hydrogen stretching modes cause local viscosities around the tracer particle that are higher than the macroscopic shear viscosity. For methane, denoted by dashed lines in Fig. <ref>, the real part is negative in nearly the whole frequency range. This agrees with the fact that methane in water is surrounded by a highly-ordered solvation and presumably high-viscosity shell structure <cit.>. § CONCLUSIONS We demonstrate that predictions from macroscopic hydrodynamic theory are in good agreement with the friction coefficient directly extracted via the GLE from MD simulations of a single water molecule in liquid water if the frequency dependence of the shear and volume viscosities is properly accounted for. This establishes the long-sought link between macroscopic fluid hydrodynamics and the molecular friction in a fluid. We also show that it is important to include the frequency-dependent volume viscosity of the fluid that has the asymptotic behavior ζ→ 0 as ω→∞. Interestingly, the agreement between the friction from the molecular water motion (obtained via the GLE) and the hydrodynamic prediction is achieved without including spatial or wave-vector dependencies of the viscosity functions. Nevertheless, we cannot exclude the possibility that such spatial dependencies are present, especially at high frequencies. 
We have mostly dealt with the homogeneous case, where the moving molecule is identical to the surrounding fluid molecules. In contrast, we observe pronounced discrepancies between the friction obtained from hydrodynamic theory and simulations for the inhomogeneous case of a methane molecule moving in water, especially at frequencies above 10 THz. By calculating the frequency-dependent surface slip from the MD simulations, we conclude that our current hydrodynamic model neglects the modified viscous properties of the water solvation layer around a methane molecule. It would therefore be desirable to develop inhomogeneous hydrodynamic models for the friction of host molecules in liquids in the presence of solvation shells that exhibit viscosity properties that are different from the bulk. § ACKNOWLEDGEMENTS This work was supported by the Deutsche Forschungsgemeinschaft (DFG) via Grant No. SFB 1449 'Dynamic Hydrogels at Biointerfaces', Project A02, and Grant No. SFB 1114 'Scaling Cascades in Complex Systems', Project C02. The authors would like to acknowledge the HPC Service of ZEDAT, Freie Universität Berlin, for providing computing time. § DERIVATION OF THE FRICTION OF A SPHERE FROM THE TRANSIENT STOKES EQUATION The Navier-Stokes equation reads ∂ρ (r,t) v_i (r,t)/∂ t + ∇_j ρ(r,t) v_i(r,t) v_j(r,t) = F_i(r,t) + ∇_j σ_ij(r,t), where i,j ∈{x,y,z} and doubly appearing indices are summed over. The mass density ρ, velocity v_i and volume force F_i are functions of time t and position r. The symmetric stress tensor σ_ij(r,t) consists of a diagonal pressure contribution and components that depend on velocity gradients, i.e., ∇_j v_i(r,t) <cit.>. For a linear, homogeneous, isotropic compressible fluid, it is on the linear level given by σ_ij(r,t) = - P(r,t)δ_ij + ∫∫[ η(|r'|,t')(∇_i v_j(r- r',t-t') + ∇_j v_i(r- r',t-t')) + δ_ij(ζ(|r'|,t') - 2/3η(|r'|,t')) ∇_k v_k(r- r',t-t')] dr' dt', where P is the pressure and η and ζ are the shear and volume viscosity kernels, which in general are time- and space-dependent. If the viscosity kernels decay on length- and time scales that are small compared to those on which ∇_j v_i(r,t) varies, one can approximate the stress tensor in Eq. (<ref>) as σ_ij (r,t) ≈ - P(r,t) δ_ij + ( ζ_0- 2/3η_0) δ_ij∇_k v_k(r,t) + η_0 (∇_i v_j(r,t) + ∇_j v_i(r,t) ), where η_0 and ζ_0 are the time- and space-integrated viscosities η_0 = ∫∫η(|r'|,t') dr' dt', ζ_0 = ∫∫ζ(|r'|,t') dr' dt'. We define a fluid with a stress tensor given by Eq. (<ref>) as a Newtonian fluid, i.e., the viscosities are time- and space-dependent and the stress tensor is linear in the velocity gradients, but will explicitly consider viscoelastic fluids in this work, i.e. the viscosity depends on the history of the velocity gradients (Eq. (<ref>)). If we neglect the non-linear term in the Navier-Stokes equation (second term on the left-hand side in Eq. (<ref>)), which is justified for low Reynolds numbers, and use the expression of the stress tensor in Eq. (<ref>), we arrive at the linear transient Stokes equation ρ(r,t) ∂ v_i(r,t)/∂ t = F_i(r,t) - ∇_i P(r,t) +∫∫(1/3η(|r'|,t') + ζ(|r'|,t')) ∇_i∇_k v_k(r- r',t-t')dr' dt' + ∫∫η(|r'|,t') ∇_k ∇_k v_i(r- r',t-t')dr' dt'. The frequency-dependent friction of a sphere can be calculated using the Green's function of the Stokes equation in Eq. (<ref>) <cit.>. To derive the Green's function, we take the divergence of Eq. 
(<ref>) and obtain ∇_i^2 P(r,t) - ∂^2 P(r,t)/c^2 ∂ t^2 = ∇_i F_i(r,t) +∫∫(4/3η(|r'|,t') + ζ(|r'|,t')) ∇_i^2 ∇_k v_k(r - r',t-t')dr' dt', where we used the linearized continuity equation, i.e., ρ_0 ∇_i (∂ v_i/∂ t) = - ∂^2 ρ /∂ t^2, and the linearized isentropic equation of state ρ - ρ_0 = c^-2(P-P_0), where the speed of sound is denoted by c, from which follows that ∂^2 ρ /∂ t^2 = c^-2∂^2 P /∂ t^2. Fourier-transforming Eqs. (<ref>) and (<ref>) we obtain iωρ_0v_i(k,ω) = F_i(k,ω) - i k_i P(k,ω) - [η(k,ω)/3 + ζ(k,ω)]k_ik_jv_j(k,ω) - η(k,ω)k_jk_jv_i(k,ω), and (ω^2/c^2 - k_ik_i)P(k,ω) = i k_iF_i(k,ω) - i[4 η(k,ω)/3 + ζ(k,ω)]k_ik_ik_jv_j(k,ω). Note that the viscosity kernels η and ζ are both single-sided in the time domain, i.e., η(r,t) = 0 and ζ(r,t) = 0 for t < 0. We next assume that both viscosities decay quickly in space, so that their Fourier transforms become independent of k, i.e., η(k,ω) →η(ω) and ζ(k,ω) →ζ(ω). Solving Eq. (<ref>) for P and inserting into Eq. (<ref>), we arrive at an equation for the velocity as a function of the external force <cit.>. To solve this equation, we decompose the velocity into the transverse and longitudinal parts, i.e., v_i(k,ω) = v_i^T(k,ω) + v_i^L(k,ω), which fulfill k_i v_i^T(k,ω) = 0 and k_iv_i(k,ω) = k_i v_i^L(k,ω). In Fourier space, the Green's function G_ij of the velocity is defined by v_i(k,ω) = G_ij(k,ω) F_j(k,ω), and is a sum of transverse and longitudinal components, i.e., G_ij(k,ω) = G_ij^T(k,ω) + G_ij^L(k,ω). The transverse part describes the velocity field in the incompressible case and accounts for shear effects. It is given by G_ij^T(k,ω) = (δ_ij - k_ik_j/k^2)/η(ω)/k^2 + α^2(ω), where the length scale α^-1 is defined as α^2(ω) = i ωρ_0/η(ω). The longitudinal component describes compression effects and reads G_ij^L(k,ω) = k_ik_jλ^2(ω)/η(ω) α^2(ω)k^2(k^2+λ^2(ω)). The length scale λ^-1 is defined as λ^2(ω) = i ωρ_0/4η(ω)/3 + ζ(ω) - iρ_0c^2/ω. The full frequency-dependent Green's function G_ij(r,ω) = G_ij^T(r,ω) + G_ij^L(r,ω) in real space reads G_ij(r,ω) = 1/4πηα^2r^3{δ_ij([1+rα+r^2α^2]e^-rα - [1+rλ]e^-rλ) -3r̂_ir̂_j([1+rα+r^2α^2/3]e^-rα - [1+rλ+r^2λ^2/3]e^-rλ)}. The asymptotic behavior of the Green's function, which is discussed in detail in <cit.>, strongly depends on the inverse length scales α = α(ω) and λ = λ(ω). Note that in <cit.> a different definition of the Fourier transformation is used and frequency-independent viscosities are assumed. To calculate the friction acting on an oscillating sphere, we have to compute the Green's function for the stress tensor, denoted by σ_ijk, defined as σ_ij(r,ω) = σ_ijk(r,ω) F_k(r,ω). Without derivation and referring to <cit.>, the stress tensor Green's function is given by σ_ijk(r,ω)/η(ω) = G_ijk(r,ω) + G_jik(r,ω) + (α^2/λ^2 - 2)G_llk(r,ω)δ_ij, where ∇_k G_ij = G_kij. To obtain the fluid velocity around a sphere with radius a, we use a standard singularity Ansatz <cit.> G_ij^sp(r,ω) = (C_0 + C_2 a^2 ∇_k ∇_k) G_ij(r,ω), where the velocity field around the sphere follows as v_i^sp(r,ω) = F_j(ω)G_ij^sp(r,ω), with F_j being a force source. We choose the coefficients C_0 and C_2 such that the boundary conditions on the spherical surface are satisfied. If we assume a finite slip at the spherical surface, we can split the boundary conditions at the surface into a kinematic and a Navier boundary condition. The kinematic boundary condition at |r| = a can be written as 6πη(ω)ar̂_iG^sp_ij(ω) = r̂_j, which defines the sphere velocity V_i^sp(ω) = F_i(ω)/6πη(ω)a. 
Note that only in the zero-frequency limit the source force F_i(ω) equals the actual force on the sphere. The Navier boundary condition for the tangential stress at |r| = a reads b[∇_k G_ij^sp(ω) + ∇_i G_kj^sp(ω)]r̂_kℒ_li = [G_ij^sp(ω) - δ_ij/6πη(ω)a]ℒ_li, where b is the slip length and we define the projection operator as ℒ_li = (δ_li - r̂_lr̂_i). The final result for G_ij^sp(r,ω) reads, using Eq. (<ref>) G_ij^sp(r,ω) = 1/4πη(ω)α^2 r^3 · {δ_ij (E_1[1+rα+r^2α^2]e^-rα - E_2[1+rλ]e^-rλ) - 3r̂_ir̂_j(E_1[1+rα+r^2α^2/3]e^-rα - E_2[1+rλ+r^2λ^2/3]e^-rλ) }, with the coefficients E_1 = 2/3e^α̂(1+2b̂)(3+3λ̂+λ̂^2)/W, E_2 = 2/3e^λ̂(1+2b̂)(3+3α̂+α̂^2) + b̂α̂^2(1+α̂)/W, and W = (2+2λ̂+λ̂^2)(1+b̂(3+α̂))+(1+α̂)(1+2b̂)λ̂^2/α̂^2. We define the dimensionless slip length, b̂ = b/a, and the dimensionless decay constants α̂ = aα and λ̂ = aλ. The corresponding friction function Γ^hyd(ω) is given by δ_ijΓ^hyd(ω) = F_i^sp(ω)/V_j^sp(ω) = -6πη(ω) a∫ d^3 r r̂_k σ_kijδ(|r|-a), where V_j^sp is the frequency-dependent velocity amplitude of the sphere. For the hydrodynamic force F_i^sp(ω) on a spherical particle, we use the projection of the stress tensor on the surface and integrate it over the sphere surface. Using Eq. (<ref>), and the derivative of G_ij^sp(r,ω) in Eq. (<ref>), we obtain the friction function of the spherical particle Γ^hyd(ω) = 4πη(ω)a/3 W^-1{(1+λ̂)(9+9α̂+α̂^2)(1+2b̂) + (1+α̂)[2λ̂^2(1+2b̂)+b̂α̂^2(1+λ̂)]}, where we use the identities ∫ d^3r δ_ijδ(|r|-a) = 4π a^2 δ_ij and ∫ d^3r r̂_i r̂_j δ(|r|-a) = 4π a^2 δ_ij/3. Assuming negligible slip, i.e., b→ 0, and vanishing compressibility, i.e., λ→ 0, we have Γ^hyd(ω) = 6π a η(ω) (1+aα+a^2α^2/9). Thus, the frequency-dependent hydrodynamic force on a spherical particle, i.e., F_i^sp(ω) = - Γ^hyd(ω) V_i^sp(ω), is in the same limit given by F_i^sp(ω) = -6π a η(ω) V_i^sp(ω) (1+aα+a^2α^2/9). § FLUID MOMENTUM AROUND A MOVING SPHERE From the velocity field, i.e., v_i^sp(r,ω) = F_j(r,ω)G_ij^sp(r,ω), and the expression in Eq. (<ref>), we can calculate the fluid momentum outside the moving sphere as p_i(ω) = ρ_0 ∫_|r|>av_i^sp(r,ω) d^3r. We assume that the force source is oscillating along the x-direction, i.e., F(ω) = (F(ω),0,0)^T, so that the momentum points in the x-direction. The volume integral in Eq. (<ref>) involves the angular integrals ∫_|r|>a d^3r δ_ij = ∫_a^∞ r^2 dr ∫_0^π dθ sinθ∫_0^2π d Φ δ_ij, = 4 πδ_ij∫_a^∞ r^2 dr, ∫_|r|>a d^3r r̂_i r̂_j = ∫_a^∞ r^2 dr ∫_0^π dθ sinθ cos^2 θ∫_0^2π d Φ δ_ij, = 2πδ_ij∫_a^∞ r^2 dr ∫_-1^1 du u^2, = 4/3πδ_ij∫_a^∞ r^2 dr. In the derivation of the flow field around a sphere in Appendix <ref>, we use the kinematic boundary condition, i.e., 6π a ηr̂_i G_ij^sp = r̂_j for |r| = a , in Eqs. (<ref>) and (<ref>) and define the sphere velocity V_i^sp(ω) = F_i(ω)/6πη(ω)a, such that in the low-frequency (steady) limit and without slip, the source force equals the actual force on the sphere. Using this definition, the identities in Eq. (<ref>) and the expression in Eq. (<ref>) and inserting them into Eq. (<ref>), we arrive at p_i(ω) = 6πρ_0/α^2 a V_i^sp(ω) ∫_a^∞1/r dr · {δ_ij (E_1[1+rα+r^2α^2]e^-rα - E_2[1+rλ]e^-rλ) - δ_ij(E_1[1+rα+r^2α^2/3]e^-rα - E_2[1+rλ+r^2λ^2/3]e^-rλ) }, = 6πρ_0/α^2 a V_i^sp(ω) ∫_a^∞ dr [ 2/3 E_1 r α^2 e^-rα + 1/3 E_2 r λ^2 e^-rλ], = - 6πρ_0/α^2 a V_i^sp(ω) [2/3 E_1 [e^-rα(rα + 1)]|_a^∞ + 1/3 E_2 [e^-rλ(rλ + 1)]|_a^∞], = 6πρ_0/α^2 a V_i^sp(ω) [2/3 E_1 [e^-aα(aα + 1)] + 1/3 E_2 [e^-aλ(aλ + 1)] ], For b→ 0 and λ→ 0, the constants E_1 and E_2 in Eqs. 
(<ref>) and (<ref>) become E_1 = e^aα and E_2 = (1+aα + a^2α^2/3), and we obtain for the momentum p_i(ω) = 6πρ_0/α^2 a V_i^sp(ω) (1+aα+a^2α^2/9). The force is given by F_i = iωp_i, which leads to F_i(ω) = 6π a η(ω) V_i^sp(ω) (1+aα+a^2α^2/9), which is identical to the result obtained in Eq. (<ref>) from integrating the surface force over the oscillating sphere. Thus, the net frequency-dependent momentum of the fluid inside the sphere has to vanish, and we have no added mass due to the motion of the liquid inside the sphere. It follows that the friction Γ^hyd calculated from hydrodynamic theory equals the friction Γ(t) extracted from single-particle trajectories using the GLE, and no fluid mass correction has to be applied. § SIMULATION SETUP We perform all MD simulations using the GROMACS simulation package <cit.> (version 2020-Modified). For water, we use the SPC/E <cit.> and TIP4P/2005 <cit.> rigid water models. We pre-equilibrate the system in an NPT ensemble (P = 1 bar) using a Berendsen barostat <cit.> set to 1 atm. For production runs, we perform all simulations in the NVT ensemble with a temperature of T = 300 K, controlled with a velocity rescaling thermostat <cit.>. For electrostatics, we use the particle-mesh Ewald method <cit.>, with a cut-off length of 1 nm. We allow simulations to run for 600 ns, using integration time steps of 1 fs. We perform simulations in a 3.5616 nm cubic box with 1250 water molecules. We use the trajectories of two traced water particles for the memory kernel extraction. For the results we show in Appendix <ref>, we additionally run simulations with integration time steps of 2 and 4 fs. For the results we show in Fig. <ref>, we run an MD simulation of SPC/E water with a total length of 1 μ s and a time step of 2 fs, and use the trajectories of 15 water particles for the memory kernel extraction. For the Lennard-Jones (LJ) fluid, we simulate a system at T=92 K with a box length of 5 nm and 2744 LJ particles. For the particles, we took the Lennard-Jones parameters of argon of the GROMOS53a6 force field <cit.> (σ = 3.410 Å, ϵ = 0.996 kJ/mol and a cutoff radius of 2.5σ). Using LJ units, the systems are at T^* = 0.77 and P^* = 0.04 corresponding to the liquid phase <cit.>. The system is first equilibrated in the NPT ensemble (P = 17 bar) followed by a production run in the NVT ensemble for 10 ns with a time step of 2 fs. We use the trajectories of 50 LJ particles for the memory kernel extraction. § CALCULATION OF FREQUENCY-DEPENDENT SHEAR AND VOLUME VISCOSITY SPECTRA FROM MD SIMULATIONS The shear viscosity kernel η(t) is given by the trace-free part of the stress tensor by the Green-Kubo relation <cit.> η(k = 0, ω) = ∫_0^∞dt η(t) e^-iω t, = V/6 k_BT∫_0^∞e^-iω t∑_i≠ j⟨Π_ij(t) Π_ij(0)⟩ dt, where V is the volume of the fluid. We define the trace-free part of the stress tensor σ_ij as Π_ij = σ_ij - δ_ij1/3∑_kσ_kk, where i,j ∈{x,y,z}. For the computation of the shear viscosity spectrum, we use Eq. (<ref>) by calculating the time-correlation functions of the stress tensor entries and applying the half-sided Fourier transform. Employing the Green-Kubo relations, we use the fluctuations of the instantaneous pressure from its average value ⟨P⟩, i.e., δ P(t) = P(t) - ⟨ P ⟩, to compute the volume viscosity kernel ζ(t). P(t) is computed from the trace of the stress tensor, i.e., P(t) = 1/3∑_kσ_kk(t). Using the half-sided Fourier transformation, we compute the volume viscosity spectrum via <cit.> ζ(k = 0, ω) = ∫_0^∞dt ζ(t) e^-iω t, = V/k_BT∫_0^∞e^-iω t⟨δ P(t) δ P(0) ⟩ dt. 
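A minimal sketch of how Eqs. (<ref>) and (<ref>) can be evaluated from sampled stress and pressure time series is given below. It is our own illustration with our own function names; the correlation functions are transformed here by a plain trapezoid sum, whereas in practice we use the FFT route described next:

```python
import numpy as np

def acf(x, n_max):
    """One-sided autocorrelation <x(0) x(t)> for lag indices 0 .. n_max-1."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / (n - k) for k in range(n_max)])

def half_sided_ft(corr, dt, omega):
    """Half-sided Fourier transform  int_0^inf dt corr(t) exp(-i w t)."""
    t = dt * np.arange(len(corr))
    return np.array([np.trapz(corr * np.exp(-1j * w * t), t) for w in omega])

def viscosity_spectra(Pi_xy, Pi_xz, Pi_yz, dP, V, kBT, dt, omega, n_max):
    """Green-Kubo estimates of eta(omega) and zeta(omega).
    Pi_ab: off-diagonal stress components (Pa); dP: pressure fluctuations
    delta P(t) = P(t) - <P> (Pa); V: box volume (m^3); kBT: thermal energy (J)."""
    c_shear = np.mean([acf(p, n_max) for p in (Pi_xy, Pi_xz, Pi_yz)], axis=0)
    c_vol = acf(dP, n_max)
    eta = V / kBT * half_sided_ft(c_shear, dt, omega)   # equivalent to Eq. (<ref>) with all six i != j terms
    zeta = V / kBT * half_sided_ft(c_vol, dt, omega)    # Eq. (<ref>)
    return eta, zeta
```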
For the Fourier transformation of the viscosity data, and the memory kernel data as well, we use the FFT algorithm implemented in NumPy v. 1.18.5 <cit.>, where we assume the input signal x(t) to be single-sided, i.e., x(t<0) = 0. All data in the time domain are truncated at 10 ps. Note that the FFT in NumPy uses the opposite convention for the Fourier transformation definition we use here. § FITTING OF THE VISCOSITY DATA We fit the real part of the Fourier-transformed viscosity spectra η(ω) and ζ(ω) extracted from the MD simulation by a combination of six and five exponential-oscillating components according to Eqs. (<ref>) and (<ref>), respectively, using the Levenberg-Marquardt algorithm implemented in scipy v. 1.4 <cit.>. The initial values for all η_0,j, τ_n,j and τ_o,j are chosen suitably; the same for ζ_0,j, τ_v,j and τ_w,j. We constrain the parameter space to positive values. We filter the data set beforehand on a logarithmic frequency scale to reduce the overall data points to fit. We also weight the data exponentially so that the data for small frequencies become more important for the fit. After optimizing the parameters, we use them as initial parameters to peform the final fit of the data in the time domain. Here the input data are filtered on logarithmic time-domain scale but without exponential weighting. This allows us to fit the low- and high-frequency regimes very well at the same time. The obtained fit parameters are summarized in Appendix <ref>. § SUMMARY OF FITTING PARAMETERS empty § COMPONENTS OF THE SHEAR VISCOSITY FOR SPC/E WATER The fitting components according to Eq. (<ref>) and the fitting parameters in Table <ref> (Appendix <ref>) for the SPC/E water MD simulation are depicted in Fig. <ref>, together with the MD data and the total fit shown in Fig. <ref>. § FRICTION FROM MD SIMULATIONS WITH DIFFERENT TIME RESOLUTION In Fig. <ref>, we investigate the influence of the time resolution on the MD simulation, where we simulated SPC/E water for different time resolutions. The memory kernel, extracted as explained in Appendix <ref>, exhibits, besides numerical noise, no distinct differences between the different time resolutions. All important features we observe in the memory kernel for 1 fs are also visible for different time resolutions in Fig. <ref>. The same applies for the extracted viscosity spectra in Fig. <ref>, since they exhibit similar molecular features as discussed in the main text. The deviating behavior from the exponential-oscillatory fitting model starting around 30-60 THz seen for the viscosity spectra occurs at lower resolutions in Fig. <ref>(b), which rules out discretization problems in this frequency range. At higher frequencies, in the resolution limit regime, the real part data in Fig. <ref>(b) is dominated by noise, which means that we cannot make a statement about the actual high-frequency scaling. For the 1 fs data (black), the data points become unstable around 100 THz, which is a fifth of the maximal frequency of 500 THz. This suggests that our used FFT algorithm explained in Appendix <ref> is numerically unstable for the non-periodic data sets in this frequency regime, and data points in this regime should not be used for interpretation. § FRICTION OF A SPHERE FOR FREQUENCY-INDEPENDENT VISCOSITIES In Fig. <ref>, we show the calculated hydrodynamic friction of a sphere (Eq. (<ref>)) for constant shear viscosity η(ω) = η_0 for different sphere radii a. 
Note that we use a vanishing volume viscosity ζ = 0, for better comparability with the results in <cit.>. As already discussed in <cit.>, the friction sensitively depends on the slip length b and its real and imaginary parts both increase as ω→∞. Note that the high-frequency behavior of the imaginary part differs from the results in <cit.>, as we use a different definition of the Fourier transformation. We refer to <cit.> for a detailed discussion of these results, but note that the addition of frequency-dependent shear and volume viscosity significantly changes the friction as shown in Fig. <ref>. § LOW- AND HIGH-FREQUENCY SCALING OF THE HYDRODYNAMIC FRICTION OF A SPHERE In the following, we analyze the friction in Eq. (<ref>), without considering the limiting cases c→∞, η = η_0 or α→ 0. For the real part of the friction in Eq. (<ref>), we analytically obtain the asymptotic behavior (for finite c and b≠0), for shear viscosity and volume viscosity given by the models in Eqs. (<ref>) and (<ref>), as Re Γ^hyd(ω)/6πη_0a≃ω→ 0 1+2b̂/1+3b̂ + a(1+2b̂)^2√(ρ_0/η_0)/√(2) (1+3b̂)^2ω^1/2 + 𝒪(ω^3/2), ω→∞ if ζ = ζ_0 2a/9Φ(α_∞)^2/2λ_cω^1/2, otherwise 2a/9Φ(α_∞)^2/λ_∞ω^0, with the constants Φ = (∑_j = I^VIη_0,jτ_n,j/τ_o,j^2)/η_0, η_0 = ∑_j=I^VIη_0,j. Here, we introduced high-frequency limiting values for the inverse length scales α_∞, λ_∞ and λ_c. They are determined via α(ω→∞) ≡α_∞ = √(ρ_0η_∞) , λ_∞ = √(ρ_0Z_∞), λ_c = √(ρ_0/2∑_j = I^Vζ_0,j), where η_∞ and Z_∞ follow from the high-frequency limits of the shear and volume viscosities in Eqs. (<ref>) and (<ref>), with Z(ω) = 4η(ω)/3 + ζ(ω) - i ρ_0 c^2/ω η_∞ = 1/ (∑_j = I^VIη_0,jτ_n,j/τ_o,j^2), Z_∞ = 1/(∑_j = I^VI4η_0,jτ_n,j/3τ_o,j^2 + ∑_j = I^Vζ_0,jτ_v,j/τ_w,j^2 + ρ_0c^2). For ζ(ω)≠ζ_0, the real part of the friction Γ^hyd(ω) in Eq. (<ref>) converges for ω→∞ to a constant value depending on the steady-state viscosity constants and relaxation times. This stems from our choice of exponential-oscillatory models for shear and volume viscosity, where the real part of the components scales with ω^-4 and the imaginary part scales with ω^-1. The friction in the frequency domain differs only marginally for frequency-dependent volume viscosity and for vanishing volume viscosity, i.e., for ζ = 0. For ζ = ζ_0, the real part diverges. For ζ = 0, the friction has the same asymptotic behavior as for ζ(ω) in Eq. (<ref>), with modified constants. This is visible in Fig. <ref>. In the absence of slip, i.e., for b=0, the real part scales as Re Γ^hyd(ω)/6πη_0a≃ω→ 0 1 + a√(ρ_0/η_0)/√(2)ω^1/2+ 𝒪(ω^3/2), ω→∞ if ζ = ζ_0 2a/9Φ(α_∞)^2/2λ_cω^1/2, otherwise 2a/9Φα_∞(α_∞ + 2 λ_∞)/λ_∞ ω^0. Therefore, the plateau value for ω→∞ and frequency-dependent ζ depends on whether the slip length is zero or not, but the high-frequency scaling for ζ = ζ_0 is slip-independent. For completeness, the imaginary part of the friction function, for b≠ 0, has the following asymptotic scaling Im Γ^hyd(ω)/6πη_0a≃ω→ 0 a(1+2b̂)^2√(ωρ_0/η_0)/√(2) (1+3b̂)^2 + 𝒪(ω), ω→∞ if ζ = ζ_0 2/9aΦ(α_∞)^2/2λ_cω^1/2, otherwise - 2a/9Φ C_∞ω^-1. The constant C_∞ is given by C_∞ = 4 + 2/b̂- (α_∞)^2/(λ_∞)^2. Thus, we see that for vanishing volume viscosity, i.e., ζ(ω) → 0 as ω→∞, the imaginary part of the friction decays as ∼ω^-1 for high frequencies, but diverges with the same scaling as the real part for ζ = ζ_0. For the stick case (b→ 0), the imaginary part scales as Im Γ^hyd(ω)/6πη_0a≃ω→ 0 -a√(ωρ_0/η_0)/√(2) + 𝒪(ω), ω→∞ if ζ = ζ_0 2/9Φ(α_∞)^2 a/2λ_cω^1/2, otherwise - 2a/9Φα_∞(α_∞ - 4λ_∞)/(λ_∞)^2ω^-1.
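The zero-frequency constant in the expansion above, (1+2b̂)/(1+3b̂), interpolates between the familiar stick and perfect-slip limits of the Stokes friction; a short numerical check (assuming that b̂ denotes the slip length in units of the sphere radius) is:

import numpy as np

# Gamma(0) / (6 pi eta0 a) = (1 + 2 b_hat) / (1 + 3 b_hat), taken from the
# omega -> 0 expansion above; b_hat is assumed to be the reduced slip length b/a.
b_hat = np.array([0.0, 0.1, 0.5, 1.0, 10.0, np.inf])
with np.errstate(invalid="ignore"):
    prefactor = (1 + 2 * b_hat) / (1 + 3 * b_hat)
prefactor[np.isinf(b_hat)] = 2.0 / 3.0      # perfect-slip limit
print(prefactor)                             # 1.0 (stick) ... 2/3 (perfect slip)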
§ DERIVATION OF THE HYDRODYNAMIC TAIL Applying the inverse Fourier transformation in Eq. (<ref>) leads to the Boussinesq-Basset equation <cit.>. We decompose the total force in Eq. (<ref>) into three forces F_i,1^sp(ω) = -6π a η(ω) V_i^sp(ω), F_i,2^sp(ω) = -6π a^2 √(η(ω) ρ_0 i ω)V_i^sp(ω), F_i,3^sp(ω) = -2/3π a^3 ρ_0 i ωV_i^sp(ω). Assuming a frequency-independent shear viscosity, i.e., η(ω) = η_0, it is easily seen that in the time-domain, the first component is given by F_i,1^sp(t) = - 6π a η_0 V_i^sp(t), and the third component by F_i,3^sp(t) = - 2/3πρ_0 a^3 V_i^sp(t). To derive the expression for the second component, we use V_i^sp(ω) = V_i^sp(ω)/i ω, where the dot denotes the time derivative. For the expression F_i,2^sp(ω) = -6π a^2 √(η_0ρ_0/i ω)V_i^sp(ω) = -6π a^2 √(η_0ρ_0)g(ω)V_i^sp(ω), we can show that g(ω) in the time domain is given by g(t) = Θ(t)/√(π)t^-1/2 g(ω) = 1/√(π)∫_0^∞ e^-iω tdt/t^1/2, = 1/√(π i ω)∫_0^i ∞ e^-sds/s^1/2, = 1/√(π i ω)∫_0^∞ e^-sds/s^1/2, = 1/√(i ω), where we use a Wick rotation to arrive at Eq. (<ref>) from Eq. (<ref>). Applying the convolution theorem in the inverse Fourier transformation of the second force component, the full force is given by the Boussinesq-Basset equation F_i^sp(t) = - 6π a η V_i^sp(t) - 6a^2√(πη_0ρ_0)∫_0^t dt' V_i^sp(t')/√(t-t') - 2/3πρ_0 a^3 V_i^sp(t). The first term in Eq. (<ref>) is the steady Stokes drag. The third term is known as the added mass term, where m_0 = 2/3πρ_0 a^3. It originates since the accelerating sphere in the unsteady flow must move or deflect some surrounding fluid volume. The second term, including a convolution integral, describes the sphere's history of motion, also known as the Basset history force. Applying a partial integration on this term, we obtain F_i,2^sp(t) = 3a^2√(πη_0ρ_0)∫_0^t dt' (t-t')^-3/2 V_i^sp(t') + V_i^sp(t)f(0) - V_i^sp(0)f(t), where f(t) = θ(t) 6a^2√(πη_0ρ_0)t^-1/2. For long times, we find that the memory kernel from the hydrodynamic prediction contains a power law decay lim_t →∞Γ^hyd(t) ≈ - 3a^2√(πη_0ρ_0)t^-3/2, which is the famous hydrodynamic tail <cit.>. The Volterra equation in Eq. (<ref>) relates the long-time behavior of the memory kernel and the velocity autocorrelation function C^vv(t) (VACF). Using the expression in Eq. (<ref>), we find that for long times, the VACF scales with lim_t →∞ C^vv(t) ≈k_BT √(ρ_0)/12 (πη_0 t )^-3/2. § CALCULATION OF THE FREQUENCY-DEPENDENT FRICTION FROM SIMULATION TRAJECTORIES Various data-based methods to estimate the parameters of the GLE from experimental or simulation trajectories have been proposed <cit.>. A robust and computationally efficient technique to extract the memory kernel from given time series trajectories can be derived by multiplying Eq. (<ref>) for the position x with the initial velocity x(0). Taking the ensemble average leads to an equation involving correlation functions that can be calculated from the given trajectory <cit.>. By this, we obtain a Volterra equation of the first kind <cit.> mC^xx(t) = - ∫_0^t dt' Γ(t')C^xx(t-t'), where C^xx(t) = ⟨x(0)x(t)⟩, and C^xx(t) = ⟨x(0)x(t)⟩, and we used that ∇ U = 0 and that x(0) and F_R(t) are uncorrelated, i.e., ⟨x(0)F_R(t)⟩ = 0 <cit.>. Analyses for one-dimensional trajectories have shown that compared to the direct method <cit.>, extraction of the memory kernel's running integral produces significantly more stable results <cit.>. We integrate Eq. 
(<ref>) over time m (C^xx(t) - C^xx(0)) = - ∫_0^t ds ∫_0^s ds' Γ(s') C^xx(s-s'), = - ∫_0^t ds'∫_s'^t ds Γ(s-s') C^xx(s'), = - ∫_0^t ds' ∫_0^t-s' du Γ(u) C^xx(s'), = - ∫_0^t ds G(t-s) C^xx(s), where G(t) = ∫_0^t ds Γ(s) is the running integral of the memory kernel. Discretizing this equation with a time step Δ t, we obtain an iterative formula for G_i = G(iΔ t). For a discretized correlation function we use the short-hand notation C_i^AB = ⟨ A(0) B(iΔ t) ⟩. For the running integral of the memory kernel G_i, we obtain from Eq. (<ref>) by applying the trapezoidal rule on the integral G_i = m( C_0^xx - C_i^xx) - Δ t ∑_j=1^i-1G_j C_i-j^xx· (1/2Δ t C_0^xx)^-1, where we use G_0 = 0. If we compute the velocity autocorrelation function C_i^xx from the given time series x(t), we can use Eq. (<ref>) to determine the running integral G(t) and based on this the memory kernel Γ(t) by differentiation. § FREQUENCY-DEPENDENT FINITE-SIZE CORRECTION FOR MEMORY KERNELS Numerical results of the velocity autocorrelation function and the frequency-dependent friction of a particle in a fluid are significantly affected by the presence of periodic boundary conditions and finite system sizes in molecular dynamics simulations <cit.>. We follow the procedure elaborated in <cit.> which yields an analytic correction term for the frequency-dependent friction that accounts for periodic boundary effects. From the uncorrected friction Γ̃_+(ω), the corrected version is computed by [ Γ̃_+^corr(ω) ]^-1 = [Γ̃_+(ω) ]^-1 - Δ G^corr(ω), where the frequency-dependent correction term Δ G^corr(ω) is given by Δ G^corr(ω) = [ ∑_n⃗,n≠ 01/3Tr[G_ij(n⃗L,ω)]] - 1/3L^3∫ dr⃗' Tr [G_ij(r⃗',ω)] . L is the box size, here L = 3.5616 nm for the water simulation and L = 4.5 nm for the methane in water simulation <cit.>, and n⃗ = n_xe⃗_x + n_ye⃗_y + n_ze⃗_z is a lattice vector with n_x, n_y, n_z integers and e⃗_i are the unit vectors in the directions x, y, and z. Tr denotes the tensor trace. G is the Green's function given in Eq. (<ref>), derived in Appendix <ref>. For a detailed description of the numerical calculation of the correction term in Eq. (<ref>), we refer to <cit.>. We additionally correct the velocity autocorrelation function C̃_+^vv(ω) from the MD simulations by the correction term C̃_+^vv,corr(ω) = C̃_+^vv(ω)/1+(k_BT)^-1C̃_+^vv(ω)Γ̃_+^corr(ω)Γ̃_+(ω)ΔG^corr(ω), which is derived using Eq. (<ref>). The corrected data Γ̃_+^corr(ω) and C̃_+^vv,corr(ω) is shown in the Figs. <ref>, <ref> and <ref> as black circles. Note that for the calculation of the PBC correction, we used the fitted frequency-dependent models shown in Fig. <ref> as it improves the finite-size correction <cit.>. § DETERMINATION OF THE EFFECTIVE RADIUS AND THE SLIP LENGTH In Fig. <ref>, we investigate the dependence of the friction ReΓ_+(ω) on the hydrodynamic radius a and the slip length b. We observe a significant dependence of the slip length in Fig. <ref>(a), where higher slip leads to lower friction. The same is visible in the time domain in Fig. <ref>(b), obtained from (a) by applying the inverse Fourier transformation on Γ_+(ω). The slip length seems to have no major influence on the shape but only on the magnitude of the friction function. In Fig. <ref>(c), we show the root mean squared error (RMSE) between the MD data of the real part of the friction ReΓ_+(ω) (Fig. <ref>(b, black)), and the fit of the data using Eq. (<ref>) in dependence of the sphere radius a and the slip length b. 
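Such an error landscape can be generated with a simple brute-force scan; in the sketch below, model(omega, a, b) is a placeholder for the hydrodynamic friction evaluated with the fitted viscosity spectra, and gamma_md is the friction extracted from the MD trajectories.

import numpy as np

def rmse_landscape(omega, gamma_md, model, a_vals, b_vals):
    # Scan the sphere radius a and slip length b, comparing the real part of the
    # measured friction with the hydrodynamic model model(omega, a, b).
    err = np.empty((len(a_vals), len(b_vals)))
    for i, a in enumerate(a_vals):
        for j, b in enumerate(b_vals):
            diff = gamma_md.real - model(omega, a, b).real
            err[i, j] = np.sqrt(np.mean(diff ** 2))
    return err

# The optimal (a, b) pair follows from the position of the global minimum:
# i, j = np.unravel_index(np.argmin(err), err.shape)
# a_opt, b_opt = a_vals[i], b_vals[j]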
We can find a global minimum, denoted as a red circle, which corresponds to the results reported in Fig. <ref> and shown as a red line in Fig. <ref>(a, b). Due to anisotropic dependence of the error in Fig. <ref>(c) on a and b, we can fit the effective radius much more accurately than the slip length. Hence, the estimation of the slip length by this procedure is subject to considerable uncertainty. § RESULTS FOR A LENNARD-JONES FLUID We here discuss simulation results and the hydrodynamic theory for a Lennard-Jones (LJ) particle in a LJ fluid (Appendix <ref> for simulation details). In Fig. <ref>(a, b), we show the extracted and fitted viscoelastic spectra of the MD simulation. Note that, in contrast to the water system, we used four viscoelastic fitting components for both shear and volume viscosity. Also note that we do not subtract finite-size effects here, as we are only interested in the behavior in the intermediate frequency regime. The extracted friction Γ_+(ω) in Fig. <ref>(c, d) agrees well with the hydrodynamic prediction using the fitted viscosity models (green), the agreement is worse with the model neglecting compressibility, i.e., c →∞ (purple), and the GSER (yellow). We used ρ_0 = 1370 kg/m^3 and c = 869 m/s <cit.>. Even if no distinct peaks can be seen in the friction spectrum, several viscoelastic components are still necessary for the modeling, as a simple Maxwell model (blue) deviates significantly from the MD data in Fig. <ref>(c, d). As in the water model, a plateau can be seen in the real part of the hydrodynamic friction for frequencies above 10 THz, which is not present in the MD data. Nevertheless, we find that the hydrodynamic prediction with the correct viscoelastic model and finite compressibility is the most appropriate one for the LJ fluid, which becomes even more evident when analyzing the VACF in Fig. <ref>(e, f). § LONG-TIME BEHAVIOR OF THE WATER MEMORY KERNEL WITH COMPRESSIBILITY In Fig. <ref>, we compare the long-time behavior of the water memory kernel from MD simulations (symbols) with the numerical inverse Fourier transformation of the hydrodynamic friction Γ^hyd(ω) in Eq. (<ref>) for the compressible case (red line) and in the incompressible limit, i.e., λ→ 0 (green line). As discussed in the main text, the hydrodynamic tail remains unchanged in the incompressible case. The sign change from positive to negative values around 3 ps is slightly shifted to earlier times. § DERIVATION OF EQ. (<REF>) We start from the Fourier-transformed GLE in Eq. (<ref>), i.e., v(ω) = i ωχ(ω) F_R(ω), where v(ω) = iωx(ω) and χ(ω) = (i ωΓ_+(ω) - mω^2)^-1. The fluctuation-dissipation theorem, i.e., ⟨ F_R(t)F_R(t')⟩ = k_B T Γ(|t-t'|), in Fourier space reads ⟨F_R(ω) F_R(ω')⟩ = k_BT ∫_-∞^∞ dt e^-iω t∫_-∞^∞ dt' e^-iω' t'Γ(t-t'), = k_BT ∫_-∞^∞ dt' e^-i(ω+ω') t ∫_-∞^∞ dt e^-iω (t- t')Γ(t-t'), = k_BT ∫_-∞^∞ dt' e^-i(ω+ω') t'Γ(ω), = 2π k_BT δ(ω+ω') Γ(ω). Using this identity, we obtain the Fourier transformation of the VACF C^vv(ω) = ∫_-∞^∞dω'/2π e^-iω (t-t)⟨v(ω) v(ω')⟩, = ∫_-∞^∞dω'/2π⟨ iωx(ω) i ω' x(ω')⟩, = - ∫_-∞^∞dω'/2π⟨ωχ(ω)F_R(ω)ω' χ(ω')F_R(ω')⟩, = - k_BT ∫_-∞^∞ dω' ωω' χ(ω) χ(ω') δ(ω+ω') Γ(ω), = k_BT ω^2 χ(ω) χ(-ω) Γ(ω), = k_BT ω^2 Γ(ω) χ(-ω) - χ(ω)/1/χ(ω) - 1/χ(-ω), = k_BT ω^2 Γ(ω) χ(-ω) - χ(ω)/iω (Γ_+(ω) + Γ_+(-ω)). Since χ(t) is a real function, we have χ(-ω) - χ(ω) = χ^*(ω) - χ(ω) = - 2i Im χ(ω), where χ^*(ω) is the complex conjugate of χ(ω). 
For any function f(t) symmetric in t, such as Γ(t) and C^vv(t), we have f_+(ω) + f_+(-ω) = f_+(ω) + f_+^*(ω), = ∫_-∞^∞ dt f_+( t) e^-iω t + ∫_-∞^∞ dt f_+(t) e^iω t, = ∫_-∞^∞ dt f(t) θ(t) e^-iω t + ∫_-∞^∞ dt f(t) θ(-t) e^-iω t, = f(ω). Inserting this identity for Γ_+(ω) into Eq. (<ref>), we obtain C^vv(ω) = - 2 k_BT ω Im χ(ω), and Re C_+^vv(ω) = - k_BT ω Im χ(ω), = k_BT ω Re (i χ(ω)), where we used C^vv(ω) = C^vv_+(ω) + (C^vv_+(ω))^* = 2 Re C^vv_+(ω). For the single-sided VACF, we finally obtain C_+^vv(ω) = i ω k_BT χ(ω), = i ω k_BT/i ωΓ_+(ω) - m ω^2, which is Eq. (<ref>) in the main text. Here we use the fact, employing the Kramers-Kronig relations, that if the real parts of the Fourier transformations of two half-sided time-domain functions are equal (Eq. (<ref>)), the total complex functions are equal <cit.>.
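The final relation can be used directly to generate a model VACF spectrum from any memory-kernel model; a minimal sketch, using an illustrative single-exponential kernel rather than the fitted one, reads:

import numpy as np

def vacf_spectrum(omega, gamma_plus, m, kBT):
    # Single-sided VACF spectrum from the GLE:
    # C+_vv(omega) = i omega kBT / (i omega Gamma+(omega) - m omega^2).
    iw = 1j * omega
    return iw * kBT / (iw * gamma_plus - m * omega ** 2)

# Illustrative Maxwell-type kernel Gamma(t) = (gamma0/tau) exp(-t/tau), whose
# half-sided transform is Gamma+(omega) = gamma0 / (1 + i omega tau).
gamma0, tau, m, kBT = 1e-12, 1e-13, 3e-26, 4.14e-21   # rough SI numbers for water
omega = np.logspace(10, 14, 300)
Cvv = vacf_spectrum(omega, gamma0 / (1 + 1j * omega * tau), m, kBT)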
http://arxiv.org/abs/2408.11133v1
20240820183120
Public Health in Disaster: Emotional Health and Life Incidents Extraction during Hurricane Harvey
[ "Thomas Hoang", "Quynh Anh Nguyen", "Long Nguyen" ]
cs.IR
[ "cs.IR", "cs.CL" ]
Public Health in Disaster: Emotional Health and Life Incidents Extraction during Hurricane Harvey Thomas Hoang Department of Computer Science Denison University hoang_t2@denison.edu Quynh Anh Nguyen Faculty of Information Technology Electric Power University anhnq@epu.edu.vn Long Nguyen Department of Computer Science and Engineering University of Louisville l.nguyen@louisville.edu August 26, 2024 § ABSTRACT Countless disasters have resulted from climate change, causing severe damage to infrastructure and the economy. These disasters have significant societal impacts, necessitating mental health services for the millions affected. To prepare for and respond effectively to such events, it is important to understand people's emotions and the life incidents they experience before and after a disaster strikes. In this case study, we collected a dataset of approximately 400,000 public tweets related to the storm. Using a BERT-based model, we predicted the emotions associated with each tweet. To efficiently identify these topics, we utilized the Latent Dirichlet Allocation (LDA) technique for topic modeling, which allowed us to bypass manual content analysis and extract meaningful patterns from the data. However, rather than stopping at topic identification like previous methods <cit.>, we further refined our analysis by integrating Graph Neural Networks (GNN) and Large Language Models (LLM). The GNN was employed to generate embeddings and construct a similarity graph of the tweets, which was then used to optimize clustering. Subsequently, we used an LLM to automatically generate descriptive names for each event cluster, offering critical insights for disaster preparedness and response strategies. emotional health; climate change; large language model; graph neural network; natural language processing; social media § INTRODUCTION Climate change has caused many serious natural disasters around the world, like strong hurricanes, long droughts, higher temperatures, and heavy snowstorms. These extreme weather events damage buildings and the economy, affecting society deeply. Hurricanes, in particular, have become more frequent and severe. For example, Hurricane Harvey in 2017 brought massive amounts of rain to Texas and Louisiana, causing record-breaking floods. The National Hurricane Center estimated the damage at $125 billion. Also, 738,000 people asked for help from the Federal Emergency Management Agency (FEMA), and at least 3,900 homes lost electricity <cit.>. The huge number of 911 calls overwhelmed emergency services, leading many people to use social media to share their problems, worries, and requests for help. Research by Cooper et al. demonstrated a strong connection between environmental conditions and emotional health through group discussions and interviews. Their study revealed that water shortages caused significant worry and fatigue among participants <cit.>.
These findings were corroborated by other research, which showed that negative emotions are directly linked to immediate environmental conditions such as water shortages <cit.>, food shortages <cit.>, and environmental changes <cit.>. Hickman et al. conducted a study that highlighted the anxiety felt by many young people (aged 16-25 years) worldwide regarding climate change, with many participants expressing negative emotions towards their governments' inaction on climate issues <cit.>. To minimize bias, these studies employed various methodologies, including large surveys and group studies. Despite providing valuable insights into the impact of climate change on daily life, these studies face several challenges. Primarily, such research is often costly and time-consuming, requiring significant data collection and analysis resources. The process involves recruiting participants, organizing data collection sessions, and compensating participants, particularly in group studies. In today's world of fast technological progress and growing environmental concerns, social media platforms have become a powerful tool for investigating and understanding the different impacts of climate change. We picked this approach for a few key reasons. First, we're focusing on emotions and specific life incidents instead of just general mental health, which helps us see how environmental factors impact people's feelings during disasters. Second, we use a BERT model to predict emotions and LDA to identify life incidents, combining the power of modern NLP models and topic modeling to get accurate results. Third, we ensure our findings are reliable by automatically grouping and accurately naming the incident topics using (GNN+LLM) Graph Neural Network <cit.> <cit.> and Large language Model <cit.> <cit.> <cit.>. Lastly, real-time social media data lets us capture public reactions and feelings immediately, giving us timely insights that are important for managing disasters and public health. While <cit.> focuses on stressors related to climate change with the use of manual topic name prediction which could be human-biased, we accurately concentrate on immediate emotional reactions and specific life incidents during disasters by leveraging the use of graph neural networks and large language model. Thus, our approach allows us to provide more detailed insights into how specific incidents affect emotional health during disasters. The collected tweets undergo an extensive data cleaning process, where URLs, special characters, and irrelevant terms are removed. We also apply stop word removal, including an expanded list to filter out common disaster-related terms that do not contribute meaningfully to our analysis. Following the cleaning process, the tweets are transformed into embeddings using a pre-trained BERT model. These embeddings are then fed into a GNN, which is trained to refine the embeddings by capturing the underlying graph structure of the data. To determine the optimal number of clusters, we employ the silhouette score, a metric that evaluates how well each tweet fits within its assigned cluster compared to other clusters. This method ensures that the tweets are accurately grouped based on their content. Once the clustering is completed, we utilize a GPT-2-based LLM to generate meaningful event names for each cluster. This step involves synthesizing the content of tweets within a cluster to predict a concise event name that encapsulates the central theme of the cluster. 
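A minimal sketch of the clustering and naming steps described above is given below; it assumes that the GNN-refined tweet embeddings are already available as a NumPy array and uses generic scikit-learn and Hugging Face components rather than the exact models of this study.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def best_k(embeddings, k_range=range(2, 11), seed=0):
    # Pick the number of clusters with the highest silhouette score.
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(embeddings)
        scores[k] = silhouette_score(embeddings, labels)
    return max(scores, key=scores.get), scores

# Cluster naming with a generative model (illustrative prompt, not the exact one used):
# from transformers import pipeline
# generator = pipeline("text-generation", model="gpt2")
# prompt = "Tweets: " + " | ".join(sample_tweets) + "\nShort event name:"
# name = generator(prompt, max_new_tokens=8, num_return_sequences=1)[0]["generated_text"]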
Our approach offers several key contributions. First, it demonstrates the effective integration of GNNs with transformer models for refining tweet embeddings, leading to more accurate clustering. Second, by using an LLM for event name generation, we move beyond traditional topic modeling, providing a more human-like interpretation of the data. Our research advances the methodological framework for disaster analysis using social media data and provides practical insights that can inform policymakers in developing comprehensive disaster management strategies that address both physical and emotional well-being. § RELATED STUDIES In this section, we review recent studies related to addressing climate change and public health. These studies are categorized into two main scientific areas: topic modeling for public health and the use of social media for disaster relief. §.§ Topic modeling for public health Topic modeling helps find patterns and make sense of unstructured collections of documents <cit.>. This technique connects social and computational sciences. Topic models use probabilistic methods to uncover the hidden semantic structures of a group of texts through hierarchical Bayesian analysis. These texts can include emails, scientific papers, and newspaper articles. For example, Grassia et al. <cit.> used non-negative matrix factorization (NMF) to identify main themes in newspaper articles, pinpointing topics used for propaganda. Grootendorst <cit.> used BERTopic to create document embeddings with pre-trained transformer-based language models, clustering these embeddings and generating topic representations with a class-based TF-IDF procedure to build neural networks. Karas et al. <cit.> applied the Top2Vec model with doc2vec as the embedding model to extract topics from the subreddit "r/CysticFibrosis." Many studies use Latent Dirichlet Allocation (LDA) because it is popular and simple. For instance, Man et al. <cit.> used LDA to adapt an HPV transmission model to data on sexual behavior, HPV prevalence, and cervical cancer incidence. They predicted the effects of HPV vaccination on HPV and cancer incidence and the lifetime risk of cervical cancer over 100 years after vaccination. Asmundson et al. <cit.> replicated a study to examine the factor structure, reliability, and validity of the COVID-19 Incident Scales, showing how topic modeling can reveal fear and anxiety-related distress responses during pandemics. Mental health is a particular area where the importance of emotional and practical support, as well as self-disclosure, has been increasingly acknowledged. Manikonda et al. <cit.> aimed to understand the language features, content characterization, driving factors, and types of online disinhibition seen in social media, focusing on mental health. §.§ Social media for disaster relief Social media, as explained by Kaplan, includes Internet-based applications that are built on the foundations of Web 2.0, allowing the creation and sharing of user-generated content <cit.>. This term covers platforms like Reddit, Twitter, Flickr, Facebook, and YouTube, which let users communicate and share information and resources. These tools are being used more and more for disaster relief efforts. For example, Gao et al. suggested using social media to create a crowdsourcing platform for emergency services during the 2010 Haiti earthquake <cit.>. Social media can also be combined with crisis maps to help organizations find places where supplies are needed the most. 
A 2011 study by the American National Government looked into using social media for disaster recovery, discussing how it can be used, future possibilities, and policy considerations <cit.>. Twitter, a popular social media platform, works as both a social network and a microblogging service, allowing users to post short messages called tweets. Du et al. suggested a social media-based system to analyze people's concerns, see how important they are, and track how they change over time <cit.>. Their study compared the flow of concerns between Twitter and news outlets during the California mountain fires. Other studies have also used social media to engage communities in water resource management <cit.>, coordinate volunteer rescue efforts <cit.>, and predict people's needs for better extreme weather planning <cit.>. Lu et al. visualized social media sentiment during extreme weather incidents, exploring trends in positive and negative feelings and their geographical distribution using Twitter data <cit.>. Additionally, social media can quickly assess damage from extreme weather incidents. Kryvasheyeu et al. developed a multiscale analysis of Twitter activity before, during, and after Hurricane Sandy to monitor and assess the disaster through the spatiotemporal distribution of disaster-related messages <cit.>. §.§ Graph Neural Networks Graph Neural Networks (GNNs) have emerged as a powerful tool for modeling relationships and dependencies in data that can be naturally represented as graphs. GNNs extend neural networks to graph-structured data, enabling the learning of representations that consider both node features and the graph structure. Kipf and Welling <cit.> introduced the concept of semi-supervised learning with GNNs, demonstrating their effectiveness in classifying nodes in a graph. This method has since been adapted to various applications, including social media analysis, where relationships between users or content can be modeled as graphs. Zhuang et al. <cit.> proposed a dual graph convolutional network model, which integrates local and global graph structures to improve classification accuracy in semi-supervised settings. §.§ Large Language Models for Topic Naming Large Language Models (LLMs) like GPT-2 have revolutionized natural language processing by enabling the generation of coherent and contextually appropriate text. Radford et al. <cit.> demonstrated the capability of GPT-2 to generate text that closely mirrors human language, making it a suitable tool for creating descriptive names for clusters of events or topics. Wei et al. <cit.> further explored the adaptability of LLMs, showing that fine-tuned language models could perform well even with limited data, a common scenario in real-time social media analysis. Brown et al. <cit.> introduced the concept of few-shot learning with LLMs, where the model requires minimal examples to generate relevant and specific text accurately. § METHODS §.§ Study Design We meticulously processed our collected tweet data through several stages to analyze the emotional responses to Hurricane Harvey and predict life incident names, as outlined in Figure <ref>. While in this process of cleaning data, we tried to remove emojis, hexadecimal characters, images, special characters, hyperlinks, and irrelevant words to prepare the text for analysis. Following, we tried to pass the cleaned data through an emotion classification model, which helps categorize tweets into positive, negative, or neutral sentiments. 
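A possible implementation of this cleaning and emotion-classification stage is sketched below; the regular expressions and the emotion-classification checkpoint are placeholders rather than the exact resources used in this study.

import re

def clean_tweet(text):
    text = re.sub(r"http\S+", " ", text)          # URLs and hyperlinks
    text = re.sub(r"[^\x00-\x7F]+", " ", text)    # emojis and other non-ASCII symbols
    text = re.sub(r"[^A-Za-z0-9\s]", " ", text)   # special characters
    tokens = [t for t in text.lower().split() if len(t) > 1]   # drop single characters
    return " ".join(tokens)

# Emotion prediction with a pre-trained BERT-style classifier (placeholder checkpoint):
# from transformers import pipeline
# classifier = pipeline("text-classification", model="<pretrained-emotion-model>")
# label = classifier(clean_tweet(raw_tweet))[0]["label"]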
After that, we applied lemmatization to ensure that words with similar meanings but different forms (e.g., "be," "being," "been") were unified. In addition to this, we removed common English stopwords (e.g., "a," "an," "the") to eliminate non-informative words from the dataset. The text data was then transformed into token features using Term Frequency-Inverse Document Frequency (TF-IDF). With these features, we constructed an initial Latent Dirichlet Allocation (LDA) model to identify preliminary topics within the tweets. During this stage, we continuously refined our stopwords list, filtering out prevalent and unwanted tokens such as standard disaster-related terms ("hurricane," "Harvey," "storm") and location names ("Texas," "Houston," "Antonio"). Following the preliminary topic extraction, the data underwent a more rigorous processing phase, incorporating Graph Neural Network (GNN) embeddings. We performed dimension reduction on these embeddings and constructed a similarity graph, which was then used to train a GNN model. Finally, using a fine-tuned LDA model alongside the GNN-based clustering results, we employed a Large Language Model (LLM) to generate descriptive names for each predicted event group automatically. §.§ Data pre-processing and feature engineering Our Hurricane Harvey dataset includes tweets collected from January 11, 2017, to August 29, 2017, and is publicly available on Kaggle <cit.>. The original dataset contains approximately 400,000 tweets about Hurricane Harvey. After initial filtering, we identified around 98,000 tweets expressing negative emotions. These extracted tweets then underwent data cleaning and text preprocessing to reduce redundancy and remove unwanted keywords for the topic modeling process. Specifically, we eliminated Twitter-specific characters from a defined range of Unicode characters, as well as URLs and hyperlinks, by removing tokens containing "http." This standardization process also involved removing icons such as emojis and hex-images. Lastly, we excluded all single-character tokens from the tweets. We classify the tweets into three distinct emotion categories using a BERT-based model, i.e., a Bidirectional Encoder Representations from Transformers (BERT) model with state-of-the-art pre-built emotion detection capability. §.§ Emotion Prediction and life incident extraction §.§.§ Text vectorization We employ Term Frequency-Inverse Document Frequency (TF-IDF). TF-IDF is a widely used text vectorization algorithm that creates a word frequency vector. The term frequency, inverse document frequency, and their product are computed as follows: tf(t, d) = f_t,d / ∑_t'∈ d f_t',d, idf(t, D) = log( N / (1 + |{d ∈ D: t ∈ d }|) ), tfidf(t, d, D) = tf(t, d) · idf(t, D). Here, f(t, d) denotes the frequency of the word t in document d, and D represents the entire collection of documents. In this study, each document corresponds to a tweet, and D is a corpus of size N. To prevent division by zero when t does not appear in any document, a value of one is added to the denominator of the idf formula. §.§.§ LDA topic modeling based life incident extraction <cit.> demonstrates the technique of Latent Semantic Indexing (LSI) for indexing and retrieval, which helps understand a document's content by finding the relationship between words and documents. <cit.> introduced an improvement of LSI, called probabilistic LSI (pLSI), which uses a likelihood method (e.g., a Bayesian method). The idea of pLSI is to model the words in a document such that each word belongs to a specific topic.
Both techniques ignore the words’ order in a document. In addition, the problem with time complexity occurs in both techniques, leading to overfitting, which Latent Dirichlet Allocation addressed well <cit.>. In the details of LDA, we assume we have a document (d) containing a set of words. In addition, we have a topic (z) that has several significant keywords (w). Knowing that each word can relate to many topics with various probabilities and that the amount of topics is the LDA parameter. By estimating the confidential variables (α, β, θ) by calculating the allocation in documents, LDA discovers each document's topics (Z) and the significant words of each topic. We define N as the words’ number in document d. Dirichlet prior parameters at the corpus level parameters are α and β. In addition, we choose the topic z_n of each word from multinomial distribution θ for each word w_n. We represent as below a word w_n from p(w_n | z_n, β): p(w | α, β) = ∫p(θ | α)(∏_n=1^N∑_z_n p(z_n | θ)p(w_n | z_n,β)) dθ, Furthermore, we represent the probability of a corpus as below: ∏_d=1^M∫ p(θ_d | α)(∏_n=1^N_d∑_z_dn p(z_dn | θ_d) p(w_dn | z_dn,β)) dθ_d Topics identification for optimal number: In order to examine the optimal amount of topics for the LDA model, we use Umass coherence score,  <cit.>. This technique estimates the frequency of two words, which are w_i and w_j: C_UMass = ∑_i=2^N∑_j=1^i-1logP(w_i,w_j)+1/P(w_j) In this equation, P(w_i, w_j) denotes the frequency with which w_i and w_j co-occur in the same document, while P(w_j) indicates the number of documents that contain the word w_j. To avoid division by zero, we add a value of 1 to the denominator. The UMass coherence value is calculated as the sum of the top N pre-determined terms. Typically, P(w_i, w_j) + 1 is much smaller than P(w_j), which results in a negative UMass score. The quality of the LDA model improves as the UMass score approaches zero. However, adding more topics can increase the score, which leads to topics with very few documents. To mitigate this, we use the elbow method <cit.>, which helps determine the optimal number of topics by identifying the point where the rate of improvement in the UMass coherence score diminishes. After defining the topics, we manually extract the life incidents from the representative terms of each topic. Life incident extraction: After establishing the optimal number of topics for the LDA model, we use a Python-based LDA visualization tool to illustrate each topic and identify the key terms that influence them. This visualization helps us interpret the topics through their distinct sets of keywords. Based on <ref>, which shows the performance among algorithm choices <cit.> <cit.> <cit.> <cit.> <cit.>, we see that GNN demonstrates the best performance, with a precision score of 0.31 and a purity score of 0.25. In our case, we select that vertices in the graph represent individual terms, while edges illustrate the similarity between these terms. Thus, GNN can aggregate and propagate information across connected nodes, which leads to more accurate and contextually aware clustering and term name prediction. In detail, GNN processes the embeddings generated from the textual data, capturing both the content and the relational structure between topics. Once the topics are grouped, the LLM is used to predict descriptive names for each topic cluster. Our analysis focuses on life incidents specifically related to climate change. 
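The term-level similarity graph and a single GNN-style propagation step can be sketched as follows; the embeddings, the similarity threshold, and the parameter-free propagation are illustrative simplifications of the trained model used in this study.

import numpy as np

def similarity_graph(term_vecs, threshold=0.5):
    # Cosine-similarity adjacency between term embeddings (vertices = terms,
    # edges = similarity above a threshold).
    x = term_vecs / np.linalg.norm(term_vecs, axis=1, keepdims=True)
    sim = x @ x.T
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)
    return adj

def gcn_propagate(adj, features):
    # One symmetric-normalized propagation step, A_hat = D^-1/2 (A + I) D^-1/2,
    # as used in graph convolutional networks (no trainable weights in this sketch).
    a = adj + np.eye(adj.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(a.sum(axis=1))
    a_hat = a * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return a_hat @ features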
The GNN and LLM combination allows us to efficiently identify and name the most prominent incidents within these topics, facilitating a more detailed analysis of their impact. Thus, our method improves accuracy and enhances the extracted incidents' interpretability, making it easier to understand the specific events influencing public sentiment during disasters. § RESULTS §.§ Emotion Prediction Results We ran the algorithm using Google Collaboration, which runs on GL65 Leopard 10SCXK, an x64-based PC, on Microsoft Windows 11 Home Single Language.The emotion distribution of the tweets is illustrated in Figure <ref>. §.§ Tweets summary by emotions The positive sentiment word cloud prominently features words such as “love,” “great,” “happy,” “good,” “wonderful,” “blessed,” “safe,” and “joy.” These words reflect a general sense of optimism and positivity among Twitter users. The presence of “love” and “happy” suggests expressions of care, solidarity, and relief, possibly directed toward successful rescue operations or the safety of loved ones. These words indicate that amidst the challenges posed by the hurricane, people found moments of emotional support and happiness. The terms “great” and “good” highlight commendations and satisfaction for the effective response by emergency services or the supportive actions taken by the community. In addition, this suggests that users acknowledged and appreciated the efforts made to mitigate the disaster's impact and ensure public safety. The word “wonderful” conveys a strong sense of positivity, which might be related to successful evacuations, community support, or the resilience shown by individuals during the crisis. The appearance of “blessed” reflects a deep sense of gratitude and thankfulness, which might be in response to avoided dangers, received help, or the overall sense of being protected during the storm. This sentiment is vital as it underscores the human aspect of the disaster response, which helps highlight moments of kindness and support that were experienced. “Safe” and “joy” further emphasize the positive outcomes and feelings of security that were felt despite the adverse conditions. These words suggest that people were able to find comfort and happiness in the safety of their surroundings or in the knowledge that their loved ones were unharmed. In general, the positive sentiment word cloud reveals a prevailing sentiment of appreciation, relief, and encouragement, reflecting the community's resilience and the successful measures taken to ensure safety and support. The positive emotions captured in these tweets highlight the human capacity to find light even in the darkest times, celebrating the small victories and the collective strength of the community. To find the best number of topics for our Latent Dirichlet Allocation (LDA) model, we used the scikit-learn library with a learning rate of 0.7 <cit.>. We created several LDA models, changing the number of topics from 20 to 70 in steps of 5. Then, evaluation is done via comparing UMass coherence score <cit.> for selection of optimal number of topics in datasets. Figure <ref> shows an example of selection of optimal number of topics for positive sentiment. We notice that at 20 topics, the coherence score starts to decline rapidly. Thus, we chose 20 topics for the final version of our LDA model. Similarly, we get 30 topics for neutral sentiments and 55 topics for negative sentiments. 
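A compact sketch of this selection procedure is given below; it assumes that the learning rate of 0.7 corresponds to scikit-learn's learning_decay parameter and that a document-term matrix of the cleaned tweets is available.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def umass(lda, X, top_n=10):
    # UMass coherence summed over the top_n words of each topic and averaged over
    # topics, using document co-occurrence counts as in the formula above.
    binary = (X > 0).astype(int)
    doc_freq = np.asarray(binary.sum(axis=0)).ravel()
    co_occur = (binary.T @ binary).toarray()
    score = 0.0
    for topic in lda.components_:
        top = topic.argsort()[::-1][:top_n]
        for i in range(1, top_n):
            for j in range(i):
                score += np.log((co_occur[top[i], top[j]] + 1.0) / doc_freq[top[j]])
    return score / lda.n_components

# X = CountVectorizer(stop_words="english").fit_transform(cleaned_tweets)
# for k in range(20, 75, 5):
#     lda = LatentDirichletAllocation(n_components=k, learning_decay=0.7,
#                                     random_state=0).fit(X)
#     print(k, umass(lda, X))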
§.§ Life incident Extraction Results § ANALYSIS OF OPTIMAL K SELECTION FOR SENTIMENT GROUPS The selection of the optimal number of clusters k for each sentiment group—negative, neutral, and positive—is informed by the silhouette score, which measures the quality of a clustering by evaluating how similar an object is to its own cluster compared to other clusters. For the positive sentiment group, as depicted in Figure <ref>, the silhouette score is highest at k = 2. This implies that the positive sentiment data is best categorized into two clusters, effectively capturing the key variations in positive emotions and themes expressed in the data. Having determined these optimal topic numbers, and to ensure consistency and accuracy, we employ the silhouette score to automatically cluster similar groups of topics. By leveraging Graph Neural Networks (GNNs) and large language models (LLMs), we can accurately generate topic names. Life Incidents Insight Analysis The extracted life incidents and their associated terms are listed in Table <ref>, representing positive sentiments. The table presents the predicted event names for life incidents grouped by a GNN-based approach and named using a large language model (LLM). Table <ref> showcases positive sentiment life incidents, which emphasize community resilience and positive interactions. Predicted event names like "The Best of the Best" and "A 'Good' Weather Event" capture the optimism and support within the community. These incidents include terms related to well-wishes, supportive actions, and positive outlooks, reflecting the community's efforts to uplift morale during challenging times. § CONCLUSION Our paper presents a case study on predicting public emotions and identifying life incidents during Hurricane Harvey using social media data. We employed a Graph Neural Network (GNN) to automatically group related incidents, combined with a Large Language Model (LLM) to generate meaningful event names. Unlike previous studies that broadly examine the mental health impacts of climate change using NLP techniques, our study specifically targets emotions and life incidents during a disaster event, offering a more focused analysis of how such incidents influence public sentiment. Thus, our research will help overcome the limitations of manual extraction and enable the automated monitoring of disaster impacts on daily life and emotional health. § CITATIONS
[har (2023)] Hurricane Harvey Tweets. 2017 (accessed Aug 06, 2023). https://www.kaggle.com/datasets/dan195/hurricaneharvey.
[Aihara et al. (2016)] Yoko Aihara, Salina Shrestha, and Jyoti Sharma. 2016. Household water insecurity, depression and quality of life among postnatal women living in urban Nepal. Journal of Water and Health 14(2), 317–324.
[Amadeo (2018)] Kimberly Amadeo. 2018. Hurricane Harvey facts, damage and costs. The Balance.
[Asmundson and Taylor (2020)] Gordon J. G. Asmundson and Steven Taylor. 2020. Coronaphobia: Fear and the 2019-nCoV outbreak. Journal of Anxiety Disorders 70, 102196.
[Blei and Lafferty (2009)] David M. Blei and John D. Lafferty. 2009. Topic models. Text Mining: Classification, Clustering, and Applications 10(71), 34.
[Blei et al. (2003)] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research 3 (Jan), 993–1022.
[Brown et al. (2020)] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, et al. 2020. Language models are few-shot learners. arXiv:2005.14165.
[Bui et al. (2023)] Thanh Bui, Andrea Hannah, Sanjay Madria, Rosemary Nabaweesi, Eugene Levin, Michael Wilson, and Long Nguyen. 2023. Emotional Health and Climate-Change-Related Stressor Extraction from Social Media: A Case Study Using Hurricane Harvey. Mathematics 11(24). https://doi.org/10.3390/math11244910.
[Chen et al. (2016)] Tse-Hsun Chen, Stephen W. Thomas, and Ahmed E. Hassan. 2016. A survey on the use of topic models when mining software repositories. Empirical Software Engineering 21, 1843–1919.
[Cooper et al. (2019)] Sarah Cooper, Paul Hutchings, John Butterworth, Solome Joseph, Abinet Kebede, Alison Parker, Bethel Terefe, and Barbara Van Koppen. 2019. Environmental associated emotional distress and the dangers of climate change for pastoralist mental health. Global Environmental Change 59, 101994.
[Du et al. (2019)] Hanxiang Du, Long Nguyen, Zhou Yang, Hashim Abu-Gellban, Xingyu Zhou, Wanli Xing, Guofeng Cao, and Fang Jin. 2019. Twitter vs news: Concern analysis of the 2018 California wildfire event. In 2019 IEEE 43rd Annual Computer Software and Applications Conference (COMPSAC), Vol. 2. IEEE, 207–212.
[Frey and Dueck (2007)] Brendan J. Frey and Delbert Dueck. 2007. Clustering by passing messages between data points. Science 315, 972–976.
[Friedrich and Wüstenhagen (2017)] Elmar Friedrich and Rolf Wüstenhagen. 2017. Leading organizations through the stages of grief: The development of negative emotions over environmental change. Business & Society 56(2), 186–213.
[Gao et al. (2011)] Huiji Gao, Geoffrey Barbier, and Rebecca Goolsby. 2011. Harnessing the crowdsourcing power of social media for disaster relief. IEEE Intelligent Systems 26(3), 10–14.
[Grassia et al. (2023)] Maria Gabriella Grassia, Marina Marino, Rocco Mazza, Michelangelo Misuraca, and Agostino Stavolo. 2023. Topic modeling for analysing the Russian propaganda in the conflict with Ukraine. ASA 2022, 245.
[Grootendorst (2022)] Maarten Grootendorst. 2022. BERTopic: topic modeling with a class-based TF-IDF procedure. Frontiers in Sociology.
[Hickman et al. (2021)] Caroline Hickman, Elizabeth Marks, Panu Pihkala, Susan Clayton, R. Eric Lewandowski, Elouise E. Mayall, Britt Wray, Catriona Mellor, and Lise van Susteren. 2021. Climate anxiety in children and young people and their beliefs about government responses to climate change: A global survey. The Lancet Planetary Health 5(12). https://doi.org/10.1016/s2542-5196(21)00278-3.
[Hofmann (1999)] Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, 50–57.
[Kaplan (2018)] Andreas M. Kaplan. 2018. Social Media, Definition, and History. Springer, New York, NY, 2662–2665.
[Karas et al. (2022)] Bradley Karas, Sue Qu, Yanji Xu, and Qian Zhu. 2022. Experiments with LDA and Top2Vec for embedded topic discovery on social media data—A case study of cystic fibrosis. Frontiers in Artificial Intelligence 5, 948313.
[Kipf and Welling (2016)] Thomas Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv:1609.02907.
[Kryvasheyeu et al. (2016)] Yury Kryvasheyeu, Haohui Chen, Nick Obradovich, Esteban Moro, Pascal Van Hentenryck, James Fowler, and Manuel Cebrian. 2016. Rapid assessment of disaster damage using social media activity. Science Advances 2(3), e1500779.
[Lee and Seung (1999)] Daniel D. Lee and H. Sebastian Seung. 1999. Learning the parts of objects by non-negative matrix factorization. Nature 401, 788–791.
[Lindsay (2011)] Bruce R. Lindsay. 2011. Social Media and Disasters: Current Uses, Future Options, and Policy Considerations. Technical Report, Library of Congress, Congressional Research Service.
[Lu et al. (2015)] Yafeng Lu, Xia Hu, F. Wang, S. Kumar, H. Liu, and R. Maciejewski. 2015. Visualizing social media sentiment in disaster scenarios. In Proceedings of the 24th International Conference on World Wide Web, 1211–1215.
[Man et al. (2022)] Irene Man, Damien Georges, Tiago M. de Carvalho, Lopamudra Ray Saraswati, Prince Bhandari, Ishu Kataria, Mariam Siddiqui, Richard Muwonge, Eric Lucas, Johannes Berkhof, et al. 2022. Evidence-based impact projections of single-dose human papillomavirus vaccination in India: a modelling study. The Lancet Oncology 23(11), 1419–1429.
[Manikonda (2019)] Lydia Manikonda. 2019. Analysis and Decision-Making with Social Media. Arizona State University.
[Mimno et al. (2011)] David Mimno, Hanna M. Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP '11), 262–272.
[Murtagh and Legendre (2011)] Fionn Murtagh and Pierre Legendre. 2011. Ward's hierarchical agglomerative clustering method: Which algorithms implement Ward's criterion? Journal of Classification 31, 274–295.
[Ng et al. (2001)] Andrew Ng, Michael I. Jordan, and Yair Weiss. 2001. On spectral clustering: Analysis and an algorithm. In Neural Information Processing Systems.
[Nguyen et al. (2019)] Long Nguyen, Zhou Yang, Jia Li, Zhenhe Pan, Guofeng Cao, and Fang Jin. 2019. Forecasting people's needs in hurricane events from social network. IEEE Transactions on Big Data 8(1), 229–240.
[Nguyen et al. (2018)] Long H. Nguyen, Rattikorn Hewett, Akbar S. Namin, Nicholas Alvarez, Cristina Bradatan, and Fang Jin. 2018. Smart and connected water resource management via social media and community engagement. In 2018 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 613–616.
[Ojala (2016)] Maria Ojala. 2016. Young people and global climate change: Emotions, coping, and engagement in everyday life. Geographies of Global Issues: Change and Threat 8(1), 1–19.
[Pedregosa et al. (2011)] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research 12, 2825–2830.
[Radford et al. (2019)] Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
[Stevenson et al. (2012)] Edward G. J. Stevenson, Leslie E. Greene, Kenneth C. Maes, Argaw Ambelu, Yihenew Alemu Tesfaye, Richard Rheingans, and Craig Hadley. 2012. Water insecurity in 3 dimensions: an anthropological perspective on water and women's psychosocial distress in Ethiopia. Social Science & Medicine 75(2), 392–400.
[Thorndike (1953)] Robert Thorndike. 1953. Who belongs in the family? Psychometrika 18, 267–276.
[Wei et al. (2021)] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M. Dai, and Quoc V. Le. 2021. Finetuned language models are zero-shot learners. arXiv:2109.01652.
[Yang et al. (2020)] Zhou Yang, Long Nguyen, Jiazhen Zhu, Zhenhe Pan, Jia Li, and Fang Jin. 2020. Coordinating disaster emergency response with heuristic reinforcement learning. In 2020 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM). IEEE, 565–572.
[Zhuang and Ma (2018)] Chenyi Zhuang and Qiang Ma. 2018. Dual graph convolutional networks for graph-based semi-supervised classification. In Proceedings of the 2018 World Wide Web Conference.
http://arxiv.org/abs/2408.11622v1
20240821134758
Durotaxis and antidurotaxis droplet motion onto gradient gel-substrates
[ "R. Kajouri", "P. E. Theodorakis", "A. Milchev" ]
physics.flu-dyn
[ "physics.flu-dyn" ]
§ ABSTRACT The self-sustained motion of fluids on gradient substrates is a spectacular phenomenon, which can be employed and controlled in applications by carefully engineering the substrate properties. Here, we report on a design of a gel-substrate with stiffness gradient, which can cause the spontaneous motion of a droplet along (durotaxis) or to the opposite (antidurotaxis) direction of the gradient, depending on the droplet affinity to the substrate. By using extensive molecular dynamics simulations of a coarse-grained model, we find that the mechanisms of the durotaxis and antidurotaxis droplet motion are distinct, require the minimization of the interfacial energy between the droplet and the substrate, and share similarities with those mechanisms previously observed for brush substrates with stiffness gradient. Moreover, durotaxis motion takes place over a wider range of affinities and is generally more efficient (faster motion) than antidurotaxis. Thus, our study points to further possibilities and guidelines for realizing both antidurotaxis and durotaxis motion on the same gradient substrate for applications in microfluidics, energy conservation, and biology. § INTRODUCTION The autonomous motion of fluids on gradient substrates has been observed in various contexts, for example, in the case of microfluidics, microfabrication, coatings, energy conversion, and biology. <cit.> Moreover, both the efficiency and the direction of motion can be controlled by carefully engineering the gradient of a substrate property. In the case of moving cells on tissues,<cit.> their motion has been attributed to gradients in the stiffness of the underlying tissue, a phenomenon known as durotaxis. Inspired by biological systems, efforts to foster new possibilities of sustained motion on substrates with gradually changing properties along a certain direction have taken place, in view of the spectrum of possible applications in diverse areas. This also includes nano-objects of different type (e.g. fluids, nanosheets) on a wide range of different substrates, which have been studied in the context of theoretical and simulation work,<cit.> as well as experiments.<cit.> The exciting aspect of durotaxis is the autonomously sustained motion, that is no energy supply from an external source is required for setting in and sustaining the motion of the nano-object. While in connection with durotaxis, a gradient in the stiffness is responsible for the motion, such motion can actually be observed in other scenarios as well, for example, when the gradient reflects changes in the pattern of the substrate. 
Here, a characteristic example is rugotaxis, where a fluid motion is caused by a gradient in the wavelength characterizing a wavy substrate.<cit.> Other examples include curvotaxis, that is motion attributed to curvature changes, such as that observed in the context of curved protein complexes at the cell.<cit.> Further possibilities, include small condensate droplets that can move due to the presence of asymmetric pillars<cit.>, three-dimensional (3D) capillary ratchets,<cit.> or pinning and depinning effects at the three-phase contact line.<cit.> Interestingly, in the case of capillary ratchets, the surface tension can play a role in determining the direction of motion, whether this is along or against the gradient.<cit.> In addition, substrates with wettability gradients have been reported as a possibility for the autonomous motion of liquids,<cit.> for example, due to corrosion,<cit.> while long-range transport has been realized by using electrostatic<cit.> or triboelectric charges<cit.>. In the presence of an external energy source, motion is also possible, with characteristic examples being electrotaxis<cit.> and thermotaxis.<cit.> For example, in the latter case, the motion is caused by a temperature gradient that requires to be maintained along the substrate by means of an external energy source. Further examples of motion due to external sources include motion caused by electrical current <cit.>, charge <cit.>, or even simple stretching<cit.>. Situations where droplets are chemically driven have also been reported in the literature<cit.>, as well as droplets on vibrated substrates<cit.> or wettability ratchets<cit.>. Motivated by relevant experiments with liquid droplets,<cit.> we have previously proposed and investigated by computer simulation various substrate designs that can cause a sustained droplet motion.<cit.> More specifically, we have proposed two designs of brush substrates with stiffness gradient that can cause such motion either along or against the gradient direction.<cit.> In the first design, the brush substrate had a constant density of grafted polymer chains.<cit.> In this case, the stiffness gradient was a result of changes in the stiffness of the individual polymer chains along the gradient direction. We have found that the droplet can move toward areas of higher stiffness (durotaxis), where a larger number of contacts between the droplet and the substrate can be established, due to a lower substrate roughness in these areas. In the second design of a brush substrate, the grafted polymer chains were fully flexible and the stiffness gradient was imposed by changing the grafting density along a particular direction.<cit.> In this case, the droplet could move toward softer parts of the substrate (antidurotaxis), establishing more pair contacts as it penetrated into the substrate. Interestingly, the latter antidurotaxis motion might share similarities with experiments of droplets on soft substrates with stiffness gradient, where droplet motion was also observed from stiffer toward softer areas of the substrate.<cit.> Moreover, in this case, larger droplets seem to perform antidurotaxis motion more efficiently (faster), an effect that might not be attributed to gravity effects due to the weight of the droplet, as experiments were carried out for micrometer-sized water droplets, i.e., smaller than the capillary length (∼2.5 mm). 
Thus far, experimental substrates<cit.> and simulation models<cit.> have mostly demonstrated either durotaxis or antidurotaxis motion for a given substrate. Here, building upon our previous experience with durotaxis and antidurotaxis droplet motion onto brush substrates,<cit.> we show that a novel gel substrate can demonstrate both antidurotaxis and durotaxis droplet motion depending on the type of liquid. To achieve this result, a gradient in the bonding stiffness between the gel chemical units is used in our model to create the stiffness gradient along a specific direction of the gel substrate. Furthermore, by means of extensive molecular dynamics (MD) simulations of a coarse-grained model, we elucidate the mechanisms for both the durotaxis and antidurotaxis motions and their efficiency for a range of parameters relevant for this substrate design. Interestingly, we observe similarities for these mechanisms with what we have previously seen for brush substrates.<cit.> Thus, this may point to more universal features of such substrates that can cause durotaxis and antidurotaxis motion of fluids, and holds hope for the experimental realization of such substrates. In the following, we provide details of the system, simulation model and methodology. Then, we will present and discuss the obtained results, while we will draw the conclusions resulting from our investigations in the final section. § MATERIALS AND METHODS The gel substrate of this study is illustrated in Figure <ref> with typical configurations of the droplet at the beginning and the end of successful durotaxis/antidurotaxis simulations. In particular, the droplet remains on the top of the substrate as it reaches the stiffest end of the substrate in the durotaxis case, while the droplet appears to penetrate into the substrate in the case of antidurotaxis motion as it reaches the softest end of the substrate. The length of the substrate in the direction of the gradient is l_x=100 σ, where σ is the length unit. The gel substrate is supported by a smooth and unstructured substrate and consists of beads each initially placed at the positions of the vertices of a simple cubic lattice with unity lattice constant (expressed in units of σ) with harmonic interactions between beads reaching up to the second nearest neighbors. To realize the gradient in the substrate stiffness, the magnitude of these interactions (elastic constant, Γ_ s in units of ε/σ^2, where ε is the energy unit) linearly varies with the position x of the beads obtaining larger values towards the stiffer regions of the substrate (Figure <ref>), while the equilibrium length is set to 1.2 σ. The rate of change of Γ_ s is 0.05 ε/σ^3 at steps of 2 σ starting from an initial value of Γ_ s =0.5 ε/σ^2 at the softest end of the substrate, thus implying Γ_ s =5 ε/σ^2 at the stiffest end. Since this particular choice was proven to be optimal for carrying out our parametric investigation, our results will be based on this specific substrate setup. Once the substrate reached its equilibrium state by means of molecular dynamics simulation (further details will be provided below), a polymer droplet was first placed onto the softest and then the stiffest part of the substrate to examine the direction of motion (antidurotaxis or durotaxis). Once, the direction of motion was identified, the decision was taken onto which end of the substrate the droplet should be placed and an ensemble of simulations were carried out for each set of parameters. 
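For concreteness, the following minimal sketch shows one possible reading of the stiffness-gradient protocol described above (Γ_ s assigned in bins of 2 σ along x, spanning the quoted end values of 0.5 and 5 ε/σ^2); the helper function and its name are purely illustrative and are not part of the actual simulation code.

```python
import numpy as np

# One reading of the stiffness profile described above: Gamma_s is assigned in bins of
# width 2 sigma along x, increasing from 0.5 eps/sigma^2 at the soft end to
# 5 eps/sigma^2 at the stiff end of the 100-sigma-long substrate (illustrative only).
L_X, STEP = 100.0, 2.0
gamma_bins = np.linspace(0.5, 5.0, int(L_X // STEP))   # elastic constant per 2-sigma bin

def gamma_s(x):
    """Elastic constant (in eps/sigma^2) assigned to a gel bead at position x (hypothetical helper)."""
    i = min(int(x // STEP), gamma_bins.size - 1)
    return gamma_bins[i]

print(gamma_s(0.0), gamma_s(50.0), gamma_s(99.9))   # 0.5 ... ~2.8 ... 5.0
```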
Nonbonded interactions between particles (beads) in the system are based on the Lennard-Jones (LJ) potential, expressed by the relation U_ LJ(r) = 4ε_ ij[ (σ_ ij/r)^12 - (σ_ ij/r)^6]. Here, r is the distance between any pair of beads, with indices i and j in Equation <ref> reflecting the type of bead, namely “d” for the droplet and “g” for the gel substrate. The size of the beads is the same, namely σ_ ij = σ. Attractive interactions between the gel and the droplet beads as well as among the droplet beads are used by choosing a cutoff of r_ c=2.5 σ for the LJ potential, while an athermal model is used for the interactions among the gel beads. The strength of LJ interactions between the droplet beads is set to ε_ dd=1.5 ε. Different choices are considered for the interaction strength between the polymer droplet and the gel substrate, namely ε_ dg=0.3-1.0 ε, thus in practice controlling the affinity of the droplet to the substrate. Finally, the droplet consists of fully flexible polymer chains to avoid evaporation effects, which may also further complicate our analysis. Hence, the vapor pressure is sufficiently low.<cit.> In particular, the droplet consists of polymer chains with length N_ l=10 beads each, while the total size of the droplet is 8000 beads. To bind the beads together in each polymer chain of the droplet a harmonic potential was used with elastic constant 1000 ε/σ^2 and equilibrium length σ. To control the temperature of the system, T=ε/k_B (k_B is Boltzmann's constant), the Nosé–Hoover thermostat was applied,<cit.> as implemented in the HOOMD-Blue package (version 2.9.7).<cit.> The integration time step was set to 0.005 τ, where τ=(mσ^2/ε)^1/2 is the natural MD time unit. For every set of parameters, we perform five simulation experiments with different initial conditions (e.g., changing the random seed for generating the initial velocities of the system) to statistically collect data for the analysis of properties. Finally, each simulation run lasts a total of 50×10^6 time steps, which was deemed long enough for drawing reliable conclusions on the possibility of observing the droplet motion and carrying out the necessary analysis of the relevant properties. Before presenting our durotaxis and antidurotaxis experiments and their analysis, we take a step back to analyze the stiffness of the substrate and see how this varies with the strength of the interactions between beads used to create the gradient. To perform our analysis, we consider a nanoindenter that slowly impinges onto the gel substrate without gradient, as has been done in a previous study in the case of protein fibrils.<cit.> By recording the total force of the substrate beads on the nanoindenter, the Young modulus, γ, of the gel substrate can be determined similarly to an empirical technique used to estimate the Young modulus in atomic-force-microscopy (AFM) nanoindentaion experiments. The Young modulus of the nanoindenter is infinite and we therefore define each system in the limit of the Hertzian theory.<cit.> The indenter is a sphere with a curvature radius R_ ind that slowly impinges onto the gel substrate with a velocity u_ ind. Here, this velocity was the same in all nanoindentation exeriments, i.e., data were collected every 5×10^3 MD time steps for a total trajectory length of 5×10^5 time steps with a time step of 0.005 τ. 
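For reference, a minimal sketch of the truncated Lennard-Jones pair potential of the equation above, evaluated with the parameter choices quoted in the text, is given below; in the actual simulations these interactions are handled internally by the HOOMD-Blue package, so this function is purely illustrative.

```python
import numpy as np

# Truncated Lennard-Jones pair potential (in units of eps), with sigma_ij = sigma = 1,
# cutoff r_c = 2.5 sigma, eps_dd = 1.5 eps for droplet-droplet pairs and
# eps_dg = 0.3-1.0 eps for droplet-gel pairs (illustrative evaluation only).
def u_lj(r, eps_ij, sigma_ij=1.0, r_cut=2.5):
    r = np.asarray(r, dtype=float)
    u = 4.0 * eps_ij * ((sigma_ij / r) ** 12 - (sigma_ij / r) ** 6)
    return np.where(r < r_cut, u, 0.0)

r = np.linspace(0.95, 3.0, 6)
print("droplet-droplet:", np.round(u_lj(r, eps_ij=1.5), 3))
print("droplet-gel    :", np.round(u_lj(r, eps_ij=0.5), 3))
```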
Then, the nanoindentation force, f, is defined by the Hertz relation f = αγ h^3/2, where h is the penetration or nanoindentation length, γ Young modulus, and α = 4R_ ind^1/2/3(1-ν^2) In our simulation experiments, the radius of the nanoindenter was R_ ind=5 σ and the maximum penetration depth h_ max=10 σ. ν is the Poisson coefficient, in our case taken as 0.5, which corresponds to a homogeneous deformation on the x-y plane. Then, the Young modulus, γ, can be determined by calculating the slope of the curves of Figure <ref>a for each gel substrate without gradient but with a different value of the harmonic elastic constant, Γ_ s. By plotting the obtained Young's moduli as a function of Γ_ s (Figure <ref>b), we conclude that increasing Γ_ s indeed results in stiffer gel substrates. By attempting to fit a power-law function on these data, we obtained an exponent of about 3/4 for the relation between γ and Γ_ s. § RESULTS AND DISCUSSION Given the constant gradient maintained in each of our simulation experiments, which is optimally chosen to facilitate our properties exploration, the first aspect of our research concerns the possibility of causing durotaxis or antidurotaxis motion and the probability of such motion for a range of droplet–substrate affinities. To address this issue, a droplet is placed either on the softest or the stiffest part of the substrate and the outcome of the simulation is monitored. Figure <ref> visually summarizes our conclusions. For values ε_ dg<0.2 ε, the interaction between the droplet and the gel substrate is weak. Hence, in this case the droplet detaches from the substrate due to the thermal fluctuations and this case deserves no further consideration here. Durotaxis motion takes place when 0.2 ε<ε_ dg<0.8 ε. For this range of affinity strength between the droplet and the substrate, we observe that the droplet moves from softer to stiffer parts of the gel substrate covering its full length in the x direction, a manifestation of successful durotaxis motion for the droplet. While for 0.3 ε≤ε_ dg≤0.6 ε the probability that the droplet successfully moves from the softest to the stiffest side of the substrate is 1.0 as calculated from an ensemble of five different trajectories for each affinity case, this probability becomes less than unity when ε_ dg=0.7 ε. Moreover, we were able to only detect partial motion along the substrate, when ε_ dg=0.8 ε, reporting threrefore this case as unsuccessful. This provides a first indication that the droplet motion may become less effective for larger values of ε_ dg. Indeed, this is corroborated by monitoring the average velocity of the droplet for different values ε_ dg (Figure <ref>), which clearly indicates that an increased affinity between the droplet and the substrate will lead to a smaller average durotaxis velocity. Further increase of the affinity, namely ε_ dg=0.9 ε, leads to successful antidurotaxis motion. In this case, the droplet reached the softest part of the gel substrate and the recorded average velocity was of the same magnitude as in the durotaxis case with ε_ dg=0.7 ε. Finally, antidurotaxis motion for ε_ dg=ε was observed, but in this case the droplet was not able to cover the full distance from the one to the other side of the gel substrate for any of our five trajectories and therefore this case was considered unsuccessful, as was the case of partial durotaxis droplet motion for ε_ dg=0.8 ε. 
The above observations may allow us to conclude that both durotaxis and antidurotaxis motions are possible on the same substrate. Since this takes place by varying the droplet–substrate affinity in our simulation, we may argue that the direction of motion eventually depends on the choice of liquid for the droplet. Also, durotaxis motion on gel substrates is overall more efficient than the antidurotaxis motion, especially when the droplet–substrate affinity is lower. As in our previous studies,<cit.> we attempted to identify the driving force for both antidurotaxis and durotaxis cases. X in Figure <ref> indicates the coordinate of the center-of-mass of the droplet in the x direction with the zero value corresponding to the center of the gel substrate. Z is the coordinate of the center-of-mass of the droplet in the z direction with the zero indicating the position of the substrate boundary, which was determined through the inflection point in the density profile of each substrate as done in our previous work. <cit.> Moreover, the peculiarities of the gel–droplet interface have been explored recently in detail.<cit.> On the basis of our analysis for the durotaxis cases, we observe that the interfacial energy between the droplet and the substrate decreases as a function of the center-of-mass position of the droplet in both the x (Figure <ref>a) and the z directions (Figure <ref>b), which suggests that the droplet establish a larger number of contacts with the gel as the it moves along the substrate (see also Movie 1 in the Supporting Information). As a result, the droplet is more strongly attracted by the gel as it moves toward the stiffer parts, which results in a decrease in the position Z of the center-of-mass of the droplet, but with the droplet however remaining on top of the substrate. Moreover, we observe that the slope in the energy reduction of the interfacial energy as a function of the position X of the center-of-mass of the droplet is larger for smaller values of the attraction strength ε_ dg (Figure <ref>a), which reflects the conclusions relating to the average velocity of the droplet presented in Figure <ref>, that is a lower adhesion of the droplet to the gel substrate offers a more efficient (in terms of droplet speed) durotaxis motion. This motion mechanism of the droplet shares similarities with the durotaxis motion previously observed on brush substrates,<cit.> where the droplet moves to the areas of smaller surface fluctuations of the substrate, that is substrate parts of lower roughness. The results of Figure <ref> for the durotaxis cases can be compared with those for the antidurotaxis cases presented in Figure <ref>. Notably, we observe that the interfacial energy is much more reduced for the antidurotaxis cases in comparison with the durotaxis ones. More importantly, we also see that the droplet penetrates deeper into the substrate in the case of antidurotaxis droplet motion and the center-of-mass of the droplet eventually lies below the top of the substrate as the antidurotaxis motion completes (see also Movie 2 of the Supporting Information). This mechanism is therefore more similar to the one observed in the case of antidurotaxis motion for brush substrates with gradient in the grafting density of the polymer chains.<cit.> In this case, the minimization of the interfacial energy was due to the penetration of the droplet onto the brush substrate. 
For this reason, the droplet motion is much less efficient than that in the case of durotaxis simulations, since the droplet faces a larger resistance in carrying out the motion along the substrate by bypassing the gel beads. Finally, we monitored the trajectories of the center-of-mass of the droplet onto the x-y plane (Figure <ref>). A different behavior of the droplet motion is observed between durotaxis and antidurotaxis cases. In particular, we see that the droplet motion is more influenced by thermal fluctuations as indicated by the lateral motion in the y direction in the case of durotaxis (Figure <ref>a). The droplet clearly initially moves at a higher instantaneous speed towards the stiffer areas and then slightly slows down. This pattern of motion is observed for both the lowest and the highest affinity between the droplet and the substrate, which may indicate that the affinity might play a lesser role in determining the exact trajectory of the particle. The weakening effect of the gradient effect on the droplet velocity as the droplet reaches the ever stiffer parts of the substrate has been thus far observed in all previous durotaxis/antidurotaxis studies.<cit.> In the case of antidurotaxis experiments (Figure <ref>b), the droplet appears to only move in the x direction with minimal lateral (diffusive) motion in the y direction, which may suggest that the motion in this case is dominated by the droplet–substrate interactions. This takes place to a larger degree as the droplet moves to the softer parts of the substrate. Hence, we can see that the droplet motion fundamentally differs in the case of antidurotaxis and durotaxis cases, with the antidurotaxis motion providing a more certain path for the trajectory of the droplet moving along the substrate during the simulation experiments. § CONCLUSIONS In this study, we have proposed and investigated a novel substrate design based on a gel material. Importantly, we have been able to demonstrate that durotaxis and antidurotaxis motion of a droplet is possible on the same substrate and the direction of motion only depends on the fluid. To our knowledge, this is the first time that this possibility is realized for gel substrates. As in the case of durotaxis onto brush substrates<cit.>, we have found that the minimization of the interfacial energy between the droplet and the substrate is the dominant driving force responsible for the motion of the droplet. This takes place by the substantial penetration of the substrate by the droplet in the case of antidurotaxis or the droplet motion towards areas with smaller surface fluctuations on the top of the gel in the case of durotaxis. As a result, the trajectories of the droplet motion appear to be more diffusive in the durotaxis cases than in the antidurotaxis cases, where in the latter the droplet motion is hindered by the gel units. Moreover, recent experiments <cit.> have reported on the spontaneous droplet motion on soft, gel substrates with stiffness gradient created by varying the degree of cross-linking in the gel. 
In this case, results have pointed to the minimization of the interfacial energy between the substrate and the droplet as the driving force for the durotaxis motion of the droplet, as in the case of the simulation experiments here and in previous studies.<cit.> We have also found that durotaxis takes place for a wide range of droplet–substrate affinities, with lower affinities leading to more efficient durotaxis motion, while fully successful antidurotaxis motion has only been observed for a high value of the droplet–substrate affinity. Our study provides further evidence that both durotaxis and antidurotaxis motion can be realized on the same gel substrate. Thus, we anticipate that our work highlights new avenues for the autonomous motion of fluids based on gradient gel-substrates and provides insights into the motion of droplets driven by stiffness gradients, enhancing our understanding of similar phenomena encountered in nature. The authors thank Jan Židek for helpful discussions. This research has been supported by the National Science Centre, Poland, under grant No. 2019/35/B/ST3/03426. A. M. acknowledges support by COST (European Cooperation in Science and Technology [See http://www.cost.eu and https://www.fni.bg] and its Bulgarian partner FNI/MON under KOST-11). We gratefully acknowledge the Polish high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016607. Movie1.mp4: Movie illustrating the droplet durotaxis motion (ε_ dg=0.3 ε). Movie2.mp4: Movie illustrating the droplet antidurotaxis motion (ε_ dg=0.9 ε).
http://arxiv.org/abs/2408.12230v1
20240822090457
Tuning THz magnons in a mixed van-der-Waals antiferromagnet
[ "F. Le Mardele", "I. Mohelsky", "D. Jana", "A. Pawbake", "J. Dzian", "W. -L. Lee", "K. Raju", "R. Sankar", "C. Faugeras", "M. Potemski", "M. E. Zhitomirsky", "M. Orlita" ]
cond-mat.mtrl-sci
[ "cond-mat.mtrl-sci" ]
[]florian.le-mardele@lncmi.cnrs.fr LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France Institute of Physics, Charles University, Ke Karlovu 5, Prague, 121 16 Czech Republic Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan, Republic of China Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan, Republic of China Institute of Physics, Academia Sinica, Nankang, Taipei 11529, Taiwan, Republic of China LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France Institute of High Pressure Physics, PAS, Warsaw, Poland CENTERA, CEZAMAT, Warsaw University of Technology, Warsaw, Poland Univ. Grenoble Alpes, CEA, IRIG, PHELIQS, 17 avenue des Martyrs, 38000 Grenoble, France []milan.orlita@lncmi.cnrs.fr LNCMI-EMFL, CNRS UPR3228, Univ. Grenoble Alpes, Univ. Toulouse, Univ. Toulouse 3, INSA-T, Grenoble and Toulouse, France Institute of Physics, Charles University, Ke Karlovu 5, Prague, 121 16 Czech Republic § ABSTRACT Alloying stands out as a pivotal technological method employed across various compounds, be they metallic, magnetic, or semiconducting, serving to fine-tune their properties to meet specific requirements. Ternary semiconductors represent a prominent example of such alloys. They offer fine-tuning of electronic bands, the band gap in particular, thus granting the technology of semiconductor heterostructures devices, key elements in current electronics and optoelectronics. In the realm of magnetically ordered systems, akin to electronic bands in solids, spin waves exhibit characteristic dispersion relations, featuring sizeable magnon gaps in many antiferromagnets. The engineering of the magnon gap constitutes a relevant direction in current research on antiferromagnets, aiming to leverage their distinct properties for THz technologies, spintronics, or magnonics. In this study, we showcase the tunability of the magnon gap across the THz spectral range within an alloy comprising representative semiconducting van-der-Waals antiferromagnets FePS_3 and NiPS_3. These constituents share identical in-plane crystal structures, magnetic unit cells and the direction of the magnetic anisotropy, but differ in the amplitude and sign of the latter. Altogether these attributes result in the wide tunability of the magnon gap in the Fe_1-xNi_xPS_3 alloy in which the magnetic order is imposed by stronger, perpendicular anisotropy of iron. Tuning THz magnons in a mixed van-der-Waals antiferromagnet M. Orlita August 26, 2024 =========================================================== The ongoing research on magnetic van-der-Waals materials is multifaceted, addressing fundamental issues related to novel topological and quantum phases of matter, while also exploring potential applications across various fields, thereby driving the advancement of magnonics <cit.>. 
The pursuit of magnetic materials with continuously adjustable magnetic properties, especially the magnon spectrum, is a significant focus of current research efforts. This exploration holds promise for enabling the fabrication of multilayered structures that exhibit tailored multi-magnon gap excitations, precisely suited to meet specific demands. The magnon energies (gaps) can be tuned by temperature or magnetic field, and to a lesser extent, by pressure, strain, and electric field. The charge accumulation is also an option when working with a-few-layer stacks or heterostructures. Another way of tuning, explored in this paper, is alloying of antiferromagnets <cit.>. Even though the long-range magnetic order is – strictly speaking – excluded in such randomly organized alloys, the antiferromagnetism is often preserved and systematic trends in magnetic properties are found. In a few cases, tuning of the magnon energies has been reported, nevertheless, only in alloys with small mixing ratios <cit.>. Antiferromagnetic vdW crystals with tailored properties are often viewed as materials suitable for novel terahertz (THz) technologies. Future wireless communication with a high data throughput (6G technology and beyond) is one of them <cit.>. Exceptionally fast dynamics of the magnetic lattice <cit.> and optically active magnon modes in the sub-THz and THz spectral ranges are their key features that may allow them to become active media in various THz devices <cit.>, e.g., as fast optically-driven modulators of THz radiation <cit.>. The fine tunability is here required to match the magnon energy with the communication channels defined by the windows of atmosphere transparency. In this study, we conduct experimental investigations into alloys formed by mixing two layered antiferromagnets: FePS_3 and NiPS_3. These materials have identical crystal and magnetic lattices within the plane, featuring characteristic zigzag ferromagnetic chains aligned along the same crystallographic axis. In both cases, their magneto-crystalline anisotropy has an out-of-plane orientation, nevertheless, with a different strength and sign. Consequently, FePS_3 and NiPS_3 are identified as easy-axis and easy-plane antiferromagnets, respectively. Our findings demonstrate that by adjusting the molar fraction x in the Fe_1-xNi_xPS_3 alloy, the magnon energy can be continuously varied from 2 to 4 THz (8 to 16 meV). Remarkably, this dependence of the magnon energy can be interpreted in terms of varying the magnetic anisotropy, which maintains its perpendicular orientation up to relatively high nickel concentrations (x≈ 0.9). The high-quality mixed crystals of Fe_1-xNi_xPS_3 <cit.> were grown by a chemical vapor transport method using iodine as a transport agent. Initially, the polycrystalline powders were synthesized by the solid-state synthesis process under high vacuum conditions. The high-purity (5N) starting materials were weighted at a stoichiometric ratio and then sealed into the quartz tube with a diameter of 22 mm with 10^-3 Torr pressure. The mixed compounds were heated and grounded twice at 400 and 600^∘C to make a single-phase compound. The 200 mg iodine was added into the polycrystalline samples and sealed by the tube dimension of 20×22×400 mm^3 with 10^-3 Torr pressure. The tube was kept for growth at a two-zone furnace with the temperature range of 600-700^∘C for 200 h. After completing the growth process, the furnace temperature was reduced to room temperature at a rate of 2^∘C/min. 
The quartz tube was broken inside an argon-filled glovebox and collected good-quality single crystals. For experiments on pure NiPS_3, a commercially available crystal was used. We carried out a series of low-temperature THz magneto-transmission experiments on Fe_1-xNi_xPS_3 alloys with B oriented along the c-axis (out-of-plane direction). The radiation from a mercury lamp was analyzed by a Bruker Vertex 80v Fourier-transform spectrometer, and delivered to the sample via light-pipe optics. The Fe_1-xNi_xPS_3 samples – with an effective irradiated area of several mm^2 and thickness of several hundred microns – were kept at T=4.2 K in the helium exchange gas. The radiation was detected by a composite bolometer placed right below the sample. The measured magneto-transmission spectra were normalized using the spectrum obtained by averaging over the whole range of B explored which facilitates identifying B-dependent spectral features. The data collected on alloys with all decimal-fraction compositions, x=0, 0.1… 1, are presented in Figs. <ref>a-k, in the form of false-color plots. The extracted positions of B-dependent features are plotted in Figs. <ref>l-v. To interpret the data, we first focus on the response of pure FePS_3 that is well understood thanks to several recent studies <cit.>. These allow us to associate the observed feature with the k=0 magnon mode, corresponding to in-phase oscillations of parallel iron sublattices. Following the expectations for the classical antiferromagnetic resonance (AFMR) in easy-axis antiferromagnets <cit.>, it symmetrically splits into two branches that evolve linearly with the applied magnetic field. Around B=14 T, the lower AFMR branch undergoes avoided-crossing behaviour due to coupling with an optical phonon. This is a signature that we observe a magnon-polaron rather than bare magnon modes <cit.>. With an increasing nickel content (Figs. <ref>b-j), the overall AFMR-like of the response is preserved. It is just the position of the magnon gap which redshifts monotonically, from 4 THz in pure FePS_3 down to 2 THz in Fe_0.1Ni_0.9PS_3. For concentrations up to x=0.4, the lower AFMR branch exhibits signs of magnon-phonon coupling. In alloys with a nickel content above 0.9, no magnon-like excitations were found in our data. In pure NiPS_3, the response is dramatically different and reflects the easy-plane antiferromagnetic order, reported in preceding studies <cit.>. There, the degeneracy of the magnon mode is lifted even at B=0 and only the upper mode is seen at 5.3 meV in our data, dispersing quadratically with B <cit.>. The lower magnon mode was observed in preceding studies <cit.>, but its energy of 1.1 meV is too low to be resolved in our Fourier-transform experiments. The observed behavior implies that, in a broad range of the nickel content, the Fe_1-xNi_xPS_3 alloy behaves as an antiferromagnet with an out-of-plane easy axis. We assign this exceptional stability of the magnetic state to the fact that pure materials, FePS_3 and NiPS_3, order into the same zig-zag spin structure with ferromagnetic arrays of parallel spins that alternate antiferromagnetically in the transverse direction. The only difference between the two is the orientation of ordered moments, which are orthogonal to the ab plane in FePS_3, see Fig. <ref>w, but nearly in-plane for NiPS_3, see Fig. <ref>z. A sensitivity of the moment direction in an alloy to even a tiny concentration of iron can be assigned to its particularly large single-ion anisotropy. 
It is worth mentioning that in other similar alloys, such as Fe_1-xMn_xPS_3, where the pure compounds have different magnetic arrangement (zig-zag versus Néel), a more complex evolution of magnetic properties has been reported <cit.>. To describe the long-range magnetic order established in Fe_1-xNi_xPS_3 quantitatively, we introduce a global antiferromagnetic order parameter l, | l|=1. Further, we consider the single-ion magnetic anisotropy in the form of -DS_z^2 for both materials, neglecting an order-of-magnitude weaker anisotropy within the plane reported in NiPS_3 <cit.>. The magnetic anisotropy energy normalized per mole of transition metal ions then reads: E_ an = -N_Acos^2θ [D_ Fe S_ Fe^2(1-x) + D_ Ni S_ Ni^2 x], where θ is the angle between l and the c-axis, N_A stands for the Avogadro constant. D_ Fe = 2.66 meV <cit.> and D_ Ni = -0.21 meV <cit.> are single-ion anisotropy constants for two magnetic ions with the respective spins S_ Fe = 2 and S_ Ni = 1. We find that the out-of-plane orientation of ordered moments is energetically favourable (i.e., E_ an <0) up to a relatively high nickel concentration: x_max = 4/[4-D_Ni/D_Fe] ≈ 0.98, in agreement with our experimental findings. The magnon gaps, accessed directly in our magneto-optical experiments, can be computed using the hydrodynamic spin-wave theory <cit.>. The consideration is based on the Lagrangian for a collinear antiferromagnet, described by the order parameter l: L = χ_⊥/2 (∂_t l)^2 - E_ an, where χ_⊥ is the transverse susceptibility. The magnetic anisotropy energy (<ref>) is here expressed as E_ an= -a/2 l_z^2, where l_z=cosθ and a= 2N_A[D_ Fe S_ Fe^2(1-x) + D_ Ni S_ Ni^2 x]. In the easy-axis case (a>0), the antiferromagnetic vector is oriented along the c-axis. Then, the equation of motion for (<ref>) yields two degenerate magnon modes with the gap: Δ_1,2 = √(a/χ_⊥) . For the easy-plane anisotropy (a<0), the order parameter l lies in the basal plane and two magnon gaps read Δ_1=0, Δ_2 = √(|a|/χ_⊥). It is worth noting that the calculated energies of magnon gaps are in line with k=0 gaps obtained using the standard linear spin-wave theory applied to pure materials, FePS_3 <cit.> and NiPS_3 <cit.>. To compute magnon gaps in the magnetically ordered alloy, we use the coherent potential approximation (CPA) and find the parameters a and χ_⊥ that enter Eq. <ref>. The CPA approximation is well established for electrons in solids <cit.>, including their magnetic properties <cit.>. In the easy-axis case (a>0), we obtain the following expression for the twice degenerate magnon gap (<ref>): Δ_1,2^2 = 4DS^2[3J_3S^2 + J_1S^2 + 4J_2S^2 + DS^2]/S, where the averaged microscopic parameters depend on the composition x of the alloy: S = S_ Fe(1-x) + S_ Ni x DS^2 = D_ FeS_ Fe^2(1-x) + D_ Ni S_ Ni^2 x J_nS^2 = J_n^ FeFe S_ Fe^2(1-x)^2 + J_n^ NiNi S_ Ni^2 x^2 + +2J_n^ FeNi S_ FeS_ Ni(1-x)x. For FePS_3 and NiPS_3, there exists a solid set of microscopic parameters deduced from neutron scattering experiments, see Tab.  <ref>. These allow us to estimate the averaged values of microscopic parameters S and DS^2 needed to calculate the magnon gap for a given composition x. The situation is, however, more complex for the average exchange constants J_nS^2. These also depend on a priori unknown strength of the exchange coupling between pairs of iron and nickel J_n^FeNi. Fortunately, the form of Egs. 
<ref> and <ref> allows us to introduce a single effective Ni-Fe exchange constant α= 2S_ FeS_ Ni[J_1^ FeNi+4J_2^ FeNi+3J_3^ FeNi] that we use as a fitting parameter. The magnon gap calculated using the above introduced hydrodynamical model is compared with experimentally extracted values in Fig. <ref>x. Several observations can be made: (i) reasonable semi-quantitative agreement can be obtained (light blue line) when α is kept as the only free parameter, with the best agreement for α = -1.4 meV; (ii) a closer analysis shows that quantitative agreement, with a truly monotonic Δ_1,2(x) dependence, can be achieved when the J_3^Ni exchange constant, particularly strong for nickel, is ad hoc reduced (dark blue curve), thus serving as an additional fitting parameter; (iii) the observed redshift of the magnon mode with x is, to a great extent, due to effective tuning of the out-of-plane (perpendicular) magnetic anisotropy, cf. Eqs. <ref> and <ref>; and (iv) no AFMR signal was observed in alloys with nearly or completely vanishing magnetic anisotropy (0.9<x<1). This suggests that the long-range magnetic order might not be established in that composition range, in line with general expectations of Mermin-Wagner theorem <cit.> for a vanishing magnetic anisotropy. It is instructive to put the last two points in a broader context of recent research on magnetic materials. Over the past few years, there was a considerable interest in the perpendicular magnetic anisotropy, primarily motivated by possible applications in fast read-write and laterally-dense data storage <cit.>. This concerns a wide class of magnetic materials, such as thin layers of bulk ferromagnets, interfaces of magnetic and non-magnetic systems, but also antiferromagnets, including synthetic ones, see, e.g., Refs. <cit.>. In our case, our data show a possibility to effectively tune, on demand, the strength of the perpendicular anisotropy just by mixing two sibling materials with the same magnetic lattice, with the same orientation, but different strength and sign of the magnetic anisotropy. The dependence of the effective g factor, deduced from the slope of the AFMR branches, is another interesting result of our experiments (see Fig. <ref>y). The g factor for pure compounds only slightly exceeds the value expected for a bare electron (g_Fe≈ g_Ni≈ 2.1), but the g factor in the alloys visibly increases, reaching its maximum around g≈ 2.35 at x≈ 0.5. While any departure of the g factor from the free electron value must be associated with a spin-orbit coupling, we do not see any clear mechanism responsible for the observed enhancement, notably in a compound with relatively light atoms, and therefore, relatively weak spin-orbit interaction. We speculate about inhomogeneity in the alloy that may create local distortions of sulfur-octahedra around magnetic ions. As a result, the crystal field levels and effective g factors may got modified in comparison to pure compounds. To conclude, we have experimentally studied the antiferromagnetic resonance in the mixed vdW crystal Fe_1-xNi_xPS_3. Our data show that the long-range magnetic order exists even in mixed crystals with a random distribution of magnetic atoms lacking translational symmetry. The observed redshift of the AFMR mode, across the THz range, with the increasing nickel concentration demonstrates a possibility to tune widely, and on demand, the magnon energy just by choosing an appropriate mixing ratio of vdW antiferromagnets. 
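As a compact numerical illustration of the anisotropy-averaging argument behind this tunability (the expressions for E_ an, x_max and the averaged DS^2 given above), the following sketch evaluates the composition dependence using only the single-ion parameters quoted in the text; it is illustrative and does not reproduce the full CPA gap calculation, for which the exchange constants of Tab. <ref> would also be needed.

```python
import numpy as np

# Single-ion anisotropy constants quoted in the text (meV) and spins of Fe2+ / Ni2+
D_FE, S_FE = 2.66, 2.0
D_NI, S_NI = -0.21, 1.0

# Composition at which the averaged out-of-plane anisotropy changes sign (easy-axis -> easy-plane)
x_max = 4.0 / (4.0 - D_NI / D_FE)
print(f"x_max = {x_max:.3f}")          # ~0.98, as quoted in the text

# Averaged anisotropy term DS^2(x) whose decrease with x drives the magnon-gap redshift
for x in np.linspace(0.0, 1.0, 11):
    ds2 = D_FE * S_FE**2 * (1.0 - x) + D_NI * S_NI**2 * x
    regime = "easy-axis" if ds2 > 0 else "easy-plane"
    print(f"x = {x:.1f}:  DS^2 = {ds2:+6.2f} meV  ({regime})")
```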
Alloying antiferromagnetic materials thus emerges as a technologically pertinent approach for precisely tailoring their properties. It may facilitate the fabrication of magnonic structures with a reduced dimensionality. Acknowledgement M.E.Z. acknowledges support by the ANR, France, Grant No. ANR-19-CE30-0040. The work was supported by the Czech Science Foundation, project No. 22-21974S. The work has been supported by the exchange programme PHC ORCHID (50852UC). R.S. acknowledges the financial support provided by the Ministry of Science and Technology in Taiwan under project numbers NSTC-111-2124-M-001-009; NSTC-110-2112-M-001-065-MY3; AS-iMATE-113-12. M.P. acknowledges support from the European Union (ERC TERAPLASM No. 101053716) and the CENTERA2, FENG.02.01-IP.05-T004/23 project funded within the IRA program of the FNP Poland, co-financed by the EU FENG Programme.
http://arxiv.org/abs/2408.11640v1
20240821141003
Classicality of Stochastic Noise Away From Quasi-de Sitter Inflation
[ "Mahdiyar Noorbala" ]
hep-th
[ "hep-th", "astro-ph.CO", "gr-qc" ]
§ INTRODUCTION Inflation is the most popular paradigm for primordial cosmology <cit.>. Vacuum fluctuations of the inflaton field source curvature perturbations that seed the structure in the universe as well as the anisotropies in the cosmic microwave background. The study of long wavelength component of the field is most easily carried out in the stochastic approach <cit.>. This can be done both for the inflaton field itself as well as any spectator (test) field that lives on a fixed inflationary background without affecting the dynamics of the geometry. In this paper we focus on the latter case, but in either case, the stochastic approach is based on separating the UV/IR modes via a cutoff scale k_σ = σ H, where H is the Hubble parameter and σ is a dimensionless number. By integrating out the UV modes k>k_σ, one obtains the coarse-grained field (a.k.a., the long mode, or the IR mode) and the UV modes, as they leave the horizon as a result of the accelerating expansion, turn out to play the role of a stochastic noise as a source term in a Langevin equation. Thus the stochastic approach is essentially an effective theory for the stochastically evolving classical IR modes. It is shown <cit.> that in order for this classical picture to hold for a massive field in dS space, one requires σ to be small, more specifically, exp(-3H^2/m^2) ≪σ^2 ≪m^2/3H^2≪ 1. Furthermore, the resulting Langevin equation governing the coarse-grained field has a source term that is a white noise with an amplitude that is independent of the cutoff parameter σ. The Langevin equation, or the corresponding Fokker-Planck equation, can be solved and various properties of the system can be read from the correlation functions and probability distributions <cit.>. There is also a vast literature on the stochastic δ N formalism where the statistics of curvature perturbations is related to that of the inflaton field and which is particularly useful in studying the large fluctuations of curvature perturbations <cit.>. The above picture is conventionally derived for a field living in dS or quasi-dS background where the equation of state parameter w is equal or close to -1, i.e., the first slow-roll parameter ϵ = -Ḣ/H^2 is assumed to be small. As we show in this paper, the criterion σ≪1 is required for classicality only when ϵ is small. Indeed, we present a situation away from w=-1 (i.e., with non-small ϵ) in which fairly large values of the cutoff (σ∼1) are sufficient for classicality. To be clear, we should mention that this is not the first work to study stochastic inflation in a non-dS background. Indeed, the general form of the noise is known to be proportional to the power spectrum of field fluctuations, regardless of the background FLRW geometry (see, for example, ref. <cit.>). However, to our knowledge, when it comes to the question of classicality, either a thorough analysis is absent or it is assumed that the geometry is quasi-dS (ϵ≪1). On the other hand, there are also studies of stochastic inflation in the non-slow-roll regime (see, for example, refs. <cit.>). 
However, these are models of ultra-slow-roll <cit.> where the first slow-roll parameter ϵ is small and it is the second slow-roll parameter dlogϵ/dN that is large.[As usual, we reserve the term “slow-roll” for the situation where all of the slow-roll parameters are small (∀ n:ϵ_n≪1, where ϵ_1 = -Ḣ/H^2 and ϵ_n+1 = dlogϵ_n/dN). We use the term “quasi-dS” when the first slow-roll parameter ϵ_1 is small, regardless of the smallness or largeness of the higher slow-roll parameters. The term “non-slow-roll” usually means a quasi-dS regime where slow-roll is violated, but to avoid confusion, we don't use it hereafter. We do not work with higher slow-roll parameters either, and so we drop the index: everywhere that ϵ appears it refers to -Ḣ/H^2.] We demonstrate our point in a very simple setup: a free massless field on an FLRW background with arbitrary w that is constant in time but not necessarily close to -1. We also require inflation to take place, so we impose w<-1/3. Then we investigate the condition of classicality and find that, although near w=-1 we need σ≪1, as we get close to w=-1/3, this constraint is relaxed and σ doesn't have to be small. The rest of this paper is organized as follows: We review the general theory of stochastic inflation for a spectator field on a generic inflationary background in section <ref>. Then we revisit the conventional case of a free field on dS space in section <ref>. Section <ref> is the main part of the paper, where we present our results about the classicality criterion away from the quasi-dS regime. Finally we summarize and conclude in section <ref>. § REVIEW OF STOCHASTIC INFLATION AND THE CRITERION OF CLASSICALITY Let us review the derivation of the Langevin equation in stochastic inflation and see how the noise term emerges. Along the way, we pay careful attention to the criterion of classicality. We consider a non-interacting spectator scalar field on a fixed inflationary background ds^2 = -dt^2 + a^2 d x^2 = a^2 [-dτ^2 + d x^2], where τ is the conformal time. In the Heisenberg picture, the field operator χ̂(τ,x̱) satisfies χ̂” + 2 Hχ̂' + m^2 a^2 χ̂+ ∇^2χ̂= 0, where the conformal Hubble parameter H =a'/a is related to the usual Hubble parameter H = ȧ/a by H=aH.[We denote time derivative with respect to t by a dot, and derivative with respect to τ by a prime.] It follows that the Fourier mode χ̂_ḵ(τ) = ∫d^3x/(2π)^3/2χ̂(τ,x̱) e^-iḵ·x̱ is given in terms of creation and annihilation operators by χ̂_ḵ(τ) = χ_k(τ) â_ k + χ_k^*(τ) â_- k^†, where the mode function χ_ḵ is related to the Mukhanov-Sasaki variable by u_k = a χ_k which satisfies u_k” + ( k^2 + m^2a^2 - a”/a) u_k = 0. We employ the standard Bunch-Davies state by imposing the asymptotic condition u_k → e^-ikτ/√(2k) when the mode is deep inside the horizon (k≫ H). It should be noted that, although these expressions are written for generic a(τ), it must in fact be an inflationary background in the past (namely, ä>0, or equivalently, ϵ < 1), so that the modes exit the horizon rather than enter the horizon. Otherwise the asymptotic boundary condition cannot be imposed in the far past. 
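As a practical aside, the mode equation together with the Bunch-Davies condition can also be handled numerically; the sketch below integrates it for the exact dS background treated in the next section (a = -1/Hτ, so a''/a = 2/τ^2), with illustrative parameter values, and checks that a light field approaches the familiar super-horizon amplitude H/√(2k^3).

```python
import numpy as np
from scipy.integrate import solve_ivp

# Integrate u_k'' + (k^2 + m^2 a^2 - a''/a) u_k = 0 for exact dS, a = -1/(H tau),
# starting from the Bunch-Davies condition u_k -> e^{-ik tau}/sqrt(2k) deep inside
# the horizon. Parameter values are illustrative.
H, m, k = 1.0, 0.1, 1.0

def rhs(tau, y):
    u_re, u_im, du_re, du_im = y
    a = -1.0 / (H * tau)                       # tau < 0
    w2 = k**2 + (m * a)**2 - 2.0 / tau**2      # a''/a = 2/tau^2 in exact dS
    return [du_re, du_im, -w2 * u_re, -w2 * u_im]

tau0, tau1 = -200.0 / k, -1e-3 / k             # from deep inside to far outside the horizon
u0 = np.exp(-1j * k * tau0) / np.sqrt(2.0 * k)
du0 = -1j * k * u0
sol = solve_ivp(rhs, (tau0, tau1), [u0.real, u0.imag, du0.real, du0.imag],
                rtol=1e-9, atol=1e-12)
u_end = sol.y[0, -1] + 1j * sol.y[1, -1]
chi_end = abs(u_end) * H * abs(tau1)           # |chi_k| = |u_k| / a
print("|chi_k| outside the horizon:", chi_end, " vs H/sqrt(2 k^3) =", H / np.sqrt(2.0 * k**3))
```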
The key idea of stochastic formalism is the short/long (UV/IR) mode decomposition with respect to a momentum cutoff k_σ(τ) = σ H(τ) that corresponds to a wavelength larger than the horizon by the factor σ^-1, i.e., χ̂_s(τ,x̱) = ∫d^3k/(2π)^3/2 χ̂_k(τ) e^i ḵ·x̱ θ( k - k_σ(τ) ), χ̂_l(τ,x̱) = ∫d^3k/(2π)^3/2 χ̂_k(τ) e^i ḵ·x̱ θ( k_σ(τ) - k ), where χ̂_l is the long mode component of χ̂, χ̂_s is its short mode component, and θ is the Heaviside step function that is employed as our window function. Similar to the field χ̂, the field velocity (with respect to the e-folding time N) v̂=dχ̂/dN can be split into the long mode v̂_l and the short mode v̂_s as follows v̂_s(τ,x̱) = ∫d^3k/(2π)^3/2 dχ̂_k/dN e^i ḵ·x̱ θ( k - k_σ(τ) ), v̂_l(τ,x̱) = ∫d^3k/(2π)^3/2 dχ̂_k/dN e^i ḵ·x̱ θ( k_σ(τ) - k ). These coarse-grained fields are related by the equation of motion dχ̂_l/dN = v̂_l + ξ̂_χ, dv̂_l/dN = -(3-ϵ) v̂_l - 1/ H^2 (m^2 a^2 + ∇^2) χ̂_l + ξ̂_v, where use has been made of eq. (<ref>) in the second line, and the noise operators ξ̂_χ and ξ̂_v that appear above are given by ξ̂_χ(τ,x̱) = [1-ϵ(τ)] ∫d^3k/(2π)^3/2 χ̂_k(τ) e^i ḵ·x̱ δ( k/k_σ(τ) - 1 ), ξ̂_v(τ,x̱) = [1-ϵ(τ)] ∫d^3k/(2π)^3/2 dχ̂_k/dN e^i ḵ·x̱ δ( k/k_σ(τ) - 1 ). In the following we will be interested in a single patch and ignore the spatial variation of the fields, hence dropping the x̱-dependence. Then only the time label of the fields remains, for which we switch to the e-folding time N=log a. The commutators of the noise operators are then given by [ξ̂_χ(N_1), ξ̂_χ(N_2)] = 0, [ξ̂_v(N_1), ξ̂_v(N_2)] = 0, [ξ̂_χ(N_1), ξ̂_v(N_2)] = 2i σ^3 (1-ϵ)( H/2π)^2 δ(N_1-N_2), where ϵ, H and k_σ are evaluated at N=N_1. The anti-commutators, evaluated in the vacuum state, are also found to be: ⟨ 0| {ξ̂_χ(N_1), ξ̂_χ(N_2) } |0 ⟩ = 2(1-ϵ) P_χ δ(N_1-N_2), ⟨ 0| {ξ̂_χ(N_1), ξ̂_v(N_2) } |0 ⟩ = 2(1-ϵ) P_χ,v δ(N_1-N_2), ⟨ 0| {ξ̂_v(N_1), ξ̂_v(N_2) } |0 ⟩ = 2(1-ϵ) P_v δ(N_1-N_2), where P_f(k,N) = k^3/2π^2 |f_k(N)|^2 is the dimensionless power spectrum of f=χ or f=v=dχ/dN, and P_χ,v(k,N) = k^3/2π^2 Re[ χ_k(N) v^*_k(N) ], all of which are evaluated at k=k_σ(N_1) and N=N_1. A necessary condition for having a classical picture is that the correlation functions be real and also insensitive to the order of observables, so we demand that the commutators be much smaller than the anti-commutators. Since the two commutators in eq. (<ref>) already vanish, we only need to ensure that the classicality factor C = | ⟨ 0| {ξ̂_χ(N_1), ξ̂_v(N_2) } |0 ⟩/⟨ 0| [ ξ̂_χ(N_1), ξ̂_v(N_2) ] |0 ⟩| be much larger than unity.[The smallness must be in absolute value, since the anti-commutator is real and the commutator is imaginary.] Using eqs. (<ref>) and (<ref>), C≫1 reads P_χ,v(k_σ(N),N) ≫σ^3 ( H(N)/2π)^2. This statement is equivalent to asserting that the absolute value of the symmetric combination χ_k χ'^*_k + χ_k^* χ'_k is much larger than that of the antisymmetric combination χ_k χ'^*_k - χ_k^* χ'_k (= i/a^2, by the Wronskian identity).[Also, in terms of the Mukhanov-Sasaki variable: | Re[u_k u_k'^*] - H |u_k|^2 | ≫ | Im[u_k u_k'^*] | = 1/2.] In other words, χ_k χ'^*_k is approximately real: We know it has a fixed nonzero imaginary part, but it has a much larger real part. Under these conditions, we drop the hat sign on the field and noise operators (χ̂_l, v̂_l and ξ̂_χ,v) and work with the stochastic fields and noises (χ_l, v_l and ξ_χ,v), which are now commuting c-numbers. Another way of testing for classicality is to look at the covariance matrix. Let us elaborate. 
Suppose x_i (i=1,…,n) are a set of real classical random variables, conveniently chosen to have zero mean: ⟨ x_i ⟩=0 (remember that we have ⟨ξ̂⟩=0 for our noises too). Then the n× n covariance matrix ⟨ x_i x_j ⟩ must be positive semi-definite.[Proof: Let A_ij = ⟨ x_i x_j ⟩ and a be any real-valued vector. Then a^T A a = ∑_i,j=1^n a_i a_j ⟨ x_i x_j ⟩ = ⟨ ( ∑_i=1^n a_i x_i )^2 ⟩ ≥ 0.] Indeed, given any positive semi-definite matrix, there exist n Gaussian random variables whose covariance matrix is equal to the given matrix. Now we are given the matrix of vacuum expectation values [ ⟨ 0| ξ̂_χ(N_1) ξ̂_χ(N_2) |0 ⟩ ⟨ 0| ξ̂_χ(N_1) ξ̂_v(N_2) |0 ⟩; ⟨ 0| ξ̂_v(N_1) ξ̂_χ(N_2) |0 ⟩ ⟨ 0| ξ̂_v(N_1) ξ̂_v(N_2) |0 ⟩ ], and we wish to convert it to and interpret it as the covariance matrix [ ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ ⟨ξ_χ(N_1) ξ_v(N_2) ⟩; ⟨ξ_v(N_1) ξ_χ(N_2) ⟩ ⟨ξ_v(N_1) ξ_v(N_2) ⟩ ]. The original matrix is Hermitian, but neither real nor necessarily positive semi-definite. However, it is straightforward to check that taking the real part of the off-diagonal elements makes the matrix not only real and symmetric, but also positive semi-definite. Thus we see that if this process (taking the real part of the off-diagonal elements) introduces little change, i.e., if the off-diagonal elements are approximately real, then we have a fairly accurate classical description. This is clearly equivalent to the classicality criterion (<ref>). It is useful to make a few remarks about the eigenvalues of the covariance matrix. The entries are given by eqs. (<ref>)–(<ref>), except that a division by two is necessary to go from anti-commutators to correlators. Setting aside the common factor (1-ϵ) k_σ^3/2π^2 δ(N_1-N_2), this matrix is of the form [ |χ|^2 Re(χ v^*); Re(χ v^*) |v|^2 ]. As we mentioned above, this is a positive semi-definite matrix. Its eigenvalues are λ_± = ( T ±√(T^2-4D) )/2, where T = |χ|^2 + |v|^2 is the trace and D = [ Im(χ v^*) ]^2 (= 1/(2 Ha^2)^2, by the Wronskian identity) is the determinant. As mentioned above, classicality implies that Im(χ v^*) ≪ Re(χ v^*). On the other hand, |Re(χ v^*)| ≤ |χ| |v| ≤ 1/2 (|χ|^2 + |v|^2). Thus D ≪ T^2/4, which means that one of the eigenvalues is much smaller than the other: λ_+ ≈ T ≫λ_- ≈ D/T. In fact, the ratio λ_+/λ_- is controlled by the square of the classicality factor, C^2. This is an important result, because it means that whenever the system attains classicality, there is practically only one independent noise and the other one has a comparably tiny amplitude. We have gone through this rather detailed review to emphasize that everything we have said so far applies to any inflationary background, i.e., any scale factor a(τ) with ä>0, and not just the dS or quasi-dS background. We will see below how particular choices of w lead to different consequences. Finally, let us write down the well-known Langevin equations for the long-mode field: dχ_l/dN = v_l + ξ_χ, dv_l/dN = -(3-ϵ) v_l - m^2/H^2 χ_l + ξ_v, where we have dropped the gradient term as we work in a single patch at fixed x̱. To study this system of stochastic differential equations, we need to know the statistical properties of the noise. In the forthcoming sections we investigate classicality in the dS case w=-1 and then the more general case of w. § FREE FIELD ON DS We first revisit the case of a free field on an exact de Sitter background, before considering the situation away from quasi-de Sitter space in the next section. 
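As a practical illustration of how the Langevin system above is used once the noise statistics are known, the following minimal Euler-Maruyama sketch integrates it for the exact dS, small-mass case, with a single normalized white noise ξ_n and the noise amplitudes derived later in this section; all parameter values are illustrative.

```python
import numpy as np

# Euler-Maruyama integration of the Langevin equations above in exact dS (epsilon = 0),
# using the small-mass noise amplitudes derived later in this section:
# xi_chi = (H/2pi) xi_n and xi_v = -(m^2/3H^2)(H/2pi) xi_n, with a single normalized
# white noise xi_n. Parameter values are illustrative.
rng = np.random.default_rng(0)
H, m, dN, n_steps, n_real = 1.0, 0.2, 0.01, 60_000, 2_000

chi = np.zeros(n_real)
v = np.zeros(n_real)
amp = H / (2.0 * np.pi)
for _ in range(n_steps):
    xi = rng.standard_normal(n_real) * np.sqrt(dN)       # integral of xi_n over dN
    dchi = v * dN + amp * xi
    dv = (-3.0 * v - (m / H) ** 2 * chi) * dN - (m**2 / (3.0 * H**2)) * amp * xi
    chi, v = chi + dchi, v + dv

# For a light field the variance should approach the equilibrium value 3H^4/(8 pi^2 m^2)
print("<chi^2> =", chi.var(), " vs 3H^4/(8 pi^2 m^2) =", 3.0 * H**4 / (8.0 * np.pi**2 * m**2))
```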
In dS space with constant Hubble parameter H, we have a = -1/Hτ and the Mukhanov-Sasaki equation reads u”_k + ( k^2 - (ν^2-1/4)/τ^2 ) u_k = 0, where ν = √(9/4-m^2/H^2) is a constant parameter. The solution of this equation after imposing the Bunch-Davies initial condition is u_k = 1/2 e^i(2ν+1)π/4 √(-πτ) H_ν(-kτ), where H_ν is the Hankel function of the first kind and order ν, and the irrelevant overall phase can be discarded. Inserting the mode function (<ref>) in eqs. (<ref>)–(<ref>) with ϵ=0 and dividing by 2, we find the following stochastic correlators: ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ = σ^3 H^2/8π | H_ν(σ) |^2 δ(N_1-N_2), ⟨ξ_χ(N_1) ξ_v(N_2) ⟩ = -σ^3 H^2/8π [ ( 3/2 - ν) | H_ν(σ) |^2 + σ Re[ H_ν(σ) H_ν-1^*(σ) ] ] δ(N_1-N_2), ⟨ξ_v(N_1) ξ_v(N_2) ⟩ = σ^3 H^2/8π | ( 3/2-ν) H_ν(σ) + σ H_ν-1(σ) |^2 δ(N_1-N_2). Thus the classicality criterion (<ref>) becomes ( 3/2 - ν) | H_ν(σ) |^2 + σ Re[ H_ν(σ) H_ν-1^*(σ) ] ≫ 2/π. The classicality factor C is equal to the ratio of the two sides (LHS/RHS) of this inequality. It is plotted in fig. <ref> and it is clear that C≫1 requires σ→0. This confirms the common claim that the wavelength cutoff of stochastic inflation has to be much larger than the horizon size for the entire range 3/2>ν>0. In the σ→0 limit, and provided that ν is not too close to 0 nor 3/2, the coefficients of the delta function on the right hand sides of eqs. (<ref>)–(<ref>) become, respectively,[The subleading terms are of order σ^5-2ν, σ^3 and σ^3+2ν, which are all negligible as long as ν is not too close to zero.] ( σ/2)^3-2ν [Γ(ν) H]^2/π^3, - ( σ/2)^3-2ν (3/2 - ν) [Γ(ν) H]^2/π^3, ( σ/2)^3-2ν [(3/2 - ν) Γ(ν) H]^2/π^3. Assuming the hierarchy mentioned in the Introduction, namely, exp(-3H^2/m^2) ≪σ^2 ≪m^2/3H^2≪ 1, the leading terms in the correlators become ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ = ( H/2π)^2 δ(N_1-N_2), ⟨ξ_χ(N_1) ξ_v(N_2) ⟩ = -m^2/3H^2 ( H/2π)^2 δ(N_1-N_2), ⟨ξ_v(N_1) ξ_v(N_2) ⟩ = ( m^2/3H^2)^2 ( H/2π)^2 δ(N_1-N_2). As a double check, we notice that the commutator is negligible compared to the correlator in eq. (<ref>), if m^2/3H^2 ≫σ^3, which is a consequence of the assumed hierarchy (<ref>) and consistent with ref. <cit.>. But note that the leftmost inequality, exp(-3H^2/m^2) ≪σ^2, is not a requirement of classicality. It is there to guarantee the additional nice property that the noise amplitudes are independent of σ. These are all consistent with the well-known results in the literature <cit.>, although it seems that classicality has always been shown in the small mass limit (ν≈3/2), where the σ-independence property shows up, too. If we give up the hierarchy (<ref>) and the desire to have σ-independent amplitude, then the sheer requirement of classicality yields (still for ν not too close to 0 nor 3/2, so using eqs. (<ref>)–(<ref>)): ( σ/2)^2ν ≪ 1/2π ( 3/2 - ν) Γ(ν)^2. This is always satisfiable by a suitable choice of σ. Indeed, since ν is away from 0 and 3/2, the right hand side is of order one, thus the classicality criterion in this range effectively becomes σ≪1, without necessarily requiring an exponential lower bound like (<ref>) on σ. Calculation of the higher order correlators beyond two-point for this non-interacting field proceeds by application of the Wick theorem and one finds that ξ_χ and ξ_v are Gaussian white noises. Furthermore, by diagonalizing the covariance matrix, it is evident that there is only one non-zero eigenvalue, corresponding to one independent Gaussian white noise with normalized unit amplitude, which we denote by ξ_n, that satisfies ⟨ξ_n(N_1) ξ_n(N_2) ⟩ = δ(N_1-N_2). 
The field noises are then given by ξ_χ = H/2πξ_n, ξ_v = - m^2/3H^2H/2πξ_n. Now let us consider the case where ν is very close to 3/2, i.e., the small mass limit. Clearly, the factor 3/2-ν in eqs. (<ref>) and (<ref>) vanishes and higher order terms in σ must be included. Also, for the massless case (m=0), it is impossible to choose σ to satisfy the inequalities in (<ref>). Nevertheless, when ν=3/2, we have the exact result (to all orders in σ) for eqs. (<ref>)–(<ref>): ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ = (1+σ^2) ( H/2π)^2 δ(N_1-N_2), ⟨ξ_χ(N_1) ξ_v(N_2) ⟩ = -σ^2 ( H/2π)^2 δ(N_1-N_2), ⟨ξ_v(N_1) ξ_v(N_2) ⟩ = σ^4 ( H/2π)^2 δ(N_1-N_2). Clearly, this is still a classical stochastic situation if σ≪1, as the commutators are much smaller than the anti-commutators, although the ξ_v noise amplitude depends on σ. Furthermore, the absence of interactions still implies Gaussianity. However, there is no longer an exactly vanishing eigenvalue, but to leading order we can write ξ_χ = H/2πξ_n, ξ_v = -σ^2 H/2πξ_n, as the effect of a second independent noise starts at order σ^3, which is why it is practically irrelevant. Thus, if m=0, we have essentially no noise on v_l in the σ→0 limit, i.e., for every realization of the stochastic field χ_l, the field v_l is deterministically specified by eq. (<ref>) without the ξ_v term. Finally, at ν=0, corresponding to m=3H/2, the σ→0 limit of the correlators (<ref>)–(<ref>) becomes: ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ = σ^3 (logσ)^2 H^2/2π^3δ(N_1-N_2), ⟨ξ_χ(N_1) ξ_v(N_2) ⟩ = -3/2σ^3 (logσ)^2 H^2/2π^3δ(N_1-N_2), ⟨ξ_v(N_1) ξ_v(N_2) ⟩ = 9/4σ^3 (logσ)^2 H^2/2π^3δ(N_1-N_2). Thus the measure of classicality (smallness of the commutator compared to eq. (<ref>)) is equivalent to the largeness of (logσ)^2. This means that σ must be exponentially small. For example, to achieve a level of classicality that is a thousand times larger than the quantumness (C=1000), we need σ_1000≈exp(-√(1000π/3)) ∼ 10^-14. Furthermore, the noises are Gaussian with only one nonzero eigenvalue, so they are both proportional to a single normalized Gaussian noise ξ_n: ξ_χ = ( σ/2π)^3/2| log(σ^2) | H ξ_n, ξ_v = -3/2( σ/2π)^3/2| log(σ^2) | H ξ_n. These amplitudes are σ-dependent but minuscule. So there is practically no noise and we have a deterministic classical system. This observation confirms the prior expectation that heavier fields behave more classically than lighter ones. § FREE MASSLESS FIELD ON ACCELERATING FLRW In the previous section we worked in an exact dS background. Now we consider a massless field on an FLRW background whose matter content is a perfect fluid with a time-independent equation of state parameter w. Of course, as we emphasized before, we also require an inflationary FLRW spacetime, so w<-1/3. The evolution of the scale factor can easily be found by solving the Friedmann equations and we obtain: a(τ)/a_0 = ( τ/τ_0)^p, where p = 2/1+3w, ϵ = 1 + 1/p = 3/2 (1+w), H(τ) = p/τ a(τ) = H_0 ( a(τ)/a_0)^-3/2 (1+w) = H_0 e^-ϵ N, where τ_0 is an arbitrary reference time used as the origin N=0 of the e-folds and at which the scale factor and the Hubble rate are set by a_0 and H_0, respectively. Fig. <ref> includes a plot of p as a function of w. As before, we have the formulas for the noise at our disposal from section <ref>. The only non-vanishing commutator can be obtained from eq. (<ref>), which now yields [ξ̂_χ(N_1), ξ̂_v(N_2)] = -iσ^3(1+3w) ( H/2π)^2 δ(N_1-N_2), where H=H_0 exp(-ϵ N_1) is the time-dependent Hubble parameter. The correlators are to be read from the anti-commutators given in eqs. (<ref>)–(<ref>). 
To obtain the mode function, we notice that the Mukhanov-Sasaki equation is again of the form (<ref>), thanks to the fact that m=0 and the observation that a”/a = ν^2-1/4/τ^2, where[Actually, the absolute values are unnecessary in the range of interest w<-1/3.] ν = | 1/2 - p | = 3/2| w-1/1+3w|. Note that m=0 is crucial in deriving a”/a∝1/τ^2, and the presence of mass would ruin this property. Also, as a side remark, note that a qualitative difference with the previous section is that ν≥3/2 here, whereas ν≤3/2 there. For reference, we have included a plot of ν as a function of w in fig. <ref>. Since ν is a constant independent of τ, we have the same solution (<ref>) as in the previous subsection, except that ν is now given by eq. (<ref>) instead of eq. (<ref>). Bearing in mind that k_σ = σ H = pσ/τ, we find ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ = σ^3 H^2/8π| H_ν(-pσ) |^2 δ(N_1-N_2), ⟨ξ_χ(N_1) ξ_v(N_2) ⟩ = -σ^4 H^2/8 π Re[ H_ν(-pσ) H_ν-1^*(-pσ) ] δ(N_1-N_2), ⟨ξ_v(N_1) ξ_v(N_2) ⟩ = σ^5 H^2/8 π| H_ν-1(-pσ) |^2 δ(N_1-N_2). These are exact expressions for the noise amplitude that we are going to exploit in the sequel. Before moving on, let us mention a remarkable cancellation here. Instead of the combination ( 3/2-ν) H_ν(σ) + σ H_ν-1(σ) that appears in eqs. (<ref>) and (<ref>), we obtain ( 1 + 2ν-1/2p) H_ν(-pσ) + σ H_ν-1(-pσ) in eqs. (<ref>) and (<ref>). Since ν = 1/2 - p, the term involving H_ν disappears altogether. Also note that the factor 1-ϵ in the noise amplitudes above is canceled out by other factors coming from the mode functions. As a cross check, we notice that setting w=-1 in eqs. (<ref>)–(<ref>) reduces them to eqs. (<ref>)–(<ref>), which is reassuring since the latter is the case of a massless field on an exact dS background. With the aid of eq. (<ref>), we now check the classicality criterion (<ref>), which now reads -p σ Re[ H_ν(-pσ) H_ν-1^*(-pσ) ] ≫2/π. Again the classicality factor C is the LHS/RHS of this inequality and is plotted in fig. <ref>. In contrast to the case of dS space (fig. <ref>), we see that it is now not necessary to have small σ in order to achieve classicality. In particular, C diverges in the vicinity of w=-1/3. To have a more quantitative picture, consider the small-σ expansion of C. There are three series of terms, starting with σ^2-2ν, σ^0 and σ^2ν, respectively: C = 1/πΓ(ν) Γ(ν-1) [ (ν-1/2) σ/2]^2-2ν( 1 + O(σ^2) ) - cot(νπ) ( 1 + O(σ^2) ) + π/Γ(ν) Γ(ν+1) sin^2(νπ)[ (ν-1/2) σ/2]^2ν( 1 + O(σ^2) ). For ν>1 the first term dominates and we can always achieve C≫1, which is desired for classicality. It is now clear that near w=-1/3, where ν blows up (see fig. <ref>), it is quite easy to obtain fairly large values of σ (of course, not larger than 1) that make C large and hence the classicality criterion (<ref>) satisfied. This is due both to the large exponent (2-2ν) of σ in eq. (<ref>) and to the presence of huge factorials in its coefficient. Let us define σ_1000 as the cutoff value at which we can achieve a classicality factor of a thousand (C=1000). Then at w=-0.4 we have σ_1000=0.55, which is equivalent to a wavelength cutoff 1/σ H equal to twice the horizon size. Other values of σ_1000 are depicted in fig. <ref>. Note that for w≳-0.5, we can achieve high levels of classicality by choosing σ∼1. This supports our claim that classicality does not generically require σ≪1. As ν decreases and falls below 1 (corresponding to w falling below -5/3), the term in the second line of eq. (<ref>) dominates. 
Since this term is constant in σ, it is now impossible to control the magnitude of C by tuning to small values of σ. This is also evident in fig. <ref>, where the blow-up at σ=0 disappears for w<-5/3. Of course, -cot(νπ) is still large around ν=1, so classicality is not lost suddenly. The next natural question is whether we can still have σ-independent noise amplitudes. In general, the noise amplitudes in eqs. (<ref>)–(<ref>) are σ-dependent, even when the classicality criterion is met. There is a special limit in which σ-independence can be achieved, which we now describe. By looking at the ξ_χ noise in eq. (<ref>) we find that we need H_ν(-pσ) to scale like σ^-3/2. Using the small-argument expansion of the Hankel function, H_ν(-pσ) ∝σ^-ν, we find that this is possible around ν=3/2, i.e., in the quasi-dS regime. Just like the conventional case, we also need to choose σ such that σ^3-2ν is close to unity, i.e., σ≫exp( -1/|2ν-3|). This is the same as the classicality condition of ref. <cit.>. Under these circumstances the correlators (<ref>)–(<ref>) become ⟨ξ_χ(N_1) ξ_χ(N_2) ⟩ = [ Γ(ν) ( -2/p)^ν]^2 H^2/8π^3δ(N_1-N_2), ⟨ξ_χ(N_1) ξ_v(N_2) ⟩ = -σ^2 Γ(ν) Γ(ν-1) ( -2/p)^2ν-1H^2/8π^3δ(N_1-N_2), ⟨ξ_v(N_1) ξ_v(N_2) ⟩ = σ^4 [ Γ(ν-1) ( -2/p)^ν-1]^2 H^2/8π^3δ(N_1-N_2). We observe that the coefficients of the delta functions are equal to those in eqs. (<ref>)–(<ref>) (the massless field on exact dS) with corrections of order w+1, as they should be. Comparing eq. (<ref>) with eq. (<ref>), the classicality criterion is found to be σ≪ -p ≈ 1. Thus we recover the same results as before in this special case, which is no surprise, as this is essentially the dS limit. Finally, we express our noises in terms of normalized Gaussian white noises. In general, the eigenvalues of the covariance matrix formed by eqs. (<ref>)–(<ref>) are given by eq. (<ref>). In the σ-independent case of the last paragraph, the answer reduces to: ξ_χ = Γ(ν)/√(2π)( -2/p)^νH/2πξ_n, ξ_v = -σ^2 Γ(ν-1)/√(2π)( -2/p)^ν-1H/2πξ_n. In principle, one could plug this and other noises we found in this paper into the Langevin equations (<ref>) and (<ref>) and try to solve them directly or by converting to the corresponding Fokker-Planck equation. This is not difficult, as the time-dependence of the noise is simple (∝ e^-2ϵ N), but it is not the stated purpose of this work. § SUMMARY AND DISCUSSION We have revisited the question of classicality of the stochastic noise for a free (non-interacting) field on an accelerating cosmological background. For the problem at hand, classicality reduces to the smallness of the commutators compared to the anti-commutators, and this criterion is contained in Eq. (<ref>). We also noticed that of the two classical noises, one has negligible amplitude and there is thus effectively only one independent noise. We reviewed the commonly studied case of a massive field on an exact dS background in section <ref> and confirmed that the cutoff σ (used to separate IR and UV modes) must be small. While all previous studies that we know of show this for small mass, we did this for the entire range 3H/2 > m >0. We found that for most of the range, σ≪1 is sufficient for classicality, but that we need σ≪(m/H)^2/3 and (logσ)^2≫1, near m=0 and m=3H/2, respectively. The noise amplitude depends on σ, except near m=0 and with the extra assumption σ^2 ≫exp(-3H^2/m^2), in harmony with the existing literature. Although we didn't consider the quasi-dS case with time varying w≈-1, similar results hold there too (see, for example, ref. <cit.>). 
In section <ref> we studied the massless field on an accelerating background with general w<-1/3, not necessarily close to the quasi-dS regime w≈-1. We obtained exact expressions for the noise amplitudes in Eqs. (<ref>)–(<ref>), which we used to investigate the classicality criterion. Our key observation was that it is not necessary to have σ≪1 in order to achieve classicality, especially as w=-1/3 is approached. This is a novel feature compared to the standard lore near w=-1. We emphasize that there is nothing intrinsic about σ<1 that connects it to classicality. In fact, there is another approach to stochastic inflation that employs fields coarse-grained on a causally connected subhorizon region and it also yields a classical stochastic picture <cit.>; in terms of Starobinsky's approach adopted here, this amounts to σ>1. We also found that as long as w>-5/3, it is always possible to arrange for classicality by a suitable choice of σ. Of course, w<-1 is in conflict with energy conditions and is not well-motivated theoretically. But given some observational motivations for w slightly less than -1, we didn't impose these extra constraints to limit the domain of our analysis. A nice feature of the conventional analysis in the quasi-dS regime is that the noise amplitudes are independent of the cutoff σ. We recover the cutoff-independence condition of ref. <cit.> near w=-1 in eq. (<ref>), but this feature is lost away from w=-1. Although cutoff-independence is an attractive property of a physical observable, the dependence of a quantity on its scale of coarse-graining does not by itself make it irrelevant to the calculation of observables. We should also mention the assumptions under which our analysis is performed. In the course of the derivation of the results of section <ref>, and then throughout the paper, we have assumed the Bunch-Davies vacuum as the state in which our expectation values are evaluated. We have also used a sharp window function (Heaviside step function) to define the coarse-grained fields, which is the origin of the whiteness of the noise. In addition, our field was free, which simplified the complications that would arise from interactions and led to trivial higher moments of the noise and its Gaussian statistics. Modification of any of these assumptions can change our analytical results, but the main conclusion stands, since we have already found an example of classicality with σ∼1. § ACKNOWLEDGEMENTS I acknowledge financial support from the research council of the University of Tehran.
http://arxiv.org/abs/2408.10970v1
20240820160254
Hybrid Recurrent Models Support Emergent Descriptions for Hierarchical Planning and Control
[ "Poppy Collis", "Ryan Singh", "Paul F Kinghorn", "Christopher L Buckley" ]
cs.AI
[ "cs.AI", "cs.SY", "eess.SY" ]
[Hybrid Recurrent Models Support Emergent Descriptions for Hierarchical Planning and Control equal* Poppy Collisequal,sussex Ryan Singhequal,sussex,verses Paul F Kinghornsussex Christopher L Buckleysussex,verses sussexSchool of Engineering and Informatics, University of Sussex, Brighton, UK versesVERSES AI Research Lab, Los Angeles, California, USA Poppy Collispzc20@sussex.ac.uk Hybrid control, Piecewise affine approximations, Exploration, Decision-making 0.3in ] § ABSTRACT An open problem in artificial intelligence is how systems can flexibly learn discrete abstractions that are useful for solving inherently continuous problems. Previous work has demonstrated that a class of hybrid state-space model known as recurrent switching linear dynamical systems (rSLDS) discover meaningful behavioural units via the piecewise linear decomposition of complex continuous dynamics <cit.>. Furthermore, they model how the underlying continuous states drive these discrete mode switches. We propose that the rich representations formed by an rSLDS can provide useful abstractions for planning and control. We present a novel hierarchical model-based algorithm inspired by Active Inference in which a discrete MDP sits above a low-level linear-quadratic controller. The recurrent transition dynamics learned by the rSLDS allow us to (1) specify temporally-abstracted sub-goals in a method reminiscent of the options framework, (2) lift the exploration into discrete space allowing us to exploit information-theoretic exploration bonuses and (3) `cache' the approximate solutions to low-level problems in the discrete planner. We successfully apply our model to the sparse Continuous Mountain Car task, demonstrating fast system identification via enhanced exploration and non-trivial planning through the delineation of abstract sub-goals. § INTRODUCTION In a world that is inherently continuous, the brain’s apparent capacity to distil discrete concepts from sensory data represents a highly desirable feature in the design of autonomous systems. Humans are able to flexibly specify abstract sub-goals during planning, thereby reducing problems into manageable chunks <cit.>. Furthermore, they are able to transfer this knowledge across new tasks; a process which has proven a central challenge in artificial intelligence <cit.>. Translating problems into discrete space offers distinct advantages in decision-making. Namely, the computationally feasible application of information-theoretic measures (e.g. information-gain), as well as the direct implementation of classical techniques such as dynamic programming <cit.>. One prevalent approach to tackling continuous spaces involves the simple grid-based discretisation of the state-space, however this becomes extremely costly as the dimensionality increases <cit.>. We therefore ask how we might be able to smoothly handle the presence of continuous variables whilst maintaining the benefits of decision-making in the discrete domain. To address this, we explore the rich representations learned by recurrent switching linear dynamical systems (rSLDS) in the context of planning and control. This class of hybrid state-space model consists of discrete latent states that evolve via Markovian transitions, which act to index a discrete set of linear dynamical systems <cit.>. Importantly, a continuous dependency in the discrete state transition probabilities is included in the generative model. 
By providing an understanding of the continuous latent causes of switches between discrete modes, this recurrent transition structure can be exploited such that a controller can flexibly specify inputs to drive the system into a desired region of the state-action space. By embracing the established control-theoretic strategy of piecewise linear decomposition of nonlinear dynamics, our approach lies in contrast to the comparatively opaque solutions found by continuous function approximators <cit.>. Using statistical methods to fit these models provides a means by which we can effectively perform online discovery of useful non-grid discretisations of the state-space for system identification and control. We describe a novel model-based algorithm inspired by Active Inference <cit.>, in which a discrete MDP, informed by the representations of an rSLDS, interfaces with a finite horizon linear-quadratic regulator (LQR) implementing closed-loop control. We demonstrate the efficacy of this algorithm by applying it to the classic control task of Continuous Mountain Car <cit.>. We show that information-theoretic exploration drive integrated with the emergent piecewise description of the task-space facilitates fast system identification to find successful solutions to this non-trivial planning problem. §.§ Contributions * The enhancement of planning via the introduction of temporally-abstracted sub-goals by decoupling a discrete MDP from the continuous clock time using the emergent representations from an rSLDS. * The lifting of information-seeking decision-making into a (discrete) abstraction of the states enabling efficient exploration and thereby reducing sensitivity to the dimensionality of the task-space. § RELATED WORK In the context of control, hybrid models in the form of piecewise affine (PWA) systems have been rigorously examined and are widely applied in real-world scenarios <cit.>. Previous work by Abdulsamad et. al. has applied a variant on rSLDS (recurrent autoregressive hidden Markov models) to the optimal control of general nonlinear systems <cit.>. The authors use these models to the approximate expert controllers in a closed-loop behavioural cloning context. While their algorithm focuses on value function approximation, in contrast, we learn online without expert data and focus on flexible discrete planning. § FRAMEWORK Here, we provide a overall outline of the approach to approximate control taken with our Hybrid Hierarchical Agent (HHA) algorithm. Consider that we have decomposed the nonlinear dynamics into piecewise affine regions of the state-space using an rSLDS. Should the HHA wish to navigate to a goal specified in continuous space, the recurrent generative model parameters of the rSLDS allow it to identify the discrete region within which the goal resides, thereby lifting the goal into a high-level objective. The agent may then generate a plan at a discrete level, making use of the information-seeking bonuses that this affords. Planning translates to specifying a sequence of abstract sub-goals. Again using the recurrent generative model, the agent can specify, for each sub-goal region, a continuous point in state-space with which to drive the system into. Once in the discrete goal region, the agent straightforwardly navigates to the continuous goal. The following sections detail the components of the HHA. For additional information, please refer to Appendix. <ref> §.§ rSLDS(ro) In the recurrent-only (ro) formulation of the rSLDS, the discrete latent states z_t ∈{1, 2, . . . 
, K} are generated as a function of the continuous latents x_t ∈ℝ^M and the control input u_t ∈ℝ^N via a softmax regression model P(z_t+1|x_t, u_t) = softmax(W_x x_t + W_u u_t + r) whereby W_x and W_u are weight matrices with dimensions ℝ^K × M and r is a bias of size ℝ^K. The continuous dynamics evolve according to a discrete linear dynamical system indexed by z_t with Gaussian diagonal noise, x_t+1|x_t, u_t, z_t = A_z_t x_t + B_z_t u_t + b_z_t + ν_t, ν_t ∼𝒩(0, Q_z_t) y_t| x_t = C_z_tx_t + ω_t, ω_t ∼𝒩(0, S_z_t) and identity emissions model with Gaussian diagonal noise. In order to learn the rSLDS parameters using Bayesian updates, conjugate matrix normal inverse Wishart (MNIW) priors are placed on the parameters of the dynamical system and recurrence weights. Inference requires approximate methods given that the recurrent connections break conjugacy rendering the conditional likelihoods non-Gaussian. Details of the Laplace Variational Expectation Maximisation algorithm used is detailed in <cit.>. §.§ Discrete planner We have a Bayesian Markov Decision Process (MDP) <cit.> described by ℳ_B = (S, A, P_a, R, P_θ). S represents the set of all possible discrete states of the system and are essentially a re-description of the discrete latents Z found by the rSLDS. A is the set of all possible actions which, in our case, is equal to the number of states S. The state transition probabilities, p_a(s_t+1| s_t=s, a_t=a, θ)∼ Cat(θ_as), and are parameterised by θ∈ℝ^s× s × a for which we maintain Dirichlet priors over, p(θ_as) ∼ Dir(α_as), facilitating directed exploration. Due to conjugate structure, as the agent obtains new empirical information, Bayesian updates amount to a simple count-based update of the Dirichlet parameters <cit.>. Importantly, the structure of the state transition model has been constrained by the adjacency structure of the polyhedral partitions extracted from recurrent transition dynamics of the rSLDS: invalid transitions are assigned zero probability while valid transitions are assigned a high probability (see <ref>). R is the reward function which, translated into the Active Inference framework, acts as a prior distribution over rewarding states providing the agent with an optimistic bias during policy inference <cit.>. The discrete planner outputs a discrete action, where the first action is taken from a receding horizon optimisation: a_0 = min_a_1:T J(a_1:T) J(a_1:T) = 𝔼[∑_t=0^T R(s_t, a_t) + IG_t(α) | s_0, a_1:T]. This includes an explicit information-seeking incentive IG_t(α) (see <ref>). This descending discrete action a_0 is translated into a continuous control prior x_j via the following link function, x_j = xmax P(z=j|x, u) which represents an approximately central point in the desired discrete region j requested by action a_0 (see <ref>). The ascending messages from the continuous level are translated into a categorical distribution via the rSLDS softmax link function. Importantly, the discrete planner is only triggered when the system switches into a new mode [Or a maximum dwell-time (hyperparameter) is reached.]. In this sense, discrete actions are temporally abstracted and decoupled from continuous clock-time in a method reminiscent of the options framework <cit.>. §.§ Continuous controller Continuous closed-loop control is handled by a finite-horizon linear-quadratic regulator (LQR) controller. For controlling the transition from mode i to mode j (x_i to x_j). 
The objective of the LQR controller is to minimise the following quadratic cost function: π_ij(x) = min_π J_ij(π) J_ij(π) = 𝔼_π, x_i[(x_S - x_j)^T Q_f (x_S - x_j) + ∑_t=0^S-1 u_t^T R u_t] where S is the finite time horizon, Q_f is the matrix that penalises the terminal state deviation from x_j and R is the control cost where high control input is penalised such that the controller only provides solutions within constraints (for further discussion, see Sec. <ref>). The approximate closed-loop solution to each of these sub-problems is computed offline by taking in the parameters of the linear systems indexed by the discrete modes and the continuous control priors acting as reference points (see <ref>). § RESULTS To evaluate the performance of our (HHA) model, we applied it to the classic control problem of Continuous Mountain Car. This problem is particularly relevant for our purposes due to the sparse nature of the rewards, necessitating effective exploration strategies to achieve good performance. The HHA is initialised according to the procedure outlined in <cit.>. The rSLDS parameters are then fitted to the observed trajectories every 1000 steps of the environment unless a reward threshold within a single episode is reached. We find that the HHA finds piecewise affine approximations of the task-space and uses these discrete modes effectively to solve the task. Fig.<ref> shows that while the rSLDS has divided up the space according to position, velocity and control input, the useful modes for solving the task are those found in the position space. Once the goal and a good approximation to the system has been found, the HHA successfully and consistently navigates to the reward. Fig. <ref> shows that the HHA performs a comprehensive exploration of the state-space and significant gains in the state-space coverage are observed when using information-gain drive in policy selection compared to without. Interestingly, even without information-gain, the area covered by the HHA is still notably better than that of the random action control. This is because the non-grid discretisation of the state-space significantly reduces the dimensionality of the search space in a behaviourally relevant way. We compare the performance of the HHA to other reinforcement learning baselines (Actor-Critic and Soft Actor-Critic) and find that the HHA both finds the reward and captilises on its experience significantly quicker than the other models (see Fig. <ref>). Indeed, our model competes with the state-space coverage achieved by model-based algorithms with exploratory enhancements in the discrete Mountain Car task, which is inherently easier to solve (see  <ref>). § DISCUSSION Through the application of our Hybrid Hierarchical Agent to the Continuous Mountain Car problem, we have demonstrated that rSLDS representations hold promise for enriching planning and control. The emergence of non-grid discretisations of the state-space allows us to perform fast systems identification via enhanced exploration, and successful non-trivial planning through the delineation of abstract sub-goals. Hence, the time spent exploring each region is not equivalent in euclidean space which helps mitigate the curse of dimensionality that other grid-based methods suffer from. Such a piecewise affine approximation of the space will incur some loss of optimality in the long run when pitted against black-box approximators. 
This is due to the nature of caching only approximate closed-loop solutions to control within each piecewise region, whilst the discrete planner implements open-loop control. However, this approach eases the online computational burden for flexible re-planning. Hence in the presence of noise or perturbations within a region, the controller may adapt without any new computation. This is in contrast to other nonlinear model-based algorithms like model-predictive control where reacting to disturbances requires expensive trajectory optimisation at every step <cit.>. By using the piecewise affine framework, we maintain functional simplicity and interpretability through structured representation. This method is amenable to future alignment with a control-theoretic approach to safety guarantees for ensuring robust system performance and reliability. We acknowledge there may be better solutions to dealing with control input constraints than the one given in Sec. <ref>. Different approaches have been taken to the problem of implementing constrained-LQR control, such as further piecewise approximation based on defining reachability regions for the controller <cit.>. § IMPACT STATEMENT This paper presents work whose goal is to advance the field of Machine Learning. There are many potential societal consequences of our work, none which we feel must be specifically highlighted here. icml2024 § APPENDIX / SUPPLEMENTAL MATERIAL §.§ Framework Optimal Control We adopt the optimal control framework, specifically we consider discrete time state space dynamics of the form: x_t+1 = f(x_t, u_t, η_t) with known initial condition x_0, and η_t drawn from some time invariant distribution η_t ∼ D, where f we assume p(x_t+1| x_t, u_t) is a valid probability density throughout. We use c_t: X × U →ℝ for the control cost function at time t and let 𝕌 be the set of admissible (non-anticipative, continuous) feedback control laws, possibly restricted by affine constraints. The optimal control law for the finite horizon problem is given as: J(π) = 𝔼_x_0,π[∑_t=0^T c_t(x_t, u_t)] π^* = min_π∈𝕌 J(π) PWA Optimal Control The fact we do not have access to the true dynamical system f motivates the use of a piecewise affine (PWA) approximation. Also known as hybrid systems: x_t+1 = A_i x_t + B_i u_t + ϵ_t when (x_t, u_t) ∈ H_i Where ℍ={H_i: i ∈ [K] } is a polyhedral partition of the space X× U. In the case of a quadratic cost function, it can be shown the optimal control law for such a system is peicewise linear. Further there exist many completeness (universal approximation) type theorems for peicewise linear approximations implying if the original system is controllable, there will exist a peicewise affine approximation through which the system is still controllable <cit.>. Relationship to rSLDS(ro) We perform a canonical decomposition of the control objective J in terms of the components or modes of the system. By slight abuse of notation [x_t = i]:=[(x_t, u_t) ∈ H_i] represent the Iverson bracket. J(π) = ∑_t ∫ p_π(x_t | x_t-1, u_t)c_t(x_t, u_t) d x_t dx_t-1 = ∑_t ∫∑_i∈ [K] [x_t-1 = i]p_π(x_t | x_t-1, u_t)c_t(x_t, u_t) d x_t dx_t-1 Now let z_t be the random variable on [K] induced by Z_t = i if [x_t = i] we can rewrite the above more concisely as, J(π) =∑_t ∫∑_i∈ [K] p_π(x_t, z_t-1=i | x_t-1, u_t)c_t(x_t, u_t) d x_t dx_t-1 = ∑_i∈ [K]∑_t ∫ p_π(x_t, z_t-1=i | x_t-1, u_t)c_t(x_t, u_t) d x_t dx_t-1 =∑_i∈ [K]∑_t 𝔼_π_i[c_t(x_t, u_t)] which is just the expectation under a recurrent dynamical system with deterministic switches. 
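To make this decomposition concrete, the following minimal sketch (ours, with made-up toy parameters, not the code used for the experiments) rolls out a two-dimensional recurrent switching system in the deterministic-switch limit: the active mode is the argmax of a linear partition of the state, each mode has its own affine dynamics, and the quadratic state cost is accumulated mode by mode, mirroring the per-mode terms in the objective above.

import numpy as np

rng = np.random.default_rng(0)
K = 3                                    # number of discrete modes
W = np.array([[ 2.0,  0.0],              # K x 2 recurrence weights: mode k is
              [-2.0,  0.0],              # favoured where W[k] @ x + r[k] is largest
              [ 0.0,  2.0]])
r = np.zeros(K)
A = [np.array([[0.95, 0.1], [0.0, 0.95]]),    # per-mode dynamics matrices
     np.array([[0.95, -0.1], [0.0, 0.95]]),
     np.array([[1.0, 0.0], [-0.05, 0.9]])]
b = [np.array([0.0, 0.05]), np.array([0.0, -0.05]), np.array([0.05, 0.0])]

def mode_of(x):
    # Deterministic (argmax) limit of the softmax partition P(z | x).
    return int(np.argmax(W @ x + r))

x = np.array([1.0, 0.0])
cost_per_mode = np.zeros(K)              # accumulate the per-mode cost terms
for t in range(200):
    z = mode_of(x)
    cost_per_mode[z] += x @ x            # simple quadratic state cost
    x = A[z] @ x + b[z] + 0.01 * rng.standard_normal(2)

print("total cost:", cost_per_mode.sum())
print("cost accumulated in each mode:", cost_per_mode)

Replacing the argmax by a draw from the softmax recovers the stochastic mode switches.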
Later (see <ref>), we exploit the non-deterministic switches of rSLDS in order to drive exploration. §.§ Hierarchical Decomposition Our aim was to decouple the discrete planning problem from the fast low-level controller. In order to break down the control objective in this manner, we first create a new discrete variable which simply tracks the transitions of z, this allows the discrete planner to function in a temporally abstracted manner. Decoupling from clock time Let the random variable (ζ_s)_s>0 record the transitions of (z_t)_t>0 i.e. let τ_s(τ_s-1)= min{t: z_t+1≠ z_t, t> τ_s-1}, τ_0=0 be the sequence of first exit times, then ζ is given by ζ_s = z_τ_s. With these variables in hand, we frame a small section of the global problem as a first exit problem. Low level problem Consider the first exit problem defined by, π_ij(x_0) = min_π, S J_ij(π, x_0, S) J_ij(π, x_0, S) = 𝔼_π, x_0[∑_t=0^Sc(x_t, u_t)] s.t. (x_t, u_t) ∈ H_i s.t. c(x, u) = 0 when (x,u) ∈∂ H_ij where ∂ H_ij is the boundary H_i ⋂ H_j. Due, to convexity of the polyhedral partition, the full objective admits the decomposition into subproblems J(π) = ∑_s J_ζ(s+1), ζ(s)(π) Slow and fast modes The goal is to tackle the decomposed objectives individually, however the hidden constraint that the trajectories line up presents a computational challenge. Here we make the assumption that the difference in cost induced by different starting positions, induces a relatively small change in the minimum cost J_ij, intuitively this happens if the minimum state cost in each mode is relatively uniform as compared to the difference between regions. High level problem If the above assumption holds, we let J_ij^* = min_π∫_x_0J_ij(π, x_0)p(x_0) be the average cost of each low-level problem. We form a markov chain: p_ik(u) = ℙ(ζ_s+1=k |ζ_s=i, π_ij^*, u^d=j) and let p_π_d be the associated distribution over trajectories induced by some discrete state feedback policy, along with the discrete state action cost c_d(u^d=j, η=i) = J_ij^* we may write the high level problem: π_d^* = min_π_d J_d(π, η_0) = 𝔼p_π_d[∑_s=0^S c_d(η_s, u^d_s)] Our approximate control law is then given by π_ij^* ∘π_d^* ∘ id(x) §.§ Offline Low Level Problems: Linear Quadratic Regulator (LQR) Rather than solve the first-exit problem directly, we formulate an approximate problem by finding trajectories that end at specific `control priors' (see <ref>). Recall the low level problem given by: π_ij(x_0) = min_π, S J_ij(π, x_0, S) J_ij(π, x_0, S) = 𝔼_π, x_0[∑_t=0^Sc(x_t, u_t)] s.t. (x_t, u_t) ∈ H_i s.t. c(x, u) = 0 when (x,u) ∈∂ H_ij In order to approximate this problem with one solvable by a finite horizon LQR controller, we adopt a fixed goal state, x^* ∈ H_j. Imposing costs c_t(x_t, u_t) = u_t^T R u_t and c_S(x_S, u_S) = (x - x^*) Q_f (x - x^*). Formally we solve, π_ij(x_0) = min_π, S J_ij(π, x_0, S) J_ij(π, x_0, S) = 𝔼_π, x_0[(x_S - x^*)^T Q_f (x_S - x^*) + ∑_t=0^S-1 u_t^T R u_t] by integrating the discrete Ricatti equation backwards. Numerically, we found optimising over different time horizons made little difference to the solution, so we opted to instead specify a fixed horizon (hyperparameter). These solutions are recomputed offline every time the linear system matrices change. Designing the cost matrices Instead of imposing the state constraints explicitly, we record a high cost which informs the discrete controller to avoid them. In order to approximate the constrained input we choose a suitably large control cost R=rI. 
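As an illustration of this offline step (a sketch with toy matrices, not the implementation used here), the finite-horizon problem with terminal weight Q_f around a goal point x^* and input weight R = rI can be solved by the standard backward Riccati recursion; the affine offset of the mode's dynamics is absorbed into an augmented state [x - x^*; 1].

import numpy as np

def finite_horizon_lqr_to_goal(A, B, b, x_star, Qf, r, S):
    n, m = B.shape
    c = A @ x_star + b - x_star                       # residual drift at the goal
    A_aug = np.block([[A, c[:, None]], [np.zeros((1, n)), np.ones((1, 1))]])
    B_aug = np.vstack([B, np.zeros((1, m))])
    Qf_aug = np.block([[Qf, np.zeros((n, 1))], [np.zeros((1, n)), np.zeros((1, 1))]])
    R = r * np.eye(m)
    P = Qf_aug
    gains = []
    for _ in range(S):                                # integrate the Riccati equation backwards
        K = np.linalg.solve(R + B_aug.T @ P @ B_aug, B_aug.T @ P @ A_aug)
        P = A_aug.T @ P @ (A_aug - B_aug @ K)
        gains.append(K)
    return gains[::-1]                                # gains[t] for t = 0 .. S-1

# Example: one mode's affine dynamics and a goal point inside the next region.
A = np.array([[1.0, 0.1], [0.0, 0.95]])
B = np.array([[0.0], [0.1]])
b = np.array([0.0, 0.02])
x_star = np.array([1.0, 0.0])
gains = finite_horizon_lqr_to_goal(A, B, b, x_star, Qf=50 * np.eye(2), r=1.0, S=40)

x = np.array([0.0, 0.0])
for K in gains:
    u = -K @ np.append(x - x_star, 1.0)               # u_t = -K_t [x_t - x*; 1]
    x = A @ x + B @ u + b
print("final state:", x, " (goal:", x_star, ")")

The large input weight r here plays the role of the soft input-constraint handling just described.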
We adopted this approach for the sake of simplicity, potentially accepting a good deal of sub-optimality. However, we believe more involved methods for solving input constrained LQR could be used in future, e.g. <cit.>, especially because we compute these solutions offline. §.§ Online high level problem The high level problem is a discrete MDP with a `known' model, so the usual RL techniques (approximate dynamic programming, policy iteration) apply. Here, however we choose to use a model-based algorithm with a receding horizon inspired by Active Inference, allowing us to easily incorporate exploration bonuses. Let the Bayesian MDP be given by ℳ_B = (S, A, P_a, R, P_θ) be the MDP, where p_a(s_t+1| s_t, a_t, θ)∼ Cat(θ_as) and p(θ_as) ∼ Dir(α) We estimate the open loop reward plus optimistic information theoretic exploration bonuses Active Inference conversion We adopt the Active Inference framework for dealing with exploration. Accordingly we adopt the notation lnp̃(s_t, a_t) = R(s_t, a_t) and refer to this `distribution' as the goal prior <cit.>, and optimise over open loop policies π = (a_0, ..., a_T). J(a_1:T, s_0) = 𝔼[∑_t=0^T R(s_t, a_t) + IG_p + IG_s| s_0, a_1:T] where parameter information-gain is given by IG_p = D_KL[p_t+1(θ) || p_t(θ)], with p_t(θ) = p(θ| s_0:t). In other words, we add a bonus when we expect the posterior to diverge from the prior, which is exactly the transitions we have observed least <cit.>. We also have a state information-gain term, IG_s = D_KL[p_t+1(s_t+1) || p_t(s_t+1)]. In this case (fully observed), p_t+1(s_t+1) = δ_s is a one-hot vector. Leaving the term 𝔼_t[-ln p_t(s_t+1)] leading to a maximum entropy term <cit.>. We calculate the above with Monte Carlo sampling which is possible due to the relatively small number of modes. Local approximations such as Monte Carlo Tree Search could easily be integrated in order to scale up to more realistic problems. Alternatively, for relatively stationary environments we could instead adopt approximate dynamic programming methods for more habitual actions. §.§ Extracting the adjacency matrix from rSLDS In order to generate the possible transitions from the rSLDS, we calculate the set of active constraints for each region from the softmax representation, p(z| x) = σ(Wx + b). Specifically to check region i is adjacent to region j we verify the solution linear program: - b_j = min (W_i - W_j ) x s.t. (W_i - W_k)x ≤ (b_i - b_k) ∀ k ∈ [K] s.t. x ∈ (x_lb, x_ub) Where (x_lb, x_ub) are bounds chosen to reflect realistic values for the problem. This ensures we only lift transitions to the discrete model, if they are possible. Again, these can be calculated offline. We initialise the entries of the transition model in the discrete MDP for possible transitions to 0.9 facilitating guided-exploration via information-gain through a count-based updates to the transition priors. §.§ Generating continuous control priors In order to generate control priors for the LQR controller which correspond to each of the discrete states we must find a continuous state x_i which maximises the probability of being in a desired z: x_i = xmax P(z=i|x, u) For this we perform a numerical optimisation in order to maximise this probability. Consider that this probability distribution P(z = i |x) is a softmax function for the i-th class is defined as: σ(v_i) = exp (v_i)/∑_j exp (v_j), v_i = w_i · x + r_i where w_i is the i-th row of the weight matrix, x is the input and r_i is the i-th bias term. 
The update function used in the gradient descent optimisation can be described as follows: x ← x + η∇_x σ(v_i) where η is the learning rate and the gradient of the softmax function with respect to the input vector x is given by: ∇_x σ(v_i) = ∂σ(v_i)/∂ v·∂ v/∂ x = σ(v_i)(𝐞_i - σ(v))· W in which σ(v) is the vector of softmax probabilities, and 𝐞_i is the standard basis vector with 1 in the i-th position and 0 elsewhere. The gradient descent process continues until the probability P(z=i | x) exceeds a specified threshold θ which we set to be 0.7. This threshold enforces a stopping criterion which is required for the cases in which the region z is unbounded. §.§ Model-free RL baselines §.§.§ Soft-Actor Critic with 2 Q-functions §.§.§ Actor-Critic §.§ Model-based RL baseline §.§.§ a Deep Q-Network with Model-based Exploration (DQN-MBE)
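Returning to the control-prior construction just described, the following is a minimal sketch of the search (ours, with an illustrative toy partition and weights); the update rule is the one written above, iterated until the threshold θ=0.7 is exceeded.

import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def control_prior(W, r, i, x0, lr=0.5, threshold=0.7, max_steps=500):
    # Gradient steps on P(z=i|x) = softmax(W x + r)_i, using
    # grad_x P(z=i|x) = P(z=i|x) * (e_i - softmax(W x + r)) @ W.
    x = np.array(x0, dtype=float)
    for _ in range(max_steps):
        p = softmax(W @ x + r)
        if p[i] >= threshold:
            break
        grad = p[i] * (np.eye(len(p))[i] - p) @ W
        x = x + lr * grad
    return x, softmax(W @ x + r)[i]

W = np.array([[2.0, 0.0], [-2.0, 0.0], [0.0, 2.0]])   # toy 3-region partition
r = np.zeros(3)
x_prior, prob = control_prior(W, r, i=2, x0=np.array([0.3, -0.2]))
print("control prior:", x_prior, " P(z=2|x) =", prob)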
http://arxiv.org/abs/2408.11246v1
20240820235336
Solving the strong CP problem with massless grand-color quarks
[ "Ravneet Bedi", "Tony Gherghetta", "Keisuke Harigaya" ]
hep-ph
[ "hep-ph" ]
UMN-TH-4328/24 a]Ravneet Bedi0000-0002-7104-1753 [a]School of Physics and Astronomy, University of Minnesota, Minneapolis, Minnesota 55455, USA a]Tony Gherghetta0000-0002-8489-1116 b,c,d]Keisuke Harigaya0000-0001-6516-3386 [b] Department of Physics, The University of Chicago, Chicago, Illinois, 60637, USA [c]Enrico Fermi Institute and Kavli Institute for Cosmological Physics, The University of Chicago, Chicago, Illinois 60637, USA [d]Kavli Institute for the Physics and Mathematics of the Universe (WPI), The University of Tokyo Institutes for Advanced Study, The University of Tokyo, Kashiwa, Chiba 277-8583, Japan bedi0019@umn.edu tgher@umn.edu kharigaya@uchicago.edu We propose a solution to the strong CP problem that specifically relies on massless quarks and has no light axion. The QCD color group SU(3)_c is embedded into a larger, simple gauge group (grand-color) where one of the massless, colored fermions enjoys an anomalous chiral symmetry, rendering the strong CP phase unphysical. The grand-color gauge group G_ GC is Higgsed down to SU(3)_c× G_c', after which G_c' eventually confines at a lower scale, spontaneously breaking the chiral symmetry and generating a real, positive mass to the massless, colored fermion. Since the chiral symmetry has a G_c' anomaly, there is no corresponding light Nambu-Goldstone boson. The anomalous chiral symmetry can be an accidental symmetry that arises from an exact discrete symmetry without introducing a domain wall problem. Potential experimental signals of our mechanism include vector-like quarks near the TeV scale, pseudo Nambu-Goldstone bosons below the 10 GeV scale, light dark matter decay, and primordial gravitational waves from the new strong dynamics. Solving the strong CP problem with massless grand-color quarks [ Received ; accepted =============================================================== § INTRODUCTION A popular solution to the strong CP problem in the Standard Model (SM) is the Peccei-Quinn (PQ) mechanism <cit.> in which an anomalous U(1) PQ symmetry is spontaneously broken giving rise to a Nambu-Goldstone boson, the axion <cit.>. Importantly, the global PQ symmetry is also explicitly broken by nonperturbative QCD dynamics, generating an axion potential with a minimum that cancels the strong CP phase, θ̅, thereby solving the strong CP problem. The axion mass can be precisely predicted using chiral perturbation theory, and even though the original electroweak scale axion <cit.> has been ruled out, extra model building to make the axion lighter (and thus invisible <cit.>) or heavier than the QCD prediction (due to UV modifications of QCD <cit.>) has motivated a huge experimental effort to search for axions (see for example, <cit.>). Despite the simplicity of the axion and the PQ mechanism, there is an even simpler solution to the strong CP problem, namely, assuming that the up quark is massless <cit.>. A massless up quark implies that there is an (anomalous) U(1) chiral symmetry which can be used to rotate away the strong CP phase without requiring a light axion. At the QCD scale, Λ_ QCD∼ 250 MeV, non-perturbative dynamics explicitly breaks this symmetry, generating an effective up-quark mass <cit.>. Interestingly, the strong CP problem is still solved because the QCD dynamics generates a complex mass with a phase that again cancels the strong CP phase or alternatively, a vacuum expectation value (VEV) for the QCD η' cancels the strong CP phase. 
Due to the difficulty of nonperturbative QCD calculations, the significance of this nonperturbative QCD contribution to the up quark mass had remained unresolved for a long time (see for example <cit.>). Recent lattice QCD calculations of the meson spectrum near the physical masses obtained an up quark (MS) mass m_u(2  GeV)=2.16_-0.26^+0.49MeV <cit.> implying that the up quark is unlikely to be massless above the QCD scale. As a complementary check, lattice QCD calculations in <cit.> computed the dependence of the pion mass on the dynamical strange quark mass and also showed that the nonperturbative QCD contribution to the up quark mass can only be a fraction of the up quark mass. The fact that there is strong lattice evidence for a perturbative contribution to the up quark mass above the QCD scale suggests that to obtain a massless up-quark type solution solution there should be new strong dynamics at UV scales which generates this “perturbative mass" (assuming the up Yukawa coupling is zero). Indeed, this possibility was explored in Ref. <cit.> where the QCD gauge group, SU(3)_c, was embedded into an SU(3)× SU(3)× SU(3) product group with each generation of quarks separately charged under only one of the SU(3) groups. A nonzero value of the up quark Yukawa coupling was then generated from small instantons at a scale ≫Λ_ QCD, which explicitly breaks the chiral symmetry. The small instanton contribution is enhanced (relative to QCD) due to the larger gauge coupling of each individual SU(3) factor at UV scales. Furthermore, the nontrivial SM flavor structure was generated by dimension five operators that arise from the symmetry breaking SU(3)^3→ SU(3). Similarly, in Ref. <cit.>, the enhanced effects of small instantons in the UV completion of a composite Higgs model was used to explicitly break the anomalous U(1) symmetry and generate the up quark Yukawa coupling. More recently, Ref. <cit.> used the small instantons of a gauged flavor symmetry (with a non-invertible symmetry structure below the gauged flavor symmetry breaking scale) to generate a down-type quark mass. In this paper, we also propose a massless up-quark type solution by introducing new strong dynamics at a UV scale. However, instead of only explicitly breaking the chiral symmetry associated with a massless quark, we generate a quark mass by spontaneously breaking the chiral symmetry. This is achieved by embedding the QCD group, SU(3)_c, into a larger, simple group, G_ GC=SU(2N+3) <cit.>, referred to as the grand-color group <cit.>. In particular, the SM quarks are combined with grand-color partner fermions into the fundamental representation of the grand-color group. The grand-color group is then Higgsed to SU(3)_c× G_c' which gives rise to the following two features. First, the Higgsing generates, via heavy gauge boson exchange, dimension-six, four-fermion terms that contain the massless quark and the grand color partner fermions. Second, when G_c' confines, the grand-color fermion bilinear condensate spontaneously breaks the chiral symmetry, generating a nonzero quark mass via the dimension six terms. A schematic diagram of the scales associated with our mechanism is shown in figure <ref>. Importantly, since the chiral symmetry is spontaneously broken, the generated quark mass is not suppressed by the product of SM Yukawa couplings and given that the chiral symmetry is anomalous with respect to G_c', there is also no light axion. 
Alternatively, the symmetry breaking can be communicated from G_c' to SU(3)_c via scalar bosons with Yukawa couplings to quarks such as extra Higgses, or via fermions charged under both SU(3)_c and G_c'. Our mechanism is first applied to a model containing one fermion generation with G_c'=Sp(2N) which ensures that the SM electroweak symmetry is not broken. We assume the right-handed up quark is massless and therefore has an anomalous chiral symmetry. When this symmetry is spontaneously broken by the Sp(2N) dynamics, a positive up quark Yukawa coupling is generated and is proportional to the down quark Yukawa coupling. If Sp(2N) confines at the scale , then a sufficiently large up Yukawa coupling can be generated in this minimal setup provided 11≲ N≲ 35 for 10^3 GeV≲≲ 10^12 GeV. Smaller values of N (≳ 5) can also explain the observed up Yukawa by introducing an extra pair of vector-like quarks which obtain a Dirac mass or mix with the up quark via the Sp(2N) dynamics. Furthermore, with the vector-like quarks, we show how the anomalous chiral symmetry can accidentally arise from an exact ℤ_2 symmetry. The model has a domain wall problem, which can be solved by imposing a ℤ_3 symmetry instead. Thus, our solution to the strong CP problem is based on a simple exact symmetry, without imposing an anomalous chiral symmetry that is actually not a symmetry. We then analyze models with more fermion generations, beginning with two generations. We compute the Yukawa couplings arising from the Sp(2N) dynamics by carefully minimizing the potential of the Sp(2N) pions and find that a minimal two-generation extension of the one-generation model is ruled out because it leads to an unacceptably large strange quark mass. However, this problem can easily be avoided by again adding an extra pair of vector-like quarks. It is then shown that a three-generation model has a Higgs-pion mixing problem which makes it difficult to obtain an order one top quark Yukawa coupling. There are several ways this problem can be avoided, but we focus on the solution that charges the third generation of fermions under an extra SU(2) or SU(3) gauge group. The SM flavor structure arises from the extra gauge interactions. Interestingly, we show how the chiral symmetry can again be accidental in the low-energy theory from a ℤ_2 or ℤ_3 symmetry. We also show that threshold corrections to the strong CP phase are sufficiently suppressed in the third generation model with extra gauge interactions. Finally, instead of having only fermions charged under the anomalous chiral symmetry, there is also the possibility of introducing chirally-charged extra Higgs fields. We present a three-generation model with two-(or more) Higgs doublets, that has some advantages over the fermion models, except for a hierarchy problem which may be addressed with supersymmetry. In this class of models, the chiral symmetry breaking is mediated from G_c' to SU(3)_c via the extra Higgses, unlike the minimal chiral fermion model where the mediation occurs via heavy gauge bosons. We also comment on the possibility of mediating the chiral symmetry breaking via fermions in higher representations of SU(2N+3). The massless up quark solution has inspired other UV modifications of QCD that also invoke an anomalous U(1) symmetry with no light axion. Early work in Refs. <cit.> considered massless quarks in technicolor scenarios which incorporated QCD color. 
Since the technicolor strong dynamics breaks electroweak symmetry, there is no Higgs field and all Yukawa couplings originate from gauge interactions. A more phenomenologically viable scenario was studied in Ref. <cit.> which considered a mirror QCD model with a ℤ_2 symmetry where the anomalous U(1) symmetry is associated with extra massless exotic quarks. In the UV, the anomalous U(1) rotations of the massless quarks can then be used to set all theta angles to zero. When the mirror QCD confines, it spontaneously breaks the anomalous U(1) symmetry, generating a mass for the QCD-charged exotic quarks. In the IR, the solution to the strong CP problem can be interpreted as either the mirror QCD η' VEV cancelling the strong CP phase or the phase in the complex mass aligning with the strong CP phase. This model is similar in spirit to our approach, except that our mechanism relies on grand color rather than mirror-QCD dynamics and furthermore, we directly generate the up quark mass from the new strong dynamics. Our model has several phenomenological implications. A Nambu-Goldstone boson (NGB) corresponding to the spontaneous breaking of an approximate baryon symmetry can be the dark matter that decays into SM particles via the weak anomaly. Moreover, some of the NGBs that couple to gluons and photons are much lighter than the G_c' confinement scale. Phenomenologically, these are similar to “heavy QCD axions" and may be below the 10 GeV scale. Successful models require new vector-like quarks which may be near the TeV scale and therefore accessible at colliders. Finally, the confinement of G_c' could involve a first-order phase transition that produces primordial gravitational waves. The outline of our paper is as follows. In section <ref> we present a one-flavor toy model to illustrate the basic features our mechanism. A more realistic model is then constructed in section <ref> where we first present details of a one fermion generation model in section <ref>, followed by a two-fermion generation model in section <ref>. The full three-generation case is then discussed in section <ref>. A potential issue for generating the top quark mass arising from the Higgs-pion mixing is discussed in section <ref>. This can be resolved with extra SU(2) or SU(3) gauge interactions as detailed in section <ref>. Arguments for why corrections to the strong CP phase are sufficiently suppressed are presented in section <ref> with phenomenological consequences of our scenario given in section <ref> and the accidental chiral symmetry in our model is discussed in section <ref>. A class of models with extra Higgses or higher fermion representations is discussed in section <ref> and the possibility of explaining the dark matter with the lightest pion is studied in section <ref>. A summary and concluding remarks are given in section <ref>. The appendices contain further aspects of the computation including a discussion on the stability of the grand-color symmetry-breaking vacuum in appendix <ref>, a proof of θ̅=0 in appendix <ref>, a derivation of the four-fermion operators due to gauge boson exchange in appendix <ref>, details of the pion potentials and vacuum alignment for the one and two-generation models in appendix <ref> and a computation of the flavor invariants for the three-generation models is given in appendix <ref>. 
§ GENERATING FERMION MASS BY GRAND COLOR: ONE-FLAVOR TOY MODEL To understand the essence of our proposed mechanism, we first present a toy model with grand-color gauge group G_ GC=SU(N+3) and one flavor of massless Weyl fermions Ψ_U,Ψ_U̅ at the UV scale, transforming in the fundamental, anti-fundamental representation, respectively. The θ term can then be simply removed by a fermion chiral rotation Ψ_U,U̅→ e^i α_U,U̅Ψ_U,U̅ and the model preserves CP symmetry. At a lower energy scale, we assume that SU(N+3) is spontaneously broken to SU(N)× SU(3) by appropriate Higgs fields. The grand-color fermion Ψ_U then decomposes into ψ_U (, 1) and U( 1,), where the SU(N)× SU(3) charges are shown in the parentheses. Similarly, Ψ_U̅ decomposes into ψ_U̅ (, 1) and U̅( 1,). The SU(3) fermions U and U̅ are the toy version of the up quark and SU(3) is the toy version of QCD SU(3). As we will see, U and U̅ can obtain a mass by SU(N) dynamics. For N >3, SU(N) confines at Λ_SU(N), above the SU(3) confinement scale. The chiral symmetry of ψ_U and ψ_U̅ is explicitly broken by the SU(N) anomaly and spontaneously broken by the fermion condensate ⟨ψ_U ψ_U̅⟩≠ 0. Since U and U̅ are in the same SU(N+3) multiplets as ψ_U and ψ_U̅, respectively, the broken chiral symmetry of ψ_U,ψ_U̅ also induces a broken U,U̅ chiral symmetry. This indeed occurs via the exchange of heavy gauge bosons, which generates the dimension-six term ℒ⊃g_ GC^2/2M_ GC^2ψ_U^†σ̅^μ U ψ_U̅^†σ̅_μU̅ + h.c. = g_ GC^2/M_ GC^2ψ_U^†ψ_U̅^† U U̅ + h.c., where σ̅^μ=(1,-σ⃗) with σ_1,2,3 the Pauli matrices, g_ GC is the SU(N+3) gauge coupling and M_ GC is the mass scale of the heavy gauge bosons. The four-fermion operator in (<ref>) now connects the chiral symmetries of ψ_U,ψ_U̅ and U, U̅. When the condensation ⟨ψ_U ψ_U̅⟩≠ 0 breaks the chiral symmetry, it generates a non-zero up quark mass ∼Λ_SU(N)^3/M_ GC^2. For θ=0, the sign of the condensate is negative, ⟨ψ_U ψ_U̅⟩<0 (see appendix <ref>), and the up quark mass term is positive, thereby solving the strong CP problem. The absence of the strong CP phase can also be understood from the parity conservation theorem in Ref. <cit.>. Although, unlike the assumptions in <cit.>, some of the gauge bosons obtain a mass, but that does not affect the positivity of the path-integral measure and parity should not be spontaneously broken. Note that the effect of the anomaly vanishes in the large N limit while the condensation does not, so the chiral symmetry breaking can be considered as dominantly spontaneous rather than explicit, with a light NGB. However, for finite N, there is no light NGB and the model is distinct from the axion solution to the strong CP problem. In fact, below the confinement scale, an SU(N) η' meson plays the role of a heavy axion to preserve the CP symmetry. The implication of the spontaneous nature of the symmetry breaking can be further illuminated by adding vector-like quarks Ψ_D, Ψ_D̅ with mass m_D≪Λ_SU(N), much below the confinement scale Λ_SU(N). The quarks U, U̅ still obtain a mass ∼Λ_SU(N)^3/M_ GC^2 from the condensate of ψ_U, ψ_U̅, but there is no dependence on m_D. This is in contrast with the case where the SU(N) is Higgsed above the confinement scale and the mass of U and U̅ is generated only by SU(N) instantons, which now give a mass suppressed by m_D. As we will see, the spontaneous nature of the chiral symmetry breaking is crucial for generating a sufficiently large quark mass. 
Furthermore, the spontaneous nature allows the chiral symmetry to be an accidental symmetry that arises from another exact symmetry. [The model with bi-fundamental fermions ψ_B,ψ̅_B in Ref. <cit.> also has this feature. We can impose a ℤ_3 symmetry under which ψ_B has charge +1. The ℤ_3 symmetry does not have a color or mirror-color anomaly and forbids the mass term of the bi-fundamental fermions. The model, however, has a domain wall problem arising from the ℤ_3 symmetry.] In the model with Ψ_U,Ψ_U̅,Ψ_D and Ψ_D̅, this can be seen by imposing a ℤ_2 symmetry under which Ψ_U and Ψ_D are odd. The ℤ_2 symmetry does not have an SU(N+3) anomaly and therefore can be an exact gauge symmetry. At the renormalizable level the quark mass terms are forbidden and instead the U,D quarks obtain masses from the SU(N) dynamics. Note that it is crucial to generate the masses by spontaneous breaking; explicit breaking by SU(N) instantons would only generate an ℤ_2 preserving effective interaction UU̅ D D̅ (implying m_U,m_D still remain zero). The ℤ_2 symmetry is spontaneously broken by the SU(N) dynamics, but a linear combination of a continuous flavor symmetry subgroup (SU(2)_A described below) and the ℤ_2 symmetry remains unbroken, preventing the formation of stable domain walls. There are three massless NGBs, arising from the flavor symmetry breaking SU(2)_V× SU(2)_A→ SU(2)_V, but their shift symmetry does not have a color anomaly and therefore they are not QCD axions. We can also impose U(1) gauge symmetries on the theory to explicitly break the shift symmetry and give a mass to the NGBs, or have some of the NGBs eaten by gauge bosons. This ℤ_2 extension can be easily incorporated into the models with extra vector-like quarks, to be discussed later. However, the accidental chiral symmetry is violated by dimension six, four-fermion operators Ψ_U Ψ_U̅Ψ_U Ψ_U̅, Ψ_D Ψ_D̅Ψ_D Ψ_D̅, Ψ_U Ψ_U̅Ψ_D Ψ_D̅, suppressed by a UV cut-off scale, M_ UV. The condensation of SU(N)-charged fermions then generates SU(3)_c-charged fermion masses ∼Λ_SU(N)^3/M^2_ UV that may be complex. To avoid too large a strong CP phase, M_ GC< 10^-5 M_ UV is required, which constrains the parameter space. As will be shown in section <ref>, for realistic models with NGBs that have masses much below the chiral symmetry breaking scale and couple to gluons, this bound will be modified. The higher dimensional operators also introduce a domain wall problem, since they explicitly break the SU(2)_A symmetry. The mass of the field that comprises domain walls is Λ_SU(N)^2/M_ UV and the energy density inside the domain walls is Λ_SU(N)^6/M_ UV^2. The resultant domain wall tension σ is Λ_SU(N)^4/M_ UV. Domain walls dominate the universe at a temperature T∼(σ/M_ Pl)^1/2≃ 10  eV(M_ Pl/M_ UV)^1/2(Λ_SU(N)/10^5  GeV)^2, so the universe becomes domain wall dominated much before the cosmological epoch today. To improve the quality of the accidental symmetry and avoid the domain wall problem, we can add more fermions and impose a higher-order symmetry. For example, with two vector-like fermions in addition to the up quark, we may impose a ℤ_3 symmetry. Higher dimensional operators that violate CP and introduce the domain wall problem now have dimension nine. In order that domain walls never dominate the energy density of the universe before the present dark energy epoch (with ρ_ DE∼ meV^4), now requires Λ_SU(N)≲ 10^7 GeV (M_ UV/M_ Pl)^5/11. 
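For orientation, the quoted numbers are easy to reproduce at the order-of-magnitude level; the following quick check is ours, uses the reduced Planck mass, and simply evaluates the scaling relations above, so order-one factors are not meaningful.

M_PL = 2.4e18            # reduced Planck mass in GeV (assumed here)
GEV_TO_EV = 1e9

def wall_domination_T_eV(Lambda_GeV, M_UV_GeV):
    # Z_2 case with dimension-six operators: tension sigma ~ Lambda^4 / M_UV,
    # wall domination at T ~ (sigma / M_Pl)^(1/2).
    sigma = Lambda_GeV**4 / M_UV_GeV
    return (sigma / M_PL) ** 0.5 * GEV_TO_EV

# Reproduces T ~ 10 eV (M_Pl/M_UV)^(1/2) (Lambda/10^5 GeV)^2 up to O(1) factors.
print(wall_domination_T_eV(1e5, M_PL), "eV")

# Z_3 case: walls never dominating before the dark-energy epoch requires
# Lambda < ~ 10^7 GeV (M_UV/M_Pl)^(5/11); evaluate for two cutoff choices.
for M_UV in (M_PL, 1e16):
    print(M_UV, "GeV cutoff -> Lambda_max ~", 1e7 * (M_UV / M_PL) ** (5.0 / 11.0), "GeV")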
In the next section, we construct a realistic model using the chiral symmetry breaking by grand color and further require that SU(N) is broken down to an Sp or SO group (similar to the grand color axion model <cit.>), so that electroweak symmetry is not spontaneously broken. A variety of models may be constructed, but the essence of the mechanism is universal: we impose a chiral symmetry on massless fermions to remove the strong CP phase and the chiral symmetry is spontaneously and explicitly broken by the strong dynamics of the grand-color partner gauge group of SU(3). § A QUARK MASS BY GRAND COLOR In this section, we construct a class of models where the strong CP problem is solved without any light axion by embedding the SM quarks into the grand color group. We discuss models with one and two generations of fermions. These models will then be extended to realistic models with three generations in section <ref>. §.§ Grand color and confinement of We employ the gauge dynamics proposed in <cit.> which involves a UV modification of QCD where SU(3)_c is embedded into an enlarged color gauge group, G_ GC with the same flavor structure as QCD. The grand color group, G_ GC, is then broken down to SU(3)_c× G_c^', and G_c^' eventually confines. If G_c^'=SU(N) (for N>2), the electroweak symmetry is spontaneously broken by the G_c^' dynamics, essentially reproducing the dynamics of technicolor models that are inconsistent with electroweak precision data and the light Higgs boson. Instead, if G_c^'=Sp(2N), the confinement does not break the electroweak symmetry. Furthermore, as we will show, the G_c^' dynamics will play a role in generating a quark mass via chiral symmetry breaking. Assuming a grand-color group G_ GC=SU(2N+3), the gauge symmetry-breaking chain is SU(2N+3)× U(1)_Y' SU(2N)× SU(3)_c× U(1)× U(1)_Y' Sp(2N)× SU(3)_c× U(1)_Y, where G_ GC is first broken at the scale M_ GC, followed by a second breaking to Sp(2N) at the scale M_ Sp. The first breaking can be achieved by the VEV of an adjoint scalar while the second breaking can be achieved by a rank-2 anti-symmetric scalar. This two-step breaking is required to stabilize the vacuum. It is not possible to have a one-step breaking by a single SU(2N+3) anti-symmetric representation (see appendix <ref>). The SM quarks (q,u̅,d̅) combine with exotic fermions (ψ_q,ψ_u̅,ψ_d̅) to form grand-color fermion multiplets (Ψ_q,Ψ_u̅,Ψ_d̅), where all fermions are left-handed Weyl spinors. The gauge charges of the SM quarks and their grand-color partners are given in Table <ref>. Note also that ψ_q=(ψ_q_u,ψ_q_d) is a SM electroweak doublet. Above the grand-color scale M_ GC, the Yukawa interactions between the fermions and the SM SU(2)_L doublet Higgs field H=(φ^+,φ^0)^T with hypercharge +1/2 are given by L = -y_u Ψ_q Ψ_u̅H- y_d Ψ_q Ψ_d̅H + h.c. = -y_u ( q u̅ + ψ_q ψ_u̅)H-y_d ( q d̅ + ψ_q ψ_d̅)H + h.c. , where H=iσ_2H^† and y_u,d are the Yukawa coupling matrices. Note that for SU(2)_L doublets, such as Ψ_q and H, we use the convention that ε^12=+1 in ℒ⊃ -Ψ_qΨ_u̅H≡-ε^ijΨ_q_iH_jΨ_u̅. The Sp(2N) sign convention is given in appendix <ref>. The exchange of heavy gauge bosons with a mass M_ GC and gauge coupling g_ GC, associated with the breaking SU(2N+3) → SU(3)_c× SU(2N)× U(1), generates the following four-fermion interactions [Note that as in Ref. <cit.>, we assume that the grand color-breaking scalars do not couple to the fermions.] ℒ = -g_ GC^2/2M_ GC^2|∑_iψ_χ_i^†σ̅^μχ_i -∑_iχ̅_i^†σ̅^μψ_χ̅_i|^2 + h.c. 
⊃g_ GC^2/2M_ GC^2ψ_q_i^†σ̅^μ q_i( ψ_u̅_j^†σ̅_μu̅_j +ψ_d̅_j^†σ̅_μd̅_j )+ h.c., where in the second line, we have assumed χ_i=q_i and χ̅_i=u̅_i,d̅_i in the fundamental and antifundamental representation of SU(3)_c, respectively, with ψ_χ_i, ψ_χ̅_i the corresponding grand-color partners as given in Table <ref>. After the second symmetry breaking stage in (<ref>) at M_ Sp, the Sp(2N) group is assumed to confine at a scale , below M_ Sp but above the electroweak scale. With 2F Weyl fermions in the fundamental representation of Sp(2N), the strong dynamics breaks the (approximate) global symmetry SU(2F) → Sp(2F), provided 2F≲ (5-8)N <cit.>, which is easily satisfied in our model. Correspondingly, the quark bilinears form condensates, which in the large N limit are estimated to be ⟨ψ_Iψ_J⟩≃N/16π^2^3 Σ_0^IJ≡Σ_0^IJ , where I,J,… are SU(2F) flavor indices, Σ_0 denotes the vacuum configuration and ψ_I = (ψ_q_u, ψ_q_d, ψ_u̅, ψ_d̅, ⋯)^T (with T denoting the transpose). In summary, the essence of the model is as follows: We impose a classical U(1)_ξ chiral symmetry Ψ_ξ→ e^iαΨ_ξ on one of the quarks, Ψ_ξ=(ξ,ψ_ξ)^T. Performing a U(1)_ξ chiral rotation removes the strong CP phase. The U(1)_ξ symmetry of ψ_ξ is then spontaneously broken by the fermion bilinear condensate in Eq. (<ref>). Since ξ is in the same SU(2N+3) multiplet as ψ_ξ, the quark ξ should feel the same symmetry breaking and obtain a mass. This can indeed occur via the heavy gauge boson exchange that generates the four-fermion operator in Eq. (<ref>). In the minimal model, ξ is identified as one of the right-handed SM quarks. Alternatively, we may add a vector-like fermion and also generate the ξ quark mass by Sp(2N) dynamics. §.§ One-generation model To illustrate the basic mechanism, we first present a model with a single fermion generation. Although this is a toy model, the analysis will subsequently be extended to realistic models with multiple generations. §.§.§ The minimal setup We consider a single SM quark generation {Ψ_q_u,Ψ_q_d, Ψ_u̅,Ψ_d̅} charged under the grand-color group and impose a chiral symmetry on Ψ_u̅ which forbids Ψ_u̅ Yukawa couplings. This then allows the θ term to be zero and all Yukawa couplings to be real and positive. The fermions charged under the Sp(2N) group have an approximate global SU(4) symmetry associated with SU(4) transformations of ψ = (ψ_q_u,ψ_q_d,ψ_u̅,ψ_d̅)^T. This symmetry is broken down by the Sp(2N) dynamics to Sp(4), which gives 15-10=5 NGBs, or “pions", in the coset space SU(4)/Sp(4). The dynamics of the pions Π^α(x) can be studied in the non-linear sigma model with a non-linear sigma field Σ(x)=exp(iΠ^α(x)T^α/f) Σ_0, where Σ_0 denotes the vacuum of the theory, as defined in (<ref>), f is the decay constant associated with the global symmetry breaking SU(4)→ Sp(4), and T^α are the generators of SU(4) not in Sp(4). To preserve electroweak symmetry, the vacuum is chosen with Σ_0, EW≡ diag(iσ_2, -iσ_2) which is equivalent to the fermion condensates ⟨ψ_q_uψ_q_d⟩ = ⟨ψ_d̅ψ_u̅⟩ = . These condensates spontaneously break the chiral symmetry of Ψ_u̅ and generate the Yukawa coupling of u̅, as will be shown below. Here we assumed ψ_q_uψ_q_d>0 by a baryon number rotation, and fixed the sign of ψ_d̅ψ_u̅=-ψ_u̅ψ_d̅>0. As discussed in appendix <ref>, this is done by choosing θ=0 which implies ⟨ψ_q_uψ_u̅⟩=⟨ψ_q_dψ_d̅⟩ < 0 and then relating the signs of these condensates to those in the vacuum (<ref>) using a non-anomalous flavor transformation in SU(4)/Sp(4). 
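A small bookkeeping sketch of the NGB counting quoted above (the flavor content assumed below is that of the one-generation model; the choice N = 5 is purely illustrative):

def n_goldstones(F):
    """Number of NGBs from SU(2F) -> Sp(2F): dim SU(2F) - dim Sp(2F)."""
    return (4 * F**2 - 1) - F * (2 * F + 1)

# One generation charged under Sp(2N): psi_{q_u}, psi_{q_d}, psi_{ubar}, psi_{dbar}
# are 4 Weyl fundamentals, i.e. 2F = 4.
print(n_goldstones(2))          # 5: an electroweak singlet plus a Higgs-like doublet

# Chiral symmetry breaking condition quoted above, 2F <~ (5-8) N, for an illustrative N:
N, F = 5, 2
print(2 * F <= 5 * N)           # True: easily satisfied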
The SU(4)/Sp(4) coset space can be more conveniently understood in terms of the SO(6)/SO(5) coset space, which has the same structure. The pions are then in the 5 of SO(5). To identify the quantum numbers, we can decompose the pions as 1+ 4 of SO(4)⊂ SO(5). The 4 representation is equivalent to the (2,2) representation of SU(2)× SU(2)≅ SO(4). Thus, the NGBs in SU(4)/Sp(4) transform as an electroweak singlet and a Higgs-like doublet. The Higgs-like NGBs, Π_H, obtain a mass-squared due to electroweak loops given by m_Π_H^2≃g_2^2/16π^2^2 , where g_2 is the SU(2)_L gauge coupling (ignoring hypercharge contributions), and the overall constant in (<ref>) is assumed to be order one. The gauge contributions are assumed to dominate the corrections to the pion potential and thus the positive mass squared in (<ref>) implies that electroweak symmetry remains unbroken after Sp(2N) confinement. This is indeed the case unless some of the Yukawa couplings are O(1) (as will be discussed in section <ref>). The flavor singlet, however, remains massless. It is associated with the spontaneous breaking of the U(1) baryon symmetry due to condensation. [The spontaneous breaking of baryon number does not cause proton decay since the SM quarks possess an independent baryon symmetry, under which Sp(2N) quarks are not charged, due to the SU(3)_c gauge symmetry, and also lepton number is not violated. This independent baryon symmetry also guarantees the absence of neutron-antineutron oscillations.] To see how the condensates (<ref>) generate a mass for the massless up quark we next consider corrections to the NGB Yukawa interactions due to the grand-color gauge bosons as well as corrections to the NGB potential from interactions with the SM Higgs. When the Higgs field breaks electroweak symmetry these corrections will generate an up quark mass. The relevant terms in the Lagrangian are given by ℒ⊃g_ GC^2/M_GC^2quψ^†_qψ^†_u̅ -y_dψ_qψ_d̅H + h.c. , where the first term, due to heavy gauge boson exchange, is computed in appendix <ref> and the second term follows from (<ref>). As shown in the left diagram of figure <ref>, the Sp(2N) quark interactions in (<ref>) together with the fermion condensates in (<ref>) are responsible for generating the up quark Yukawa coupling. In the pion picture, a non-zero up Yukawa coupling arises from the Yukawa interaction in (<ref>) and can be understood as a result of the pion-Higgs mixing between H and the Higgs-like NGBs Π^H (see right diagram of figure <ref>), or equivalently, a non-zero pion VEV for the real, neutral component of Π^H, induced by electroweak symmetry breaking. To compute the induced VEV effect, we consider the fluctuations around the condensate in (<ref>) ψ^I ψ^J ≃(Σ_0^IJ+i/f(T^αΠ^α)^IKΣ_0^KJ+…), where I,J,K= q_u,q_d, u̅, d̅ and there is a sum over α=1,…,5. A non-zero condensate of ψ_q_dψ_d̅ corresponds to the displacement from the electroweak-symmetric vacuum by rotations of (ψ_q_d, ψ_u̅) and (ψ_q_u, ψ_d̅). In the basis (ψ_q_u, ψ_q_d, ψ_u̅, ψ_d̅), the generator of the corresponding NGB direction Π^h, where Π^h/√(2) is the real part of the electromagnetic neutral component of Π^H, is T^h=1/2√(2)[ 0 0 0 -i; 0 0 i 0; 0 -i 0 0; i 0 0 0 ], so that the quark bilinears are mapped into ψ_q_dψ_d̅ = ψ_q_uψ_u̅≃/2√(2)fΠ^h , where we have omitted Im Π^H since it does not obtain a VEV. 
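The normalization of the last two relations can be cross-checked numerically (a small sketch; the explicit 4×4 form of Σ_0,EW and the broken-generator condition used below are our own spelling-out of the conventions above):

import numpy as np

Sigma0 = np.array([[0, 1, 0, 0],
                   [-1, 0, 0, 0],
                   [0, 0, 0, -1],
                   [0, 0, 1, 0]], dtype=complex)     # Sigma_{0,EW} = diag(i sigma_2, -i sigma_2)
Th = (1 / (2 * np.sqrt(2))) * np.array([[0, 0, 0, -1j],
                                        [0, 0, 1j, 0],
                                        [0, -1j, 0, 0],
                                        [1j, 0, 0, 0]])   # the generator T^h quoted above

# T^h is a broken generator: it satisfies T Sigma_0 = Sigma_0 T^T, whereas an
# unbroken Sp(4) generator would satisfy T Sigma_0 + Sigma_0 T^T = 0.
print(np.allclose(Th @ Sigma0, Sigma0 @ Th.T))        # True
print(np.allclose(Th @ Sigma0 + Sigma0 @ Th.T, 0))    # False

# Leading fluctuation i (T^h Sigma_0) Pi^h/f in the basis (psi_qu, psi_qd, psi_ubar, psi_dbar):
# the (q_d, dbar) and (q_u, ubar) entries both equal 1/(2 sqrt(2)), reproducing
# psi_qd psi_dbar = psi_qu psi_ubar ~ Pi^h/(2 sqrt(2) f) quoted above.
lin = 1j * Th @ Sigma0
print(lin[1, 3].real, lin[0, 2].real)                 # 0.3536 0.3536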
The Lagrangian (<ref>) can be rewritten in terms of pions ℒ⊃g_ GC^2/2√(2)fM_GC^2 q_uu Π^h -y_d/√(2)fΠ^h h-1/2m^2_Π^H(Π^h)^2 , where h/√(2) is the real part of φ^0 and we have also included the mass term from (<ref>). The Π^h h term generates a mixing between the Higgs H and the Higgs-like NGBs Π^H. Equivalently, it can be thought of as a tadpole term below the electroweak symmetry breaking scale, v, i.e. when h=v it induces a non-zero VEV Π^h≃ -y_d v N Λ_ Sp/√(2) g_2^2 f. This also corresponds to a non-zero condensate of ψ_q_uψ_u̅ <0. The four-fermion operator in (<ref>) then generates a non-zero up quark mass proportional to the down Yukawa coupling y_u = m_u/v≃N/4g_GC^2/g_2^2Λ_ Sp^2/M_ GC^2y_d ≡ϵ y_d , where we have used f≃√(N)Λ_ Sp/4π. Note that the generated up quark mass is positive, so the strong CP phase θ̅ is indeed zero rather than π. An alternate proof for θ̅=0 without using the pion picture is given in appendix <ref>. The fact that the up Yukawa coupling is proportional to y_d can also be understood using a symmetry argument. When y_d = 0, there are two chiral symmetries associated with u̅ and d̅. The condensation of ψ_u̅ψ_d̅ breaks these symmetries to a diagonal subgroup which does not have an Sp(2N) anomaly. This unbroken chiral symmetry forces the up and down quarks to be massless. The observed magnitude of the up Yukawa coupling can be obtained from (<ref>) by relating the mass scales Λ_ Sp and M_ GC via the running of the gauge couplings. In particular, since SU(2N) and SU(3)_c unify into a single group at M_ GC, and SU(2N) breaks down to Sp(2N) at the scale M_ Sp, we obtain 2π/α_ Sp(Λ_ Sp)+b_ Splog(M_ Sp/Λ_ Sp) +b_ 2Nlog(M_ GC/M_ Sp) =2π/α_c(m_Z)+b_3log(M_ GC/m_Z) , where m_Z is the Z-boson mass, α_c and α_ Sp are the fine-structure constants of SU(3)_c and Sp(2N), respectively, and b_3, b_ Sp and b_ 2N are the β-function coefficients of QCD, Sp(2N) and SU(2N), respectively. The β-function coefficients b_ N and b_ Sp for SU(N) and Sp(2N), respectively, are b_ Sp= 11/3(N+1) - 2/3F_ Sp , b_ N= 11/3 N - 2/3F_ N , where F_ Sp and F_ N are the number of fermion flavors charged under Sp(2N) and SU(N), respectively. Since Λ_ Sp<M_ Sp < M_ GC, it suffices to consider two limiting cases, M_ Sp=M_ GC and M_ Sp=Λ_ Sp, and find the range of up Yukawa couplings allowed by them. Using Eqs. (<ref>), (<ref>), and (<ref>), with α_c(m_Z)=0.1184, the value of y_u can be computed for given Λ_ Sp and N. In figure <ref>, we show the constraints on Λ_ Sp and N. In the lower (upper) green region, the up Yukawa coupling is too small (large) when M_ Sp=Λ_ Sp (M_ GC). Between these two regions (shown in white), the observed value of the up Yukawa coupling can be obtained for Λ_ Sp<M_ Sp < M_ GC. It is clear that a reasonably large value of N is required in the minimal case when there is no up-quark Yukawa coupling at the UV scale. For large N, there are many SU(2)_L charged states above Λ_ Sp, so the Landau pole scale of SU(2)_L may be low. The dotted contours of figure <ref> show the SU(2)_L Landau pole scale, Λ_2. The Landau pole is seen to be much above Λ_ Sp and therefore the theory can be safely analyzed without knowing the UV completion of SU(2)_L. Furthermore, it should be noted that a sufficiently large grand-color symmetry breaking scale can generate a non-zero strong CP phase of QCD. This arises from the dimension-six term ⊃ϕ_ GC^†ϕ_ GC GG between the grand-color breaking scalar fields, ϕ_ GC, and the SU(2N+3) gauge fields with gauge field strength G, which leads to θ_Sp(2N)≠θ_SU(3)_c.
Assuming these terms are suppressed by the Planck scale and a loop factor, imposing θ̅ < 10^-10 requires that M_ GC < 10^-5 M_ Pl. [As in Ref. <cit.>, we assume that the dimension-five term ϕ_ GCGG is suppressed, which can be guaranteed by an additional symmetry such as a ℤ_2 symmetry under which ϕ_ GC is odd. If the dimension-five term exists, the upper bound on M_ GC becomes 10^-10 M_ Pl and there are still viable parameter regions even with this more stringent constraint.] In the orange-shaded region (to the right of the orange-dashed line) of figure <ref>, M_ GC > 10^-5 M_ Pl when M_ Sp=Λ_ Sp (M_ GC). This constraint can be relaxed when the grand-color breaking occurs via strong dynamics. In addition, there are also constraints from higher-dimensional CP-violating operators <cit.> without grand-color breaking fields such as GGG. Assuming the operator is suppressed by the Planck scale, the constraint θ̅ < 10^-10 requires that Λ_ Sp < 10^-5 M_ Pl. This constraint is weaker than M_ GC < 10^-5 M_ Pl, but avoiding the constraint without imposing CP symmetry would require a drastic assumption such as the compositeness of the SU(2N+3) gauge field. Note that even supersymmetry cannot forbid GGG̃. §.§.§ Adding vector-like quarks The required up Yukawa coupling can be obtained with a smaller value of N by adding a vector-like fermion to the theory. In particular, we introduce a pair of Weyl fermions Ψ_U and Ψ_U̅ in the fundamental and anti-fundamental representations of SU(2N+3), respectively, where Ψ_U̅ has the same gauge charge as Ψ_u̅ in Table <ref> while Ψ_U has conjugate charges. The vector-like fermions have a mass m_U and a Yukawa interaction L⊃ - m_U Ψ_U Ψ_U̅- λ_UΨ_q Ψ_U̅H + h.c. The parameters m_U and λ_U are assumed to be real and positive by taking advantage of the phase rotations of Ψ_U and Ψ_U̅, respectively. The θ term associated with the grand-color group can then be removed by a phase rotation of Ψ_u̅, which is assumed to have a zero Yukawa coupling. [There is also the alternative possibility of imposing a chiral symmetry on Ψ_U which will be discussed later.] The SU(2)_L interaction prefers the condensate ⟨ψ_q_uψ_q_d⟩>0 (which is made positive by a baryon number rotation), while the U(1)_Y interaction forces ψ_d̅ to condense with ψ_u̅ and ψ_U̅. This then leads to the following possible condensates ⟨ψ_q_uψ_q_d⟩=⟨ψ_d̅ψ_U̅⟩/cosϕ= ⟨ψ_Uψ_u̅⟩/cosϕ= ⟨ψ_d̅ψ_u̅⟩/sinϕ =-⟨ψ_Uψ_U̅⟩/sinϕ= , where ϕ corresponds to the VEV of one of the NGB directions (see appendix <ref>). Note that the condensate signs in (<ref>) are determined by requiring that the vacuum is connected to ψ_q_uψ_u̅=ψ_q_dψ_d̅= ψ_U ψ_U̅<0 by a non-anomalous flavor transformation (see Eq. (<ref>)). The value of ϕ is determined by the relative importance of the Dirac mass m_U with the Yukawa couplings λ_U and y_d. As shown in appendix <ref> (see (<ref>)), we obtain ϕ≃arctan(16 π^2m_U/λ_U y_dΛ_ Sp). This equation shows that for m_U≫λ_U y_d Λ_ Sp, ϕ≃π/2, so that ψ_U̅ forms a condensate mainly with ψ_U. This is because the Dirac mass term, which gives rise to a tadpole term for the meson composed of ψ_U ψ_U̅, favors the condensation of ψ_U ψ_U̅. On the other hand, for m_U≪λ_U y_d Λ_ Sp, we see that ϕ≃ 0, implying that the Higgs boson exchange favors the condensation of ψ_U̅ with ψ_d̅. This can be intuitively understood by an effective interaction ∝λ_U y_d ψ_q ψ_d̅ψ_q ψ_U̅ generated by the exchange of the Higgs boson. The condensation of ψ_q ψ_q then generates an effective mass term ψ_U̅ψ_d̅.
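For orientation, the interpolation between these two limits can be evaluated explicitly (a sketch evaluating ϕ ≃ arctan(16π^2 m_U/(λ_U y_d Λ_Sp)); the numerical inputs are our own illustrative choices):

import math

def phi(m_U, lam_U, y_d, Lam_Sp):
    """Alignment angle phi ~ arctan(16 pi^2 m_U / (lam_U y_d Lam_Sp))."""
    return math.atan(16 * math.pi**2 * m_U / (lam_U * y_d * Lam_Sp))

lam_U, y_d, Lam_Sp = 0.1, 2.7e-5, 1e6          # GeV for Lam_Sp; illustrative couplings
for m_U in [0.0, 1e-2, 1e-1, 1e3]:             # GeV
    print(f"m_U = {m_U:g} GeV -> phi = {phi(m_U, lam_U, y_d, Lam_Sp):.3f} rad")
# The crossover sits at m_U ~ lam_U y_d Lam_Sp/(16 pi^2) ~ 20 MeV for these inputs;
# well below it psi_Ubar pairs with psi_dbar (phi ~ 0), well above it with psi_U (phi ~ pi/2).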
The condensates involving ψ_u̅ spontaneously break the U(1) chiral symmetry of ψ_u̅, which is then communicated to u̅ via the following four-fermion operator, g_ GC^2/2M_ GC^2ψ_U^†σ̅^μ U ψ_u̅^†σ̅_μu̅ + h.c. = g_ GC^2/M_ GC^2 U u̅ψ_u̅^†ψ_U^† + h.c. Using (<ref>), the condensation of ψ_U and ψ_u̅ then generates a Uu̅ mass term N g_ GC^2/16π^2Λ_ Sp^3/M_ GC^2cosϕ Uu̅≡ - m_Uu Uu̅ . The generation of this mass term in the quark picture can be understood by the Feynman diagram shown in figure <ref>. When m_U > |m_Uu|, we may integrate out U U̅ and then via u̅-U mixing, u̅ obtains a Yukawa coupling with q H given by y_u^(1) = -m_Uu/m_Uλ_U = N g_ GC^2/16π^2Λ_ Sp^3/m_U M_ GC^2 λ_Ucosϕ. Furthermore, as in the minimal model without vector-like quarks, there is an additional contribution to the up Yukawa coupling generated from the four-fermion operator in Eq. (<ref>). The ψ_u̅ψ_d̅ condensate then leads to an up Yukawa coupling proportional to the down Yukawa coupling given by y_u^(2) = N/4g_GC^2/g_2^2Λ_ Sp^2/M_ GC^2y_d sinϕ . The total contribution to the up Yukawa coupling is then y_u = y_u^(1) + y_u^(2), where the relative contributions of the two effects depend on the value of ϕ. The allowed parameter space of Λ_ Sp and N that generates the required up Yukawa coupling is shown in figure <ref> where the shaded regions and contours are similar to those in figure <ref>. The straight boundaries of the green regions (Λ_ Sp≲ 10^6 GeV) are determined by the up Yukawa contribution in Eq. (<ref>), while the curved boundaries (Λ_ Sp≳ 10^6 GeV) are determined by Eq. (<ref>). We have assumed m_U = 1 TeV, λ_U =0.1, and included the vector-like quark contribution in the running of the gauge coupling. The assumed value of m_U in figure <ref> is consistent with the lower bound on the vector-like quark mass, which is approximately 500 GeV if it decays dominantly into the first two generations <cit.>, whereas it is approximately 1.3 TeV if it decays dominantly into the third-generation quarks <cit.>. As m_U becomes larger or λ_U becomes smaller, the curved boundaries move to the right. Thus, as shown in figure <ref>, in the vector-like quark case a value of N as low as 4 can generate the observed up Yukawa coupling, which is much smaller than in the case without a vector-like quark. This arises because the mass mixing contribution (<ref>) is enhanced by a factor of (g_2^2/64π^4)(λ_UΛ_ Sp/m_U)^2 compared to the down quark Yukawa coupling contribution (<ref>) and therefore, with vector-like quarks, the up Yukawa coupling does not require large values of N for Λ_ Sp≫ m_U. Note that the condensate ⟨ψ_d̅ψ_U̅⟩ also gives a contribution to the down-Yukawa coupling in a similar manner as the up-Yukawa generation (<ref>), namely y_d^(2)=N/4g_GC^2/g_2^2Λ_ Sp^2/M_ GC^2λ_U cosϕ . This contribution is in addition to the tree-level down Yukawa coupling y_d, but can be made subdominant when λ_U < 𝒪(1). Therefore, the down Yukawa coupling is not excessively generated via condensation. Note that when y_d=0 we cannot use (<ref>) to generate the down Yukawa coupling because in this limit both (<ref>) and (<ref>) vanish, since due to (<ref>) we obtain ϕ=π/2. Finally, when m_U < |m_Uu|, we may integrate out Uu̅ and identify U̅ with the right-handed up quark and y_u = λ_U, while U and u̅ are extra heavy colored states. The mass m_Uu should be greater than the TeV scale to avoid collider constraints on new colored particles. In this case, the constraints on Λ_ Sp as a function of N are shown in figure <ref>.
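For orientation, the way constraint curves like these are obtained can be sketched in a few lines: impose the one-loop matching relation above with 2π/α_Sp(Λ_Sp) set to zero at the confinement scale, solve for M_GC in the limiting case M_Sp = M_GC, and insert the result into y_u ≃ ϵ y_d of the minimal model. This is only a schematic reimplementation: the flavor content entering the β functions, the identification g_GC ≈ g_3(M_GC), and all numerical inputs are our own simplifying assumptions, so the output should not be read as the published contours.

import math

alpha_c_mZ, alpha_2_mZ, mZ = 0.1184, 0.0338, 91.2   # inputs at the Z mass (approximate)
y_d_obs, y_u_obs = 2.7e-5, 1.3e-5                    # rough SM Yukawas (illustrative)

def M_GC_minimal(Lam_Sp, N, F_Sp=2, nf=6):
    """Solve b_Sp*log(M_GC/Lam_Sp) = 2pi/alpha_c(mZ) + b_3*log(M_GC/mZ) for M_GC,
    i.e. the limiting case M_Sp = M_GC with alpha_Sp blowing up at Lam_Sp."""
    b_Sp = 11/3 * (N + 1) - 2/3 * F_Sp
    b_3 = 11 - 2/3 * nf
    x = (2*math.pi/alpha_c_mZ + b_Sp*math.log(Lam_Sp) - b_3*math.log(mZ)) / (b_Sp - b_3)
    return math.exp(x)

def y_u_minimal(Lam_Sp, N):
    """y_u ~ (N/4)(g_GC^2/g_2^2)(Lam_Sp^2/M_GC^2) y_d with g_GC taken as g_3(M_GC)."""
    M_GC = M_GC_minimal(Lam_Sp, N)
    b_3 = 11 - 2/3 * 6
    alpha_GC = 2*math.pi / (2*math.pi/alpha_c_mZ + b_3*math.log(M_GC/mZ))
    g_GC2, g_22 = 4*math.pi*alpha_GC, 4*math.pi*alpha_2_mZ
    eps = N/4 * g_GC2/g_22 * Lam_Sp**2 / M_GC**2
    return eps * y_d_obs

for N in [10, 20, 30]:
    print(f"N={N}: y_u ~ {y_u_minimal(1e6, N):.1e}  (observed ~ {y_u_obs:.1e})")

With these crude inputs the observed up Yukawa coupling is only reached for fairly large N at Λ_Sp = 10^6 GeV, in line with the qualitative statement above that the minimal model requires a reasonably large grand-color group.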
The parameter space is excluded in the red region (below the red dashed line) in figure <ref>, because m_Uu is below the TeV scale assuming SU(2N) is broken to Sp(2N) just above (below M_ GC.) In the green region, the up Yukawa coupling becomes too large from the y_d contribution (<ref>). The special limiting case with m_U=0 can be achieved by imposing a chiral symmetry on Ψ_U rather than on Ψ_u̅. In this case, both Ψ_u̅ and Ψ_U̅ have Yukawa couplings, but we can perform a flavor transformation on Ψ_u̅ and Ψ_U̅ so that only Ψ_U̅ has a Yukawa coupling. Note that the generated up Yukawa or the mass of the new vector-like fermion is not suppressed by the product of the Yukawa couplings y_d, λ_U or the Dirac mass term m_U, owing to the spontaneous breaking of the chiral symmetry. This differs from generating the Yukawa coupling or mass using instanton effects. §.§.§ Accidental chiral symmetry The model with vector-like quarks can be promoted to a model where the anomalous chiral symmetry is accidental as discussed in section <ref>. To do so, we add an extra pair of vector-like quarks Ψ_D and Ψ_D̅ and assume both Ψ_u̅ and Ψ_D are odd under a ℤ_2 symmetry. The quarks Ψ_D̅ can have a Yukawa coupling to Ψ_qH, but one can take a linear combination of Ψ_d̅ and Ψ_D̅, re-branded as Ψ_d̅, so that only Ψ_d̅ couples to Ψ_qH. There is an SU(2) flavor symmetry of (ψ_u̅, ψ_D) at the Sp(2N) scale and associated massless directions, but the exchange of heavy SU(2N) gauge bosons breaks the symmetry and gives masses ∼^2/M_ Sp along those directions. As a result, ψ_q,u̅,d̅,U,U̅ condense in the same way as discussed in section <ref> to generate the up Yukawa coupling, while ψ_D forms a condensate with ψ_D̅ to give a mass to D and D̅. Alternatively, we may impose an odd ℤ_2 charge to Ψ_D̅ rather than Ψ_D. In this case, a Dirac mass term between Ψ_d̅ and Ψ_D is allowed by the discrete symmetry. However, unless the Dirac mass term is small, the analysis in section <ref> will need to be modified. For m_U=0, we may instead impose a ℤ_2 symmetry under which Ψ_U and Ψ_D are odd and other fields, including Ψ_u̅, are even. Finally, recall that the ℤ_2 symmetric model in section <ref> has a domain wall problem. Similarly, in the Sp(2N) model with ℤ_2 symmetry, a domain wall problem arises if the Sp(2N) confinement occurs after inflation. This problem can be avoided by extending the symmetry to ℤ_3 with the addition of one more pair of vector-like quarks, D^', D̅^' where u̅, D, and D^' are charged under this symmetry. §.§ Two-generation model §.§.§ The minimal setup We next analyze the two-generation case with Ψ_q_1,2,Ψ_u̅_1,2, and Ψ_d̅_1,2. However, unlike the one-generation model we will find that this minimal setup does not work and vector-like quarks will eventually need to be added. First, consider the minimal model with a chiral symmetry on Ψ_d̅_1 and introduce the following Yukawa interactions L = -Y^u_iaΨ_q_iΨ_u̅_a H - Y^d_i2Ψ_q_iΨ_d̅_2H+ h.c., where i=1,2, a=1,2, and the up-type quark Yukawa coupling matrix Y^u is taken to be diagonal by flavor transformations of Ψ_q_1,2,Ψ_u̅_1,2, while Y^d has only Ψ_d̅_2 interactions, i.e., Y^u = [ y_u 0; 0 y_c ],  Y^d = [ 0 y_1; 0 y_2 ]. We denote the upper (lower) component of the doublet q_1 as q_u(q_d) and similarly for the doublet q_2 as q_c(q_s), although q_d and q_s are not purely the left-handed down and strange quarks. All of the coupling constants are taken to be real and positive by the phase rotation of quarks and θ=0 by the (anomalous) chiral rotation of d̅_1. 
The Sp(2N) strong dynamics generates the condensate as parameterized in Eq. (<ref>). As shown in appendix <ref>, the condensate is given by ψψ^T=×[ 0 cosϕ 0 sinϕ 0 0 0 0; -cosϕ 0 -sinϕ 0 0 0 0 0; 0 sinϕ 0 - cosϕ 0 0 0 0; -sinϕ 0 cosϕ 0 0 0 0 0; 0 0 0 0 0 -1 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 1; 0 0 0 0 0 0 -1 0 ],  ϕ = -arctany_1/y_2, in the basis ψ=(ψ_q_u,ψ_q_d,ψ_q_c,ψ_q_s,ψ_u̅_1,ψ_d̅_1,ψ_u̅_2,ψ_d̅_2)^T. Note that ψ_u̅_1 and ψ_u̅_2 condense only with ψ_d̅_1 and ψ_d̅_2, respectively. This can be intuitively understood as follows. The exchange of the Higgs generates effective interactions L∼1/^2( y_u ψ_q_uψ_u̅_1 + y_c ψ_q_cψ_u̅_2) ( y_1 ψ_q_dψ_d̅_2 + y_2 ψ_q_sψ_d̅_2). For y_c > y_u, to minimize the energy given by this interaction, ψ_d̅_2 should condense exclusively with ψ_u̅_2, since a non-zero ψ_d̅_2ψ_u̅_1 condensate would reduce the magnitude of the ψ_d̅_2ψ_u̅_2 condensate and increase the energy. More explicitly, for the field space parameterized by two parameters ϕ and α as ψ_q_uψ_q_d =- ψ_q_cψ_q_s = cosϕ , ψ_q_uψ_q_s =- ψ_q_dψ_q_c =sinϕ , -ψ_u̅_1ψ_d̅_1 =ψ_u̅_2ψ_d̅_2= cosα , -ψ_u̅_1ψ_d̅_2 = -ψ_u̅_2ψ_d̅_1 = sinα , the potential energy obtained from (<ref>) is V ∼ - ( y_c cosα (-y_1 sinϕ + y_2 cosϕ) + y_u sinα (y_1 cosϕ + y_2 sinϕ) ) , which is minimized at α=0 and ϕ =- arctan(y_1/y_2). However, the vanishing ψ_d̅_2ψ_u̅_1 condensate leads to a phenomenological problem. Similar to the mechanism in the minimal one-generation model, the quark condensate generates the following Yukawa couplings, Y_d = [ ϵ y_u y_2/√(y_1^2 + y_2^2) y_1 - ϵ y_c y_1/√(y_1^2 + y_2^2); - ϵ y_u y_1/√(y_1^2 + y_2^2) y_2 + ϵ y_c y_2/√(y_1^2 + y_2^2) ], Y_u = [ y_u 0; 0 y_c + ϵ√(y_1^2 + y_2^2) ] , where ϵ∼^2/M_ GC^2 and we have omitted O(ϵ) off-diagonal terms in Y_u since they result in 𝒪(ϵ^2) corrections to the up-type Yukawa couplings. The determinant of the Yukawa matrix Y_d is positive and the strong CP phase is zero. However, we cannot reproduce the observed Yukawa couplings. Since ψ_d̅_1 only condenses with ψ_u̅_1, the Yukawa coupling of d̅_̅1̅ is proportional to y_u and is at most ϵ y_u. Given that we must have ϵ = y_d/y_u to obtain the observed down Yukawa coupling, the charm Yukawa then generates a down-type Yukawa as large as y_c y_d/y_u ≫ y_s, leading to an unacceptably large strange quark mass. Alternatively, we could impose a chiral symmetry on Ψ_u̅_1, but the failure persists. In this case, to obtain the observed up Yukawa coupling, we need ϵ = y_u/y_d, but then the strange quark mass is again too large. The minimal two-generation model is therefore not phenomenologically viable. §.§.§ Adding vector-like quarks A successful model can be constructed by adding vector-like quarks. For example, we can introduce Ψ_U and Ψ_U̅ with the following interactions L = -Y^u_i2Ψ_q_iΨ_u̅_2H- Y^d_iaΨ_q_iΨ_d̅_aH - λ_UiΨ_q_iΨ_U̅ H - m_U Ψ_UΨ_U̅+ h.c., where Ψ_u̅_1 is assumed to have a chiral symmetry and Ψ_u̅_2,Ψ_d̅_1,2 now have nonzero Yukawa interactions with Yukawa matrices Y^u = [ 0 y_1^'; 0 y_2^' ], Y^d = [ y_d 0; 0 y_s ] . Note that, in principle, there is a Ψ_UΨ_u̅_2 mass term, but we have redefined the Ψ_U̅, Ψ_u̅_2 fields since only one linear combination of them couples to Ψ_U. Furthermore, we use the remaining Ψ_U, Ψ_U̅ phase rotations to make m_U, λ_U2 real. As such, λ_U1 is the only complex coupling appearing in (<ref>). The vacuum structure of the model for generic couplings and masses in (<ref>) is complicated, but there are few limiting cases that can be easily analyzed. 
When m_U=0, ψ_U and ψ_u̅_1 do not have a mass or Yukawa couplings with other quarks, so they condense with each other and generate a mass term U u̅_1. As in the one-generation model, m_U=0 can be achieved by imposing a chiral symmetry on Ψ_U, rather than to Ψ_u̅_1. The quarks U and u̅_1 are new Dirac fermions while U̅ is identified with one of the SM right-handed up-type quarks. The observed up Yukawa coupling is then explained by appropriately choosing Y^u,d and λ_Ui. The constraints on the parameter space is similar to the one-generation model with m_U < |m_Uu|, except that an extra constraint from generating a too large strange Yukawa from the charm Yukawa should be added. In figure <ref>, we show the constraints on and N. When m_U≠ 0, some of the SM quark Yukawa couplings can be generated in the same manner as in section <ref>. This is in particular effective when m_U ≪λ_U1 y_d or λ_U2 y_s, for which the condensation pattern of ψ_U̅ is mainly determined by the Yukawa couplings rather than by the mass terms. The vector-like fermion ψ_U̅ mainly condenses with ψ_d̅_i, while ψ_U condenses with ψ_u̅_1, generating a mass m_Uu̅_1≃ - N g_ GC^2 ^3 / (16π^2 M_ GC^2) to U u̅_1. Assuming m_U > |m_Uu̅_1|, u̅_1 obtains a Yukawa coupling -λ_Ui m_Uu̅_1/ m_U to q_i H via the mixing with U̅. Instead of Ψ_U and Ψ_U̅, we may add vector-like quarks with different gauge charges and construct a similar model. Furthermore, as in the one-generation model, the anomalous chiral symmetry can be realized as an accidental symmetry at low energies. We will discuss such a model in section <ref>, together with the constraints from long-lived relics and a domain wall problem. § THREE-GENERATION MODELS In this section we discuss the three-generation case in the SM. The natural generalization of the one- and two-generation models, however, has a Higgs-pion mixing problem that results in a large electroweak symmetry breaking scale. [This should also be an issue in the grand-color axion model <cit.>.] Nevertheless, this problem can be avoided as will be subsequently discussed. §.§ Higgs-pion mixing In order to obtain a large top quark mass, the Higgs must couple to the third-generation quarks with an O(1) Yukawa coupling L⊃ -y_3 Ψ_q_3Ψ_u̅_3H + h.c.= -y_3 ( q_3 u̅_3 + ψ_q_3ψ_u̅_3)H+ h.c. . The second term in (<ref>) generates a large mixing between the Higgs and a Π^H pion [The SM Higgs also mixes with pions that arise from the Yukawa interactions with the first two generations (see Eq. (<ref>)), but this mixing is negligible.] given by y_3κ√(N)/4πΛ_ Sp^2 Π^H H^† + h.c., where κ is an order one constant and the third-generation index on the pion field will be suppressed in the following for simplicity. This mixing destabilizes the Higgs and/or pion. Indeed, the mass-squared matrix is given by V⊃[ H^† Π^H† ][ m_H^2 y_3κ√(N)/4πΛ_ Sp^2; y_3κ√(N)/4πΛ_ Sp^2 κ'y_3^2/16π^2Λ_ Sp^2 ][ H; Π^H ] , where m_H is the tree level Higgs mass and the (2,2) entry arises from quantum corrections due to the Yukawa coupling, y_3. The sign and magnitude of the O(1) coefficient κ' is model dependent. [The sum of the quantum corrections due to the mass mixing between H and Π^H and the quartic terms from the Yukawa couplings or the kinetic term gives a positive κ', but the corrections due to a trilinear coupling is negative. See figure <ref> for the analogous corrections to the mass of neutral pions, where the size of the corrections depends on the matter content of the model.] 
If κ' <0, then Π^H obtains a VEV of O(f) which breaks electroweak symmetry since Π^H is an SU(2)_L doublet. Even if one assumes κ' >0 to avoid electroweak symmetry breaking, it is difficult to obtain an O(1) top Yukawa coupling. In order to have the electroweak scale much below , the determinant of the mass-squared matrix in (<ref>) must nearly vanish, which requires m_H^2 ∼κ^2 N ^2/κ'. The (1,1) entry is then much larger than the other entries, so we may integrate out H. The SM-like Higgs is dominantly Π^H that mixes with H by an angle ∼ y_3κ' / (4πκ√(N)). The top Yukawa is then ∼ y_3^2 κ' / (4πκ√(N)). Assuming κ∼κ', an O(1) top Yukawa coupling requires that y_3 should almost be non-perturbative. The coupling y_3 then develops a Landau pole just above the Sp(2N) confinement scale. §.§.§ Possible ways out Here we list a few possible ways to avoid the difficulty with the top Yukawa coupling: * The third generation quarks are SU(2N+3) singlets and are charged under another gauge group SU(3)_T. The product group SU(3)_T× SU(3) is then broken down to a diagonal SU(3) subgroup, which is identified with the QCD color group, SU(3)_c. The bottom Yukawa coupling vanishes at tree level and is generated by SU(3)_T instantons, so that CP violation from the SU(3)_T θ term is absent. * The third generation fermions are charged under SU(2)_T which has a large gauge coupling. The first two generations of fermions are charged under SU(2)_FS. The product group SU(2)_FS× SU(2)_T is broken down to SU(2)_L at a scale below the Sp confinement scale. The large SU(2)_T gauge coupling generates a large positive correction to the (2,2) entry of Eq. (<ref>) which ameliorates the necessity of requiring a large Yukawa y_3. Furthermore, the top Yukawa at high energy scales can be smaller because of the RGE corrections from SU(2)_T. * Introduce a term ψ_q_3ψ_q_i or ψ_u̅_3ψ_d̅_i with a mass larger than to decouple Π^H. Indeed, these mass terms preserve the Sp(2N)× SU(3)× U(1)_Y symmetry and may arise from SU(2N+3)× U(1) breaking. As long as there is only one mass term, the phase of the mass term can be removed by the baryon number rotation and does not introduce a strong CP phase. Additional mass terms, if they exist, should be smaller than 10^-9 in order not to introduce a strong CP phase. Such significant suppression of the mass terms will require additional structures in the model. * Introduce supersymmetry, which will suppress the off-diagonal entry of Eq. (<ref>). For sufficiently large N, strong dynamics yields the ADS potential <cit.> or the deformed moduli constraint <cit.> and the chiral symmetry may be spontaneously broken. In the next section, we analyze in more detail the first two possibilities that charge the third generation fermions under an extra gauge interaction, leaving a detailed investigation of the supersymmetric possibility for future work. §.§ Extra gauge interactions In this section, we discuss a model with an extra gauge symmetry, SU(3)_T or SU(2)_T, under which only the third generation quarks are charged. The first two generations have the same structure and lead to the same constraints as presented in section <ref>. Note that in section <ref> Ψ_u̅_1 was massless, and now Ψ_d̅_3 will also be assumed to be massless in section <ref>. §.§.§ Extra SU(3) In Ref. <cit.>, the QCD color group SU(3)_c is embedded into SU(3)^3, and the instanton effects of the three gauge groups generates the mass of lighter fermions in each generation. [The minimal model in Ref. 
<cit.> also requires additional SU(3) factors. However, the computation in <cit.> overestimates the instanton effect as pointed out in Ref. <cit.>.] In our model, we only need to generate the bottom Yukawa coupling by the instanton effect of an extra SU(3)_T. This is because SU(3)_c is embedded into SU(2N+3)× SU(3)_T, where SU(2N+3) is eventually broken down to Sp(2N)× SU(3) and Sp(2N) confines. The remaining groups SU(3)× SU(3)_T are then broken down to SU(3)_c. The third-generation quarks q_3 and u̅_3 have the following Yukawa coupling, L = - y_t q_3 u̅_3H + h.c. , where y_t is taken to be real and positive by a chiral rotation. The SU(3)_T theta parameter is taken to be θ_T=0 by the rotation of d̅_3. The right-handed bottom quark d̅_3 does not have a Yukawa interaction and enjoys a chiral symmetry. We assume that the breaking SU(3)_T× SU(3) → SU(3)_c occurs at the scale v_T via a Higgs field Φ_3 and the SU(3)_T gauge coupling is semi-perturbative so that the dilute instanton gas approximation is valid. Then the SU(3)_T instanton can generate a sizable bottom Yukawa coupling <cit.>, y_b ≃ 2× 10^-4 y_t ( 2π/α_T(μ))^6∫dρ/ρ^5 4π^2/3ρ^4 e^-2π/α_T(1/ρ)-i θ_T e^-4π^2v_T^2 ρ^2 , ≃ 3× 10^-4 (Λ_T/v_T)^19/2( ln μ/Λ_T )^6 , where the renormalization scale μ should be near v_T. The would-be confinement scale, Λ_T of SU(3)_T, when the SU(3)_T× SU(3) symmetry is unbroken (i.e., Λ_T < v_T), is defined to be Λ_T ≃ v_T × exp(- 2π/α_T(v_T) × b_T) , where b_T=19/2 is the β-function coefficient of the SU(3)_T gauge coupling which includes the gauge, fermion, and Higgs contributions. Note that the gauge coupling of SU(3)_T is a free parameter so that α_T(v_T) can be chosen as large as possible to maximize the instanton effect. This is unlike the minimal model of Ref. <cit.>. Despite this, the largest possible bottom Yukawa coupling that can be generated is, for μ=v_T, y_b≃ 10^-7 when Λ_T ≃ 0.5 v_T, or equivalently α_T(v_T)≃ 1. To estimate the uncertainty of the generated bottom Yukawa coupling in (<ref>), we can vary μ by an O(1) factor. For μ=(1/6-6)v_T, we find that the maximal value of y_b ranges from 10^-15-10^-2. Given this large uncertainty, it is possible that the observed bottom Yukawa could be explained in the minimal model, but instead we propose an extension of the model where the observed bottom Yukawa coupling can be more concretely obtained. To fix the problem of the minimal setup and enhance the bottom Yukawa coupling, we introduce vector-like quarks B and B̅, where B̅ has the same gauge charges as d̅_̅3̅, with the following couplings and masses, L = - m_B BB̅ - y_t q_3 u̅_3 H - λ_B q_3 B̅H + h.c.. The phases in the Yukawa coupling and mass terms can be removed by rotations of the fermions B̅, q_3, and u̅_3, so that m_B, y_t and λ_B are real parameters. The SU(3)_T instanton generates a Dirac mass for d̅_3 and B, L = m_Bd B d̅_3+ h.c. , where m_Bd ≃ 2 × 10^-4 y_tλ_B/24π^2 e^-2π/α_T(v_T)( 2π/α_T(μ))^6∫dρ/ρ^5 4π^2/3ρ^3 (v_Tρ)^53/6 e^-4π^2v_T^2 ρ^2 , ≃ 10^-4 λ_B v_T (Λ_T/v_T)^53/6( ln μ/Λ_T )^6 . The would be confinement scale, Λ_T, is defined as before in (<ref>), with b_T=53/6 due to the gauge, fermion and Higgs contributions. When m_Bd < m_B, we may integrate out BB̅ and the bottom Yukawa in the low energy effective theory is y_b = λ_B m_Bd/m_B. In figure <ref>, we show the required value of α_T(v_T) (or equivalently v_T/Λ_T) as a function of v_T for λ_B=1 and a few choices of m_B. 
In the red-shaded region, CP-violating higher dimensional operators such as G_TG_TG_T, where G_T is the SU(2)_T gauge field strength, can introduce too large a strong CP phase <cit.>, assuming the cutoff scale around the Planck scale. [ We assume that any combinatoric factors arising from the color and Lorentz indices are absorbed into the definition of the cutoff scale, M_ UV, of these operators.] One can see that the observed bottom Yukawa coupling can be obtained for α_T(v_T)≲ 1 and v_T≫Λ_T, for which the instanton computation is reliable. Note that the same mechanism with vector-like quarks can be applied to the model in Ref. <cit.> to obtain a sufficiently large bottom, strange, and up Yukawa couplings from the top, charm, and down Yukawa couplings, respectively. To obtain the CKM mixing, q_3 must couple to the second and first generation right-handed quarks. This can be achieved by the following coupling L = y^B_a B d̅_aΦ_3 + h.c., where a=1,2 is the flavor index and Φ_3 is the SU(3)_T× SU(3) breaking Higgs field. The phases in y_a^B cannot be removed and a particular linear combination of them is responsible for the CKM phase. After integrating out the vector-like fermions, we obtain generational mixing, L = -λ_B y^B_a v_T/m_Bq_3 d̅_a H+ h.c. . For m_B ∼ TeV, v_T∼ 10^13 GeV and λ_B y^B_a∼ 10^-12, the CKM mixing of the third generation with the first two generations can then be explained. §.§.§ Extra SU(2) We embed SU(2)_L into SU(2)_FS× SU(2)_T where the first two generations of left-handed fermions are charged under SU(2)_FS while the third generation left-handed fermions are charged under SU(2)_T. The product group SU(2)_FS× SU(2)_T is then broken down to SU(2)_L by the VEV of a pseudo-real bifundamental Higgs Φ_2. Furthermore, there is an SU(2)_FS doublet Higgs H_FS and SU(2)_T doublet Higgs H_T whose VEVs give masses to the first two generations of fermions and the third generation, respectively. The Lagrangian is given by L= - Y^u_3aΨ_q_3Ψ_u̅_a H_T - Y^d_3aΨ_q_3Ψ_d̅_aH_T - Y^u_iaΨ_q_iΨ_u̅_a H_FS - Y^d_iaΨ_q_iΨ_d̅_aH_FS + h.c., where i=1,2 and a=1,2,3. The vector-like fermion can have Yukawa interactions with H_T and/or H_FS depending on their gauge charges. The two Higgses also mix with each other via the interaction L = A Φ_2 H_FS H_T^† + h.c. The coupling A may be taken real by the phase rotation of H_T, so that the VEVs of H_T and H_FS are real. Besides the CKM phase, there remain complex phases in the Yukawa couplings in Eq. (<ref>) which can induce corrections to the strong CP phase. These will be considered in section <ref>. We assume that SU(2)_FS× SU(2)_T symmetry breaking occurs below the Sp(2N) confinement scale and that the SU(2)_T gauge coupling g_2,T is much larger than the SU(2)_FS coupling g_2,FS. Then the (2,2) entry of the mass-squared matrix in Eq. (<ref>) receives a large positive correction ∼ g_2,T^2 ^2/(16π^2). When g_2,T^2 > 4π√(N) y_3, the small electroweak scale is achieved by requiring m_H^2 to be smaller than the (2,2) entry, so that the Higgs field which couples to the third generation below the confinement scale is dominantly H_T, and the top Yukawa is y_3. This case does not require large y_3, but requires large g_2,T. An SU(2) gauge theory with the number of flavors between 6 and 11 is considered to be in the conformal window <cit.>. In our case, this implies that 12 < 2N+4≤ 22, since we have 2N+3 grand-color copies of Ψ_q_3 as well as the third generation SM lepton doublet. 
Thus, SU(2)_T can flow into a conformal fixed point above the Sp(2N) confinement scale for 4<N≤ 9. Near the lower edge of the window, the fixed point value of g_2,T is large. Below the confinement scale, since the number of SU(2)_T charged fields decreases, the SU(2)_T gauge coupling increases, causing SU(2)_T to eventually confine. This means the SU(2)_FS× SU(2)_T symmetry breaking should occur before SU(2)_T confines. Therefore, since g_2,T at is required to be large, the SU(2)_FS× SU(2)_T symmetry breaking scale should be just below . When g_2,T^2 < 4π√(N) y_3, the Higgs field that couples to the third generation below the confinement scale is dominantly Π^H and the top Yukawa coupling is approximately g_2,T^2 / (4π√(N)). The required value of g_2,T^2 is the same as the g_2,T^2 > 4π√(N) y_3 case with the bound saturated. §.§ Corrections to the strong CP phase We next discuss possible corrections to the strong CP phase that occur in the three-generation models. The quantum correction below the Sp(2N) confinement scale due to the CKM phase arises at the seven-loop level and therefore is negligible <cit.>. However, there can be threshold corrections near the Sp(2N) confinement scale, . These corrections are model dependent and nontrivially depend on m_U. To simplify the analysis we consider a case where the SM Yukawa coupling of the quarks q, u̅, d̅ is non-zero above the confinement scale and a new massless (m_U=0) Dirac fermion U and U̅ obtains a mass by the Sp(2N) dynamics. A more general analysis for all m_U is given in appendix <ref> where we explicitly show that the corrections to the strong CP phase remain small by constructing the possible flavor invariant combinations that can appear in the corrections. Note that U̅ and u̅_1 in this subsection correspond to u̅_1 and U̅ in section <ref> with m_U=0, respectively. Let us for now ignore the issue of a large electroweak symmetry breaking scale and consider a three-generation model without any extra gauge interactions. A non-zero strong CP phase can arise from the VEVs of the Sp(2N) pions, which are determined by the Yukawa couplings. The pion interactions from the Yukawa couplings are given by - √(N)/4π^2 Y^u_i aΠ_q_i u̅_aH - √(N)/4π^2Y^d_i aΠ_q_i d̅_aH + h.c., where Π_XY is the pion containing the Sp(2N) fermions ψ_X and ψ_Y. The Π_qd̅ pion is related to Π_qu̅ by the quark condensation, Π_q_i d̅_b = ψ_q_iψ_q_j/ψ_d̅_bψ_u̅_a/Π^†_q_j u̅_a. Note that ψ_U and ψ_U̅ do not have Yukawa couplings and hence do not form condensates with ψ_u̅,d̅. The Higgs-pion loop then gives rise to a potential V ≃N ^4/(16π^2)^2 Y^u_iaY^d_jbψ_q_iψ_q_j/ψ_d̅_bψ_u̅_a/ + h.c. = - N ^4/(16π^2)^2 Tr[Y^u Σ_u̅d̅ (Y^d)^T Σ_q]e^i θ_η + h.c. , where Σ_u̅d̅ and Σ_q are the non-linear sigma fields corresponding to the flavor symmetries SU(3)_u̅× SU(3)_d̅/SU(3) and SU(3)_q/SO(3), respectively, and SU(3)_u̅,d̅,q are the flavor symmetry groups of the SM quarks u̅, d̅, and q. The condensation of ψ_Uψ_U̅ enters via the pion field θ_η that corresponds to the U(1) symmetry ψ_q_i(1), ψ_u̅_a(1), ψ_d̅_a(1), ψ_U(-6), ψ_U̅(-6). The potential (<ref>) determines the alignment of the neutral pions. If θ_η obtains a non-zero VEV, the mass term of UU̅ generated by ψ_U ψ_U̅ will obtain a complex phase. This potential was numerically minimized in Ref. <cit.>, which found that the VEV of θ_η remains nearly zero and is suppressed by small Yukawa couplings and the CKM mixing angles. 
Therefore, in our setup the correction to the strong CP phase will also be smaller than the experimental upper bound θ̅≲ 10^-10. We next discuss the extensions of the model that include an extra SU(3) or SU(2) gauge group. In the SU(3) extension, only the first and second generations have Sp(2N) charged fermions. Thus, we can remove the phases of the Yukawa couplings of the first two generations, and the Sp(2N) dynamics does not generate any new phase. There are non-zero phases in Eq. (<ref>), but they do not generate the strong CP phase at leading-order. This is because the down Yukawa coupling matrix obtained from (<ref>) and (<ref>) is given by Y^d = [ Y^d_11 Y^d_12 0; Y^d_21 Y^d_22 0; λ_B y_1^Bv_T/m_B λ_B y_2^Bv_T/m_B y_b ] , where y_b is generated from SU(3)_T instantons. The determinant of Y^d does not depend on the complex parameters y_1,2^B and is real. There can be higher-order threshold corrections when B,B̅ are integrated out, but the corrections are suppressed by the smallness of the Yukawa couplings and CKM mixings, similar to the SM corrections. In the SU(2) extension, assuming m_H_ FS≫, we can integrate out H_ FS to obtain the effective theory with Lagrangian L= - Y^u_3aΨ_q_3Ψ_u̅_a H_T - Y^d_3aΨ_q_3Ψ_d̅_aH_T - A Φ_2/m_H_FS^2Y^u_iaΨ_q_iΨ_u̅_a H_T -A Φ_2/m_H_FS^2Y^d_iaΨ_q_iΨ_d̅_aH_T + h.c. , where i=1,2 and a=1,2,3. The contribution from the corrections where Φ_2 is treated as a background field with a VEV is equivalent to the case without any SU(2) extension by identifying Y_3a^u,d→ Y_3a^u,d and A Φ_2Y_ia^u,d/m^2_H_FS→ Y_ia^u,d, and does not generate too large a strong CP phase. However, the corrections with a dynamical Φ_2 field can be different. In fact, when the Φ_2 legs are closed into a loop instead of taking Φ_2 VEVs, we obtain A^2Y^u_iaY^d_jb/16π^2 m_H_FS^4 ∼ Y^u_ia Y^d_jb/ ^2, which leads to a factor of ^2/(16π^2 Φ_2^2) in comparison with the VEV contribution. Since Φ_2≲, the loop contribution is smaller than the VEV contribution. Although the total pion potential is different from the case without an SU(2) extension, the strong CP phase should remain small. This can be seen by taking the basis where Ψ_q_1,2 couples only to Ψ_u̅_1,2 and Ψ_d̅_1,2 with real Yukawa couplings and Y^u,d_33 are real. In this basis, the determinant of the SM Yukawa couplings is real and the strong CP phase is proportional to θ_η. Since Φ_2 only couples to Ψ_q_1,2, the extra corrections from the Φ_2 loop only modifies the product of the real Yukawa couplings in Eq. (<ref>). Furthermore, the Φ_2 loop correction is subdominant in comparison with the VEV contribution, and therefore the VEV of θ_η should remain small. §.§ Phenomenology In this section we discuss the phenomenological implications of our scenario. Recall that since the third generation fermions are treated differently from the first two generations, we can still employ the mechanisms discussed in section <ref> to generate the up quark mass (or the mass of new vector-like fermions). In particular, the spectra of pions and vector-like quarks will have interesting experimental consequences. §.§.§ Pion spectrum In section <ref>, vector-like quarks were introduced with a mass term m_U. The Sp(2N) dynamics generates a mixing between the vector-like quark and u̅_1, given by m_Uu. The Sp(2N) dynamics spontaneously breaks the flavor symmetry, giving rise to a pion spectrum. For both m_U=0 and m_U≠ 0, the pion corresponding to the spontaneous breaking of the baryon symmetry remains massless and does not couple to the photon or gluon. 
We discuss the possibility of this pion being the dark matter in section <ref>. The next-to-lightest pions may also have phenomenological implications. We first discuss the case with m_U=0, for which they may be below the electroweak scale. There are three next-to-lightest pions, which are associated with the symmetries restored in the limit y_u,d→ 0, have masses m_Π_ NL∼√(y_u y_d) /(4π)∼ 10^-6 (see Eq. (<ref>) with λ_U → y_u), but two of them do not have anomalous couplings to SM gauge bosons. One of them, corresponding to the U(1) symmetry Ψ_U̅(1), Ψ_d̅_1(1), Ψ_U(-1), and Ψ_u̅_1(-1), appears in the Yukawa and mass terms of u̅_1, d̅_1, U, and U̅ via the four-fermion operators in Eq. (<ref>). Performing a phase rotation of Uu̅_1 then generates anomalous couplings to photons and gluons.[It may appear that since the U(1) symmetry, which is a part of an anomaly-free flavor symmetry, does not have an Sp(2N) or SU(3)_c anomaly, the corresponding pion does not couple to gluons. However, the shift symmetry is explicitly broken by the Yukawa interactions and consequently, the pion interactions cannot be determined purely by a symmetry argument. ] Note that the contributions from the phase rotation of U̅ and d̅_1 are negligible, since the up and down Yukawa couplings are dominated by the Yukawa couplings that already exists before the Sp(2N) confinement. For ∼ 10^6-7 GeV, this pion has a mass ∼ 1-10 GeV and the couplings with photons and gluons are suppressed by f ≃√(N)/(4π) ∼ 10^6-7 GeV. Such a particle can be discovered by axion-like particle searches at DUNE <cit.>. It is important to note that this pion can also be considered as a “heavy QCD axion" in the following sense. Since Ψ_U,Ψ_u̅_1 are massless, the theory has an anomalous PQ symmetry, which can remove the strong CP phase. The PQ symmetry is then spontaneously broken by the Sp(2N) dynamics yielding a NGB that corresponds to the phase direction of ψ_Uψ_u̅_1 and couples to the SU(3)_c gluon. Because of the explicit PQ breaking by the Sp(2N) anomaly, the NGB appears to obtain a mass solely by the Sp(2N) dynamics. However, a linear combination of the PQ and quark chiral symmetry does not have an Sp(2N) anomaly, but is instead explicitly broken by the Yukawa interactions, giving rise to a pseudo-NGB with mass ∼√(y_u y_d) /(4π) rather than . This state, which may be referred to as a “heavy QCD axion", is a mixture of the phase direction of ψ_Uψ_u̅_1 and the Sp(2N) η' meson. Our setup can be compared with the conventional light QCD axion models that have a similar feature where the light axion state is also a mixture of the NGB resulting from the spontaneous PQ breaking and the SU(3)_c η', η, and π^0 mesons. A slight difference, however, is that in light QCD axion models, the spontaneous PQ symmetry breaking occurs by dynamics not related to the explicit PQ breaking dynamics, while in our setup, the spontaneous and explicit PQ breaking dynamics are unified and occur simultaneously at the same scale. In <cit.>, instead of introducing massless fermions Ψ_U,Ψ_u̅_1, there is a NGB that couples to the SU(2N+3) gauge field, where the associated PQ symmetry is spontaneously broken at a scale f_a, not related to the SU(2N+3) strong dynamics. The heavy QCD axion obtains a mass ∼√(y_u y_d)/(4π) × f/f_a. In the limit f_a = f, this QCD axion has a similar property to our pion. For m_U≠ 0, the next-to-lightest pions receive masses from the m_U mass term that are at least as large as O (√(m_U )). 
As m_U is increased to be ≳, the lightest pion associated with the U(1) symmetry Ψ_U̅(1), Ψ_d̅_1(1), Ψ_U(-1), and Ψ_u̅_1(-1) becomes the Sp(2N) η' since the state behaving as a “heavy QCD axion" decouples. This is a formal limit, but mimics the massless up quark solution in QCD, where the SU(3)_c η' would be the lightest neutral meson in the formal limit of decoupling the down quark. Thus, even with m_U∼ TeV, it might be possible to discover the η'-like pions in this “massless up-quark limit". For large m_U, the next-to-lightest pions are associated with the breaking SU(2)_u× SU(2)_d → SU(2), where SU(2)_u and SU(2)_d act on Ψ_u̅_1,2 and Ψ_d̅_1,2, respectively, and the breaking of a generation non-universal U(1) baryon symmetry under which Ψ_q_1(1), Ψ_q_2(-1), Ψ_u̅_1(-1), Ψ_d̅_1(-1), Ψ_u̅_2(1), and Ψ_d̅_2(1). The masses of the former are approximately √(y_c y_s)/4π∼ 10^-4. One of them, associated with a U(1) subgroup Ψ_u̅_1(1), Ψ_d̅_1(1), Ψ_u̅_2(-1), Ψ_d̅_2(-1), couples to the gluon and photon. Note that dynamical scales of ≲ 10^4 GeV are excluded by beam-dump experiments (see <cit.> for a summary) while ∼ 10^4-10^5 GeV can be probed by the LHC <cit.>. The mass of the latter is further suppressed by √(sinθ_c), which is only an O(1) factor. Two of these next-to-lightest pions have flavor-violating couplings to the first two generation quarks, which are constrained by the rare decay of K and D mesons if the next-to-lightest pions are lighter than them. Since D mesons are heavier, we discuss the constraint from the decay of D into the next-to-lightest pions, which is kinematically allowed if ≲ 10^4 GeV. The flavor-violating coupling can arise from the following two contributions. One is due to the four-fermion interaction q_2u̅_1 ψ_q_2ψ_u̅_1 in Eq. (<ref>) which induces a Yukawa coupling between q_2, u̅_1, and Π_q_2 u̅_1. The scalar Π_q_2 u̅_1 couples to H and Π_u̅_1d̅_2 via y_s, and after electroweak symmetry breaking, mixes with Π_u̅_1d̅_2. A second contribution arises from the coupling u̅_2^†σ̅^μu̅_1 ψ_u̅_1^†σ̅_μψ_u̅_2, where ψ_u̅_1^†σ̅_μψ_u̅_2 can be identified as f ∂_μΠ_u̅_1d̅_2. We find that the former contribution dominates and the Yukawa coupling between q_2, u̅_1, and Π_u̅_1d̅_2 is g_ GC^2/M_ GC^2(4π)^2 m_c f. The rare D meson constraints derived in <cit.> requires (4π)^2 g_ GC^2 f / M_ GC^2 < 10^-8 GeV^-1. With this lower bound on M_ GC, we cannot obtain a sufficiently large quark mass for < 10^4 GeV; m_Uu in Eq. (<ref>) is below 1 TeV, and the up Yukawa coupling in Eqs. (<ref>) is smaller than the observed value. We conclude that ≳ 10^4 GeV is required, so that the next-to-lightest pions are heavier than D meson. §.§.§ Long-lived relics When the vector-like quarks introduced in section <ref> are massless (m_U=0), the theory possesses an accidental U(1) symmetry with charges Ψ_U(1) Ψ_u̅_1(-1), giving rise to long-lived particles. The lightest U(1) charged particles are the quarks U and u̅_1 with a mass m_Uu or a pion, Π_UU̅ made from ψ_U and ψ_U̅, with a mass m_Π_ NL∼√(y_u y_d)/(4π)∼ 10^-6. For m_Uu< 10^-6, the quarks U and u̅_1 are stable. They will be abundantly produced in the early universe by the SU(3)_c gauge interaction and will form bound states with the up quark. The bound states have strong interactions with nucleons, and such particles might be excluded by direct detection experiments <cit.>. [ Furthermore, these states can be bound with protons to become electromagnetic charged states, which are subject to even stronger constraints <cit.>. 
] The vector-like quarks u̅_1 and U may decay via a higher-dimensional operator Ψ_U Ψ_U̅ (Ψ_U Ψ_u̅_1)^†/M_ UV^2 or Ψ_U Ψ_U̅Ψ_U Ψ_u̅_1/M_ UV^2. This operator induces a tadpole term of Π_UU̅, which mixes U̅ with u̅_1 via the four-fermion operator in Eq. (<ref>). The mixing angle is θ_U̅u̅_1∼N/(4π)^2^4/M_ UV^2 m_Π_ NL^2∼ 10^-4(/10^9  GeV)^2 (10^16  GeV/M_ UV)^2 N/4. The decay rate of U and u̅_1 is 0.1 θ_U̅u̅_1^2y_u^2 m_Uu. The decay occurs before BBN if θ_U̅u̅_1 > 10^-8( TeV/m_Uu)^1/2, which requires M_ UV < 10^18  GeV(/10^9  GeV) (m_Uu/ TeV)^1/4(N/4)^1/2. Hence, for m_Uu> TeV corresponding to > 10^9 GeV, the Planck-suppressed operator leads to rapid enough decay of U and u̅_1. Note that the mixing angle (<ref>) is enhanced because of the smallness of m_Π_ NL. Therefore, operators involving the second generation quarks such as Ψ_U Ψ_u̅_2 (Ψ_U Ψ_u̅_1)^†/M_ UV^2 lead to smaller decay rates because of the larger mass of the pion made of ψ_U and ψ_u̅_2 is proportional to √(y_c y_s). For m_Uu> 10^-6, Π_UU̅ is stable. Unless the reheating temperature of the universe is much below f, the pion will be abundantly produced from the thermal bath and over-close the universe. It can decay via the operator Ψ_U Ψ_U̅ (Ψ_U Ψ_u̅_1)^†/M_ UV^2 or Ψ_U Ψ_U̅Ψ_U Ψ_u̅_1/M_ UV^2, which mixes U with U̅, and the Π_UU̅`-UU̅ interaction arising from the four-fermion operator in Eq. (<ref>). The resulting interaction is L∼√(N)/4π^3/M_ UV^2 m_Π_ NL^2∂_μΠ_UU̅U̅^†σ̅^μU̅ + h.c., and Π_UU̅ can decay into a pair of up quarks. Requiring that the decay occurs before BBN, we obtain M_ UV < 10^14  GeV(/10^7  GeV)^3/4(N/4)^1/4. Thus, a sufficiently rapid decay of Π_UU̅ occurs when M_ UV is much below the Planck scale. §.§ Accidental chiral symmetry The third generation model in section <ref> assumes that there are two massless SM quarks u̅_1 and d̅_3, associated with two anomalous chiral symmetries. These anomalous symmetries may arise accidentally due to an exact symmetry that can be promoted to a gauge symmetry. In the two generation model of section <ref> we introduce a ℤ_2 symmetry which can accidentally realize the chiral symmetry responsible for a massless quark. We first discuss the quality of this symmetry assuming m_U=0, for which there exists light pions and θ̅ can be more easily shifted by higher-dimensional operators. We impose a ℤ_2 symmetry on Ψ_U, which can be exact if an extra vector-like quark pair Ψ_D and Ψ_D̅ is added with an odd ℤ_2 charge assigned to one of the quarks. These charge assignments allow a higher-dimensional operator (Ψ_U Ψ_u̅_1)^2/M_ UV^2 whose coefficient is generically complex. This operator generates a tadpole term for the heavy QCD axion which then gives a complex mass to U and u̅_1, generating a non-zero θ̅: θ̅∼N ^4/M_ UV^2 m_Π_ NL^2. Requiring θ̅<10^-10, we obtain M_ UV≳ 10^19  GeV(/10^9  GeV) (N/4)^1/2. For M_ UV near the Planck scale, < 10^8 GeV is required. A large scale M_ UV in Eq. (<ref>), however, is in tension with the requirement on the suppression scale from long-lived relics in Eq. (<ref>). This requires either a low reheating temperature of the universe so that the long-lived relics are not abundantly produced, or more symmetry should be introduced to control higher-dimensional operators. For example, there can be a flavor symmetry under which Ψ_U̅ and Ψ_u̅_1 have the same charge, so that the operator Ψ_U Ψ_U̅ (Ψ_U Ψ_u̅_1)^† is allowed while (Ψ_U Ψ_u̅_1)^2 is not. Alternatively, instead of ℤ_2, we may impose a ℤ_3 symmetry on Ψ_U, which forbids (Ψ_U Ψ_u̅_1)^2. 
The ℤ_3 symmetry can be exact if one more extra massless vector-like quark pair is introduced. The ℤ_3 symmetry then also solves the domain wall problem as discussed in section <ref> and <ref>. For m_U≠ 0, we may construct a ℤ_2 symmetric model without stable particles at the renormalizable level. For example, we introduce two vector-like quark pairs Ψ_U,Ψ_U̅ and Ψ_D,Ψ_D̅ with Dirac mass terms m_UΨ_UΨ_U̅ + m_Dd̅_iΨ_DΨ_d̅_i, and impose an anomaly-free ℤ_2 symmetry on Ψ_u̅_1 and Ψ_D̅. Note that we impose an odd ℤ_2 charge on Ψ_D̅ rather than on Ψ_D, so that the Dirac mass terms of Ψ_D,Ψ_d̅_i are allowed. In this setup, there is no unbroken symmetry that can prevent the decay of new particles and therefore all new particles are unstable. The up and down Yukawa couplings are generated in the same way as in section <ref>, as long as the masses m_Dd̅_i are sufficiently small, otherwise a reanalysis of the vacuum structure is required. There exists a domain wall problem, as discussed in Secs. <ref> and <ref>, which means that the reheating temperature should be below , or the model should be extended with a ℤ_3 symmetry. Finally, for the third generation sector of the model in section <ref>, it is possible to understand the anomalous chiral symmetry of d̅_3 as an accidental symmetry, by imposing a U(1) symmetry under which d̅_3 has charge k. Furthermore, we introduce k pairs of vector-like fermions D and D̅ with U(1) charges 0 and -1, respectively. This U(1) symmetry does not have an SU(3)_T anomaly and is an exact symmetry. It is also possible to promote this U(1) global symmetry to a gauge symmetry by adding extra SU(3)_T neutral and U(1) charged fermions to cancel the U(1)^3 gauge anomaly. The U(1) symmetry is then assumed to be broken by the VEV of an operator O, with U(1) charge +1, to give masses to D and D̅ via the coupling O D D̅. The coupling or mass terms containing d̅_3 instead require couplings with O^k†, so for sufficiently large k, the chiral symmetry of d̅_3 is maintained to be of good quality. For example, when O is a fundamental scalar field and ⟨ O⟩∼ 10^14 GeV, the operator ( O^†/M_ Pl)^k q_3 d̅_3 H can be sufficiently suppressed by requiring k>2. In this way, the chiral symmetry of d̅_3 can be understood as an accidental symmetry in the low energy theory. § OTHER TYPES OF MODELS So far we have considered the simplest case where the chiral symmetry arises from a massless quark that obtains a mass by grand-color strong dynamics via the exchange of heavy gauge bosons. In this section, we discuss other classes of models that contain extra Higgses charged under the chiral symmetry or fermions in higher representations of SU(2N+3). §.§ Extra Higgses We first consider models that contain extra Higgs fields. In particular, we impose a chiral symmetry on an extra Higgs doublet as well as some of the quarks as in the Weinberg-Wilczek model <cit.>. The quarks charged under the chiral symmetry do not couple to the SM Higgs and only couple to the extra Higgs doublet. The Sp(2N) dynamics spontaneously (and explicitly) breaks the chiral symmetry and generates a mixing between the Higgs doublets. After electroweak symmetry is broken by the SM Higgs, a VEV is induced for the extra Higgs which generates masses for the chirally-charged quarks. In this setup, the extra Higgs doublet mediates the chiral symmetry breaking from Sp(2N) to SU(3)_c, so M_ GC does not have to be close to and hence N does not have to be large. 
Instead, the extra Higgs should be sufficiently light since the VEV of the extra Higgs is inversely proportional to its mass squared. The light extra Higgses will introduce another Higgs mass fine-tuning problem which should be eventually addressed, e.g., by supersymmetry or anthropic requirements on the Yukawa couplings. The model still suffers from having too large an electroweak scale by Higgs-pion mixing, but the SU(3) or SU(2) extension can fix this problem. For the SU(3) extension, instead of forbidding the bottom Yukawa coupling, we can in total introduce three Higgs doublets H, H_b, and H_u and impose two chiral symmetries, U(1)_b:  d̅_3(1),  H_b(-1), U(1)_u:  Ψ_u̅_1(1), H_u(-1), to remove the strong CP phases of SU(2N+3) and SU(3)_T. The Yukawa interactions are L = - y_t q_3 u̅_3 H - ỹ_b q_3 d̅_3 H_b - Y^u_iaΨ_q_iΨ_u̅_a H - Y^d_iaΨ_q_iΨ_d̅_aH -ỹ^u_i Ψ_q_iΨ_u̅_1 H_u + h.c. , where Y^u_i1=0, i=1,2 and a=1,2,3. The SU(3)_T instanton effect generates the mixing H H_b (see figure <ref>), while the Sp(2N) dynamics generates a mixing H_u H. After H obtains a VEV, these mass mixing terms induce VEVs for H_b and H_u. The CKM mixing arises from introducing vector-like fermions B,B̅ charged under SU(3)_T that couples to q_3 H and mix with d̅_1,2 by SU(3)^2 breaking into SU(3)_c. Other variants of the model are possible. For example, we can eliminate H_b and generate the bottom Yukawa by SU(3)_T instantons as in section <ref>. Alternatively, instead of imposing a chiral symmetry on Ψ_u̅_1, we can impose a chiral symmetry on Ψ_d̅_1 and introduce H_d that couples to Ψ_d̅_1. For the SU(2) extension, instead of a single H_FS, we introduce H_FS and H_FS,u and impose a chiral symmetry with the following charge assignment, Ψ_u̅_1(1), H_FS,u(-1). The Sp(2N) dynamics then generates the mixing H_FS^† H_FS,u. A VEV of H_FS induces a VEV of H_FS,u to generate the up quark mass. We may also construct a model analogous to the model in <cit.> by introducing a pair of vector-like quarks Ψ_Q,Ψ_Q̅ and a complex scalar field S with a U(1) chiral symmetry Ψ_Q(1), Ψ_Q̅(0), S(-1). These fields couple to each other via a Yukawa coupling L= - λ_S Ψ_Q Ψ_Q̅S + h.c. = - λ_S ψ_Q ψ_Q̅S -λ_S Q Q̅S + h.c. , where Ψ_Q=(Q, ψ_Q) (and similarly for Ψ_Q̅) are the grand-color multiplets. With the U(1) chiral rotation, the strong CP phase can be removed and is unphysical. The Sp(2N) dynamics generates a ψ_Q ψ_Q̅ condensate, which then induces a VEV of S and hence a non-zero mass for Q and Q̅. With extra Higgses, it is also possible to utilize instanton effects to generate sufficiently large quark masses. For example, let us impose a chiral symmetry on the right-handed up quark and introduce an extra Higgs that is charged under the chiral symmetry. The up quark couples to the extra Higgs while the other quarks couple to the SM Higgs. The gauge group is SU(N+3), which breaks down to SU(N)× SU(3), and SU(N) is further Higgsed down. The SU(N) instanton effect generates the mixing between the Higgses proportional to the product of the Yukawa couplings. For a sufficiently large SU(N) symmetry breaking scale, a large SU(N) gauge coupling at that scale, and a light extra Higgs, the observed up Yukawa can be obtained. These type of scenarios have the advantage that the difficulty pointed out in section <ref> is absent. 
§.§ Higher fermion representations If all massless SU(3)_c charged particles are also charged under G_c', as in the model <cit.> with bi-fundamental fermions, the chiral symmetry of these fermions can be directly broken by the G_c' dynamics. In our SU(2N+3) grand-color model, this can be achieved by the rank 3 anti-symmetric tensor representation of SU(2N+3) and its anti-representation, which is similar to the rank 3 anti-symmetric tensor representation of SU(6) considered in <cit.>. In this type of model, there is no need for large N or extra Higgses. However, because the anti-symmetric rank 3 tensor representation contributes significantly to the β-function of the gauge coupling, it remains to be shown that G_c' exhibits chiral symmetry breaking, rather than flowing to a conformal fixed point. The contribution of the anti-symmetric three tensor representation to the β-function is Δ b = -(2N)(2N+1)/3. Adding the contribution from the Sp(2N) gauge boson and the three generations, Sp(2N) is asymptotically free if N=1 or 2 (assuming the extra SU(3)_T model to remove the third generation fermions does not change the range of N). The value N=1 leads to < Λ_ QCD and does not work. Instead, N=2 can have > Λ_ QCD, but given that the theory is near the boundary between asymptotically free and non-free, it seems plausible that Sp(2N) does not exhibit chiral symmetry breaking and rather flows into a conformal fixed point. § DARK MATTER In this section we discuss the possibility that the lightest pion arising from the Sp(2N) confinement can provide the missing dark matter component of the Universe. An interesting feature of Sp(2N) strong dynamics is that there are no stable Sp(2N) baryons and most of the Sp(2N) pions are unstable, except for one; in all models, the SM baryon symmetry is spontaneously broken by the Sp(2N) dynamics. The baryon symmetry can be explicitly broken by introducing a small coupling between the SU(2N+3) quark and the grand-color breaking field. This generates a tiny mass for the corresponding NGB, which can then be dark matter. The dark matter can be produced in the early universe from the thermal bath. The dark matter couples to hypercharge and SU(2)_L gauge bosons through the anomaly of the baryon symmetry. When the reheating temperature T_ RH is above the weak scale, the dark matter is dominantly produced at T=T_ RH with a rate ∼ g_2^2(α_2/4π)^2 T^3 / (8π f^2), where g_2 is the SU(2)_L gauge coupling. If T_ RH is below the weak scale, the production still dominantly occurs at T=T_ RH but via scattering with a photon and an off-shell Z boson exchange. The rate is suppressed by a factor of (T_ RH/m_Z)^4, where m_Z is the Z boson mass. The resultant dark matter abundance is ρ_ DM/s≃ 0.4  eV( 10^10  GeV/f)^2 (m_ DM/1  MeV) (T_ RH/10^3  GeV) min(1, (T_ RH/m_Z)^4) , where m_ DM is the dark matter mass. Note that dark matter is sufficiently cold for m_ DM≳ 25 keV <cit.>. The dark matter can also be produced via the misalignment mechanism <cit.>, which gives ρ_ DM/s≃ 0.4  eV( f/10^12  GeV)^2 (T_ RH/10^4  GeV) θ_i^2 min(1, √(m_ DM M_ Pl)/T_ RH), where θ_i is the initial misalignment angle. The first/second case in “min" corresponds to the beginning of oscillations before/after the completion of reheating. The parameter space is constrained by indirect detection experiments. 
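Before turning to those constraints, the abundance estimates above can be read off numerically. The short script below is only an illustrative sketch (not part of the analysis): the values of f, m_DM and T_RH are assumptions chosen for illustration, θ_i is set to unity, M_Pl is taken as the reduced Planck mass, and the observed abundance is taken to be ρ_DM/s ≈ 0.44 eV.

```python
import math

M_PL = 2.4e18                 # reduced Planck mass [GeV] (assumed convention)
M_Z = 91.19                   # Z boson mass [GeV]
RHO_DM_OVER_S_OBS = 0.44e-9   # observed dark matter abundance rho_DM/s [GeV]

def thermal_abundance(f, m_dm, t_rh):
    """rho_DM/s [GeV] from production off the thermal bath (estimate quoted above)."""
    return (0.4e-9 * (1e10 / f)**2 * (m_dm / 1e-3) * (t_rh / 1e3)
            * min(1.0, (t_rh / M_Z)**4))

def misalignment_abundance(f, m_dm, t_rh, theta_i=1.0):
    """rho_DM/s [GeV] from the misalignment mechanism (estimate quoted above)."""
    return (0.4e-9 * (f / 1e12)**2 * (t_rh / 1e4) * theta_i**2
            * min(1.0, math.sqrt(m_dm * M_PL) / t_rh))

# illustrative parameter point: f = 1e10 GeV, m_DM = 1 MeV, T_RH = 1e3 GeV
f, m_dm, t_rh = 1e10, 1e-3, 1e3
total = thermal_abundance(f, m_dm, t_rh) + misalignment_abundance(f, m_dm, t_rh)
print(f"rho_DM/s = {total:.2e} GeV  (observed: {RHO_DM_OVER_S_OBS:.2e} GeV)")
```

For this particular point the thermal contribution saturates the observed value and misalignment is negligible; other choices of (f, m_DM, T_RH, θ_i) shift the balance between the two contributions.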
For the dark matter mass below the weak scale, the dark matter can decay into a photon and a pair of SM fermions via off-shell Z boson exchange with a rate Γ≃g_2^2/128π^3(e g_2/16π^2)^2 m_ DM^7/f^2m_Z^4, where e is the electromagnetic gauge coupling. For m_ DM=0.1-10^4 MeV, which is the relevant mass range for setting limits, indirect-detection constraints from X-ray and gamma ray observatories require Γ^-1≳ 10^27 sec <cit.>. In Fig. <ref>, we show the bound on (m_ DM,f). For a given reheating temperature, (m_ DM,f) on each contour can explain the observed dark matter abundance. On the horizontal and negatively sloped segments of the contours, the misalignment mechanism determines the dark matter abundance, with the oscillation of dark matter begins before and after the completion of reheating, respectively. On the positively sloped segments, scattering from the thermal bath determines the abundance. Therefore, on the positively sloped segments of the contours, only m_ DM> 25 keV provides sufficiently cold dark matter. The blue-shaded region is excluded by indirect-detection experiments. § DISCUSSION AND SUMMARY In this paper, we have proposed a new massless quark solution to the strong CP problem by embedding the QCD group, SU(3)_c, into a larger, simple (grand-color) gauge group. The chiral symmetry of the massless quark makes the θ term unphysical and the grand color gauge interaction preserves the CP symmetry. The chiral symmetry can be realized as an accidental symmetry at low energies, arising from an exact discrete symmetry at UV scales. The grand color gauge group is then spontaneously broken down to SU(3)_c × G_c'. The G_c' strong dynamics exhibits chiral symmetry breaking, which is transferred to the SU(3)_c charged fermions by dimension-six, four-fermion operators generated by the exchange of heavy gauge bosons, and the massless quark obtains a non-zero mass. Furthermore, we showed that the strong CP phase of SU(3)_c remains below the experimental upper bound after including quantum corrections from the Yukawa interactions. In the simplest class of models, the electroweak vacuum becomes unstable due to a mixing between the Higgs and a NGB leading to a tachyonic direction. This problem can be avoided in several ways such as by introducing a mass term from grand-color breaking, introducing supersymmetry, or extending the gauge group. The first solution requires a non-trivial flavor structure of the mass term, while the supersymmetric case will be analyzed in future work. Instead, we focused on extending the gauge group, and considered two possibilities. In the first model, the third-generation fermions are charged under an SU(2)_T gauge symmetry with a large gauge coupling where quantum corrections proportional to the gauge coupling stabilizes the tachyonic direction. The first and second generation fermions are charged under a different SU(2)_FS gauge symmetry with a weak coupling and the two SU(2)'s are broken down to SU(2)_L. In the second model, the third generation fermions are charged under SU(3)_T and not under SU(2N+3). This means there are no Sp(2N) charged partners of the third generation quarks and the tachyonic direction is absent. The SU(3)_T and SU(3)⊂ SU(2N+3) groups are then broken down to SU(3)_c. The possible strong CP phase from the extra SU(3) is removed because of a massless bottom Yukawa coupling at UV scales. The bottom Yukawa coupling is generated by SU(3)_T instantons. 
We also considered a class of models with extra Higgses that are charged under the chiral symmetry. The chiral symmetry breaking in the Sp(2N) dynamics is communicated to SU(3)_c-charged fermions by the Higgses, so large N is not necessary. Another possibility is a model where all SU(3)_c massless fermions are also charged under Sp(2N), so that the chiral symmetry breaking in Sp(2N) is directly communicated to SU(3)_c. It remains, however, to be shown that the Sp(2N) dynamics breaks chiral symmetry, rather than the theory flowing to a conformal fixed point. Even though our massless quark solution has no light axion there may still be phenomenological signals. In some models, one of the NGBs has a mass ∼ 10^-6 that couples to gluons and photons. For ∼ 10^6 GeV, the mass is ∼ 1 GeV and may be discovered at DUNE. In fact, this pseudo-NGB behaves as a “heavy QCD axion" that arises from the mixing between one of the NGBs and the Sp(2N) η'. This is similar to what occurs in usual QCD axion models, except that the pseudo-NGB (or heavy axion) is composite and the spontaneous breaking of the PQ symmetry occurs via the same dynamics that also has an Sp(2N) anomaly. Furthermore, the vector-like quarks were crucial to obtain viable scenarios. Since they are charged under QCD with possible masses near the TeV scale, they could potentially be directly produced at colliders and provide a hint for our mechanism. There are also cosmological consequences of our model. Since N is large, we expect that the confinement of Sp(2N) is associated with a first-order phase transition <cit.>, which may produce primordial gravitational waves. While interesting, a more detailed study of the phase transition is beyond the scope of this work. In addition, the lightest NGB which arises from the spontaneous breaking of baryon number can be a decaying dark matter candidate. We comment on possible future directions. It will be interesting to embed the theory into supersymmetric theories. Supersymmetry can solve the problem of Higgs-pion mixing and may also (partially) explain the small mass scales in the theory such as the electroweak scale, grand-color symmetry breaking scale, and the extra Higgs masses. Supersymmetric extensions, however, may have extra CP phases in the masses or couplings of superpartners, and thus it remains to be carefully checked whether the corrections to the strong CP phase from these possible phases are sufficiently suppressed. It will also be interesting to use the Sp(2N) dynamics to solve the problems in (beyond) the Standard Model in addition to the strong CP problem. At the very least, it provides a new massless up-quark type solution to the strong CP problem, providing an intriguing alternative to a light axion. § ACKNOWLEDGMENTS We thank Raymond Co for collaborating in the early stages of this work and Luca Vecchi for useful discussions. The work of R.B. and T.G. is supported in part by the Department of Energy under Grant No. DE-SC0011842 at the University of Minnesota. K.H. is supported by the Department of Energy under Grant No. DE-SC0025242 at the University of Chicago, a Grant-in-Aid for Scientific Research from the Ministry of Education, Culture, Sports, Science, and Technology (MEXT), Japan (20H01895), and by World Premier International Research Center Initiative (WPI), MEXT, Japan (Kavli IPMU). T.G. also acknowledges the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-2210452, where part of this work was performed. 
§ GRAND COLOR SYMMETRY BREAKING In this appendix, we present details of the grand color symmetry breaking. There are two ways to break the grand color group either as a two-step breaking considered in Eq. (<ref>) or just breaking the grand color group in a single step. We show that the latter is not possible. We first discuss the two-step symmetry breaking in Eq. (<ref>). The first symmetry breaking, SU(2N+3)→ SU(2N)× SU(3)_c × U(1), can be achieved by the VEV of scalar field transforming in the adjoint representation of SU(2N+3), which is proportional to [ 3 𝕀_2N ; -2N 𝕀_3 ] , where 𝕀_2N (𝕀_3) is a 2N× 2N (3× 3) identity matrix. It is shown in <cit.> that this VEV can be the absolute minimum of the scalar potential. The second symmetry breaking in Eq. (<ref>), SU(2N)× U(1)× U(1)_Y'→ Sp(2N)× U(1)_Y, is achieved by a scalar field, A_ij, transforming as a rank 2 anti-symmetric tensor of SU(2N+3) with a non-zero U(1)_Y' charge. Under SU(2N)× SU(3), A_ij decomposes into a rank 2 anti-symmetric tensor B_ij of SU(2N), a 3̅ of SU(3), and a bi-fundamental of SU(2N)× SU(3). By coupling the adjoint representation of SU(2N+3) to A_ij, we can generate a negative mass squared for B_ij while keeping the mass squared of all other terms positive, so that only B_ij obtains a non-zero VEV among the components of A_ij. Next, we show that the VEV of B_ij that breaks SU(2N) into Sp(2N) is the absolute minimum of the potential <cit.>. The scalar potential at the renormalizable level is V = - m_B^2 Tr(B B^†) + λ_B1( Tr(B B^†))^2 + λ_B2 Tr(B B^† B B^†). Note that the first two terms depend on the norm of B_ij, so the extremization of the potential is determined by the last term with the norm fixed. The B_ij VEV can be generically parameterized as B = [ i σ_2 b_1 ; ⋱ ; i σ_2 b_N ],   b_i ≥ 0 . Fixing the norm of B to be v_B via b_N^2=v_B^2- ∑_i=1^N-1 b_i^2, we extremize the following function of b_i (i=1,…,N-1), F(b_i) = ∑_i=1^N-1 b_i^4 + (v_B^2- ∑_i=1^N-1 b_i^2 )^2, which corresponds to the last term of Eq. (<ref>). The derivative of F with respect to b_i is ∂ F/∂ b_i = 4 b_i ( b_i^2 -b_N^2) , so the potential is extremized at b_i=0 or b_i^2=b_N^2 for i=1,…, N-1. This means that any non-zero b_i should take the same value. Without loss of generality, we take n≤ N nonzero b_i, with the remainder zero i.e. b_1 =⋯ =b_n≡ b, b_n+1 = ⋯ = b_N=0, b^2 = v_B^2/n . The last term in Eq. (<ref>) is then proportional to λ_B2 n b^4 = λ_B2v_B^4/n. When λ_B2 >0, the absolute minimum of the potential occurs for the largest possible value of n, i.e. n=N. Thus, SU(2N) is spontaneously broken to Sp(2N). We next discuss the alternative possibility of breaking the grand color group with a one-step symmetry breaking <cit.> SU(2N+3)× U(1)_Y'→ Sp(2N)× SU(3)_c× U(1)_Y. This can be achieved by the following VEV of A_ij, A∝[ i σ_2 ; ⋱ ; i σ_2 ; 0_ 3 ]. where 0_ 3 is a zero 3× 3 matrix. However, the VEV in Eq. (<ref>) is a saddle point rather than a local minimum and is not stable. To see this, we can extend the previous analysis on the breaking, SU(2N)→ Sp(2N) to SU(2N+3). Since 2N+3 is odd, the absolute minimum occurs by taking the (2N+3)th diagonal entry of A to be zero and the remaining 2N+2 entries nonzero. This gives the parametrization A = [ i σ_2 a_1 ; ⋱ ; i σ_2 a_N ; i σ_2 a ; 0 ]. The potential of A is V = - m_A^2 Tr(A A^†) + λ_A1( Tr(A A^†))^2 + λ_A2 Tr(A A^† A A^†) = -2m_A^2 (∑_i=1^N a_i^2 +a^2) +4 λ_A1(∑_i=1^N a_i^2 +a^2)^2 + 2λ_A2(∑_i=1^N a_i^4 +a^4). We consider the extremum with Eq. 
(<ref>), where a_i^2=m_A^2/(4 λ_A1 N+2λ_A2)≡ v_A^2 for i=1,…,N and a^2=0. At this extremum, the Hessian of the potential is 32λ_A1[ 1 ⋯ 1 0; ⋮ ⋱ ⋮ 0; 1 ⋯ 1 0; 0 0 0 0 ] v_A^2 +8λ_A2[ 2 𝕀_N 0; 0 -1 ] v_A^2 , with the N+1 eigenvalues 16(2 N λ_A1+ λ_A2 )v_A^2, 16λ_A2 v_A^2, ⋯, 16λ_A2 v_A^2,  - 8 λ_A2 v_A^2. Clearly, for λ_A2 >0 there is one negative eigenvalue, while for λ_A2 <0 there are N-1 negative eigenvalues. Thus, in both cases, the VEV in Eq. (<ref>) is not stable. For |λ_A2|≪ 1, quantum corrections due to gauge interactions will determine the stability of the VEV. However, these corrections should favor the VEV with maximal residual gauge symmetry, i.e., a_1≠ 0, a_2 = a_3 = ⋯ a_N=a=0. We conclude that the one-step gauge symmetry breaking is not possible. § PROOF THAT USING THE QUARK PICTURE In this appendix, we argue that the strong CP phase of QCD is zero rather than π using the quark picture. We choose the Sp(2N) sign convention so that Ψ_f Ψ_f̅≡Ψ_f^A Ψ_f̅,A = f^a f̅_a + J_ijψ_f^i ψ_f̅^j ≡ f f̅ + ψ_f ψ_f̅, where A, a, i are SU(2N+3), SU(3), and Sp(2N) indices, respectively and J_ij is an anti-symmetric Sp(2N) invariant tensor. §.§ One flavor toy model We consider the one flavor toy model in section <ref> and show that ψ_uψ_u̅<0, which is the key point. Since we are interested in the vacuum of the theory, we use the Euclidean path-integral. Introducing a mass m for the vector-like fermions ψ_u,ψ_u̅, the partition function is defined as Z(m) = ∫ Dψ_u Dψ_u̅DA e^- S_E -∫ d^4x_E (m ψ_u ψ_u̅ + h.c.) , where x_E are Euclidean spacetime coordinates and S_E is the Euclidean action of the gauge bosons A and quarks ψ_u,ψ_u̅ not including the mass terms. With θ set to zero above the SU(N) confinement scale, the Euclidean action S_E is real. Then ψ_u ψ_u̅ = - V^-1.1/Z∂ Z/∂ m|_m=0, where V is the spacetime Euclidean volume. [Note that since the chiral symmetry is explicitly broken by the strong dynamics, the order of the limits V→∞ and m→ 0 is not crucial.] Performing the path integral over fermions, we obtain Z(m) = ∫ DA  det(D + m) = ∫ DA ∏_λ_n >0m^n_0(A)(λ_n^2(A) + m^2), where n_0 is the number of fermion zero modes and iλ_n are the non-zero eigenvalues of the Dirac operator D. The only non-zero contribution to ∂ Z/∂ m|_m=0 arises from gauge field configurations with n_0=1, and in this case ∂ Z/∂ m|_m=0 >0. In addition, Z(0) is also positive, which then using (<ref>) gives the result ψ_u ψ_u̅ <0. §.§ One-generation models We first consider the minimal one-generation model in section <ref> where the key point required that ψ_q_uψ_u̅<0. To show this, we assume θ =0, and use the fact that the Sp(2N) vacuum is connected with ψ_q_uψ_u̅ = ψ_q_dψ_d̅ = - by a non-anomalous flavor transformation, chosen to be an SO(2) rotation: (ψ_q_d,ψ_u̅)→ (ψ_u̅, -ψ_q_d). In the electroweak symmetric limit, the vacuum is ψ_q_uψ_q_d = - ψ_u̅ψ_d̅ =. With a non-zero VEV of H, a nonzero condensate ψ_q_dψ_d̅<0 is induced, since y_d >0. Given that this vacuum is connected to the Sp(2N) vacuum via ψ_q_uψ_u̅ = ψ_q_dψ_d̅ by the SO(2) rotation, the desired negative value of ψ_q_uψ_u̅ is obtained. Next we consider the one-generation model with vector-like quarks in section <ref>. The case with m_U < m_Uu, where the SM up quark is mostly U̅, reduces to the analysis of the minimal one-generation model, so let us analyze the case with m_U > m_Uu. 
In this case, the SM right-handed up quark mainly comes from u̅, so choosing θ=0 means that the vacuum of Sp(2N) is connected with ψ_q_uψ_u̅ = ψ_q_dψ_d̅ = ψ_Uψ_U̅= - by a non-anomalous flavor transformation.[Note that the simple exchange of U̅ with u̅ involves an anomalous chiral rotation by π so one should be careful about which fermion is identified with a SM quark and the meaning of θ for a given choice.] Such a vacuum is parameterized by ψ_q_uψ_q_d = ,  [ ψ_d̅ψ_u̅ ψ_d̅ψ_U̅; ψ_Uψ_u̅ ψ_Uψ_U̅ ] = [ sinϕ cosϕ; cosϕ - sinϕ ]. The exchange of the Higgs neutral component at the confinement scale, generates an effective operator L∼λ_U y_d/Λ_ Sp^2(ψ_q_uψ_U̅) (ψ_q_dψ_d̅). Since the effective operator (<ref>) depends on the strong dynamics, we cannot precisely determine the magnitude of the coefficient of the operator, but we can still determine the sign. The condensation ψ_q_uψ_q_d= gives L∼λ_U y_dΛ_ Spψ_d̅ψ_U̅≡ - V_ eff . To minimize the energy (or V_ eff), we require ψ_d̅ψ_U̅>0, so that using (<ref>) we obtain ψ_U ψ_u̅ >0. For this sign, the mass generated by the dimension-six operator (<ref>) is negative, i.e. m_Uu<0. After integrating out UU̅, the up Yukawa coupling obtained from (<ref>) is given by - λ_U m_Uu/ m_U and is therefore positive. §.§ Two-generation models In the minimal two generation case, we can again rely on the arguments presented in section <ref> for each generation to conclude that θ̅=0. Starting from the vacuum ψ_q_uψ_u̅, ψ_q_dψ_d̅, ψ_q_cψ_c̅, ψ_q_sψ_s̅ <0, we can perform a baryon number and an SO(2) rotation to obtain ψ_q_uψ_q_d = - ψ_u̅ψ_d̅ =. Similarly, we can perform an SO(2) rotation of (ψ_q_s,ψ_c̅) to obtain ψ_q_cψ_q_s= - ψ_c̅ψ_s̅. Note that the relative sign of ψ_q_uψ_q_d and ψ_q_cψ_q_s does not need to be fixed a priori, since the two vacua are connected by a (non-anomalous) flavor transformation. The two-generation model with vector-like quarks in section <ref> reduces to the case with one generation and a vector-like quark, so the proof in appendix <ref> is applicable. § FOUR-FERMION OPERATORS DUE TO GAUGE BOSON EXCHANGE In this appendix we derive the interactions mediated by the massive grand-color gauge bosons. §.§ The first stage in the grand color symmetry breaking pattern (<ref>) contains the breaking SU(2N+3)→ SU(2N)× SU(3)_c. The branching rule of SU(2N+3) to SU(2N)× SU(3)_c is (2N+3)⊗(2N+3)→ (adj_2N,1)⊕ (1,1)⊕ (1,adj_3)⊕ (1,1)⊕ (2N, 3̅)⊕ (2N,3) . Therefore, there are additional massive vector bosons with mass M_ GC transforming as (2N, 3̅) and (2N, 3) which can have phenomenological consequences at low energies. These gauge bosons are the X_μ, am, and X_μ, ma subset of the SU(2N+3) gauge fields A_μ, ij, where i,j,a,m are gauge indices – i,j=1,2… 2N+3, a=1,2,3 and m=4,… 2N+3, and μ is a Lorentz index. The X_μ matrices refer to the off-diagonal subset of A_μ=A_μ^α t^α, where α=1,2… (2N+3)^2-1 is an adj_2N+3 index and can be written as X_μ, ij≡[ X_μ, am; X_μ, ma ] . It is convenient to define X_μ=X^α^'_μ t^α^', where t^α^' are the off-diagonal generators[These can always be chosen as matrices with σ_1/2, σ_2/2 in the a, m subspace – e.g. t^α^' defined by t^α^'_am=-i/2, t^α^'_ma=i/2 and t^α^'_ij=0 ∀ i,j≠ a,m.] of SU(2N+3), corresponding to the (2N, 3̅)⊕ (2N,3) representation in (<ref>). The matter content of the SU(2N+3) grand color theory, given in Table <ref>, consists of the left-handed Weyl fermions Ψ_q,Ψ_u̅ and Ψ_d̅. 
The interactions between these fermions and the massive gauge bosons X_μ are ℒ⊃ iΨ^†_q aσ^μ(-ig_GCX_μ am ) Ψ_q m+iΨ_u̅ m^†σ^μ(+ig_ GC X_μ am)Ψ_u̅ a+ M^2_ GCX_μ amX_ma^μ+ h.c. , where the normalization for the gauge boson mass term has been fixed using Tr t^α^'t^β^'=1/2δ^α^'β^' so that ℒ⊃1/2M_ GC^2 X^α^'_μX^α^' μ=M_ GC^2 Tr X_μ X^μ=M^2_ GC(X_μ amX_ma^μ+X_μ maX_am^μ) . In (<ref>), we have also used the property that (t_2N^α^')_ij=-(t_2N^α^')_ji to write the Ψ_u̅ a interaction in terms of X^μ_am, where t_2N are the generators for the fundamental representation of SU(2N). The equation of motion obtained from (<ref>) is then given by -2M_ GC^2X^μ_ma =g_GC(Ψ^†_q aσ^μΨ_q m-Ψ^†_u mσ^μΨ_u a) . Integrating out the massive gauge bosons in (<ref>), using (<ref>), then gives ℒ ⊃g_GC^2/2M_ GC^2((ψ^†_qσ^μq)(ψ_u̅^†σ_μu)+(q^†σ^μψ_q)(u̅^†σ_μψ_u)) , =g_GC^2/M_ GC^2((ψ^†_qψ_u̅^†)(uq)+(q^†u̅^†)( ψ_q ψ_u)) , where the grand-color fermion multiplets have been split into their SU(2N)× SU(3)_c components as shown in Table <ref>. A Fierz rearrangement has been performed to obtain the second line in (<ref>). This is the same four-fermion operator given in (<ref>). Note that there are no four-fermion operators such as d̅^†u̅^†ψ_d̅ψ_u̅ or q^† q^†ψ_qψ_q. Therefore, the condensates in Eqs. (<ref>) and (<ref>) do not generate mass for the SM fermions due to the grand color gauge bosons, unlike the one flavor toy model in section <ref>. §.§ The second stage of the grand color symmetry breaking (<ref>) involves SU(2N)→ Sp(2N). Due to this breaking at the scale M_ Sp, we obtain four-fermion operators corresponding to the exchange of massive SU(2N) gauge bosons between two pairs of fermions – two in the 2N representation and two in 2N representation. Therefore, we expect the four-fermion operators to appear with both signs. Since, ψ^Iψ^J∼ e^iΠ^IK/fΣ_0^KJ, these four-fermion operators have the form ψ^4/M_ Sp^2 and contribute to the pion masses only to subleading order O(^2/M_ Sp^2), compared to the Higgs-mediated Yukawa contributions that will be discussed in appendix <ref>. § VACUUM ALIGNMENT BY YUKAWA COUPLINGS AND MASSES In this appendix, we perform the vacuum stability analysis for the SU(2F) → Sp(2F) symmetry breaking associated with the confinement of Sp(2N) for the one and two-generation models discussed in sections <ref> and <ref>. This involves appropriately parameterizing the broken symmetry directions of the vacuum in terms of pions, and studying their potential. In particular, we will compute the pion VEVs and check whether these vacua are tachyonic or not, hence establishing their stability. Above the confinement scale, there is an SU(2F) flavor symmetry for the 2F fermions, ψ, charged under Sp(2N). Under an SU(2F) transformation, the fermions ψ and the non-linear sigma field Σ (as well as Σ_0) defined in (<ref>) transform as <cit.> ψ →𝒰ψ, 1/ψψ^T = Σ →𝒰Σ 𝒰^T where 𝒰 is a 2F× 2 F unitary matrix. The “effective" fermion masses can then be written as L = -1/2ψ^T M ψ + h.c. = 1/2 Tr(ψψ^T M) + h.c. , where the “effective" mass matrix M includes Dirac masses, Yukawa couplings and the Higgs field H. Once the Sp sector confines, the flavor symmetry breaks down to Sp(2F) <cit.>, and M can be considered as an SU(2F) spurion transforming as M→𝒰^*M 𝒰^†. 
This leads to the low energy Lagrangian ℒ ⊃/2Tr(Σ M +M^†Σ^†)+c_1/Tr(Σ M)Tr(M^†Σ^†) +c_2/{[Tr(Σ M)]^2+h.c.}+c_3/{Tr(Σ MΣ M)+h.c.} , where we have only included terms linear or quadratic in M, absorbed an order one coefficient in the first term into the definition of and c_1,2,3 are constants. §.§ One-generation model We first analyze the one-generation model with a vector-like quark presented in section <ref> where ψ= [ ψ_q_u ψ_q_d ψ_U ψ_u̅ ψ_d̅ ψ_U̅ ]^T  . As discussed in appendix <ref>, θ=0 corresponds to ψ_q_uψ_u̅=ψ_q_dψ_d̅= ψ_U ψ_U̅<0. The relevant condensates related to this by a non-anomalous flavor transformation are given by ψ_q_uψ_q_d, ψ_Uψ_u̅, ψ_d̅ψ_U̅>0. In terms of Σ_0^IJ≃⟨ψ_Iψ_J⟩ /, this can be written as Σ_0=[ iσ_2 0 0; 0 i σ_2 0; 0 0 i σ_2 ] . Given that the number of flavors F=3, there are 14 broken symmetry generators, corresponding to the flavor symmetry breaking SU(6)→ Sp(6) by the Sp(2N) strong dynamics, which satisfy T^αΣ_0=Σ_0T^α T. These generators T^α are given by T^1=1/2√(2)[ 𝕀_2 0 0; 0 - 𝕀_2 0; 0 0 0 ], T^2=1/2√(6)[ 𝕀_2 0 0; 0 𝕀_2 0; 0 0 -2 𝕀_2 ], T^3=1/2√(2)[ 0 0 iσ_1; 0 0 0; -iσ_1 0 0 ], T^4=1/2√(2)[ 0 0 iσ_2; 0 0 0; -iσ_2 0 0 ], T^5=1/2√(2)[ 0 0 iσ_3; 0 0 0; -iσ_3 0 0 ], T^6=1/2√(2)[ 0 0 𝕀_2; 0 0 0; 𝕀_2 0 0 ], T^7=1/2√(2)[ 0 0 0; 0 0 iσ_1; 0 -iσ_1 0 ], T^8=1/2√(2)[ 0 0 0; 0 0 iσ_2; 0 -iσ_2 0 ], T^9=1/2√(2)[ 0 0 0; 0 0 iσ_3; 0 -iσ_3 0 ], T^10=1/2√(2)[ 0 0 0; 0 0 𝕀_2; 0 𝕀_2 0 ], T^11=1/2√(2)[ 0 iσ_1 0; -iσ_1 0 0; 0 0 0 ], T^12=1/2√(2)[ 0 iσ_2 0; -iσ_2 0 0; 0 0 0 ], T^13=1/2√(2)[ 0 iσ_3 0; -iσ_3 0 0; 0 0 0 ], T^14=1/2√(2)[ 0 𝕀_2 0; 𝕀_2 0 0; 0 0 0 ] . We next redefine the generators as T^1=T^1, T^2=T^2, T^3=T^3+iT^4/√(2), T^4=T^5+iT^6/√(2), T^5=T^7+iT^8/√(2), T^6=T^9+iT^10/√(2), T^7=T^11+iT^12/√(2), T^8=T^13+iT^14/√(2) , which will be convenient for identifying the pion constituents. The non-linear sigma field is written in the form Σ(x)=exp[ i/fΠ· T ] Σ_0 , Π· T ≡∑_α=1^2Π^α(x)T^α + ∑_α=3^8 (Π^α(x)T^α + Π^α †(x)T^α†) , where Π^1,2 are real scalar fields and Π^3… 8 are complex scalar fields, so that the pion fields can be identified with quark-antiquark and diquark states. Using the basis (<ref>) for the generators in SU(6)/Sp(6), we can expand Σ in (<ref>) to leading-order beyond Σ_0 iΠ· T ·Σ_0=1/2[ 0 i/√(6)(√(3)Π^1+Π^2) Π^7 -Π^8 Π^3 -Π^4; - i/√(6)(√(3)Π^1+Π^2) 0 -Π^8 † -Π^7 † -Π^4 † -Π^3 †; -Π^7 Π^8 † 0 i/√(6)(-√(3)Π^1+Π^2) Π^5 -Π^6; Π^8 Π^7 † i/√(6)(√(3)Π^1-Π^2) 0 -Π^6 † -Π^5 †; -Π^3 Π^4 † -Π^5 Π^6 † 0 -i√(2/3)Π^2; Π^4 Π^3 † Π^6 Π^5 † i√(2/3)Π^2 0; ] . Using (<ref>) and (<ref>), we can read off the quark content of the different pion fields to linear order in the pion fields: f/(ψ_q_uψ_q_d)= i/2√(2)Π^1 + i/2√(6)Π^2,   f/(ψ_Uψ_u̅)= -i/2√(2)Π^1 + i/2√(6)Π^2, f/(ψ_d̅ψ_U̅)= - i √(2/3)Π^2,   f/ (ψ_q_uψ_d̅) = - f/(ψ_q_dψ_U̅)^† = 1/2Π^3, f/(ψ_q_uψ_U̅) = f/(ψ_q_dψ_d̅)^† = -1/2Π^4,   f/(ψ_Uψ_d̅) = - f/(ψ_u̅ψ_U̅)^† = 1/2Π^5 , f/(ψ_Uψ_U̅) = f/(ψ_u̅ψ_d̅)^† = -1/2Π^6,   f/(ψ_q_uψ_U) = - f/(ψ_q_dψ_u̅)^† = 1/2Π^7 , f/(ψ_q_uψ_u̅) = f/(ψ_q_dψ_U)^† = -1/2Π^8 . From the quark content in (<ref>) one can see that Π^1, Π^2 and Π^6 are electroweak-neutral pions. The electroweak-charged pions receive their dominant mass contribution via gauge boson loops, which results in a positive mass squared for all of them. Therefore, to study the vacuum stability, it suffices to check the signs of the quadratic Π^1, Π^2 and Π^6 terms that arise from (<ref>). The matrix M in (<ref>) can be obtained by identifying the terms L⊃ -y_d ψ_q ψ_d̅H - m_U ψ_U ψ_U̅- λ_Uψ_q ψ_U̅ H + h.c. 
, in (<ref>) and comparing these with (<ref>), implying M =[ 0 0 0 0 y_d φ^- λ_U φ^0; 0 0 0 0 y_d (φ^0)^† -λ_U φ^+; 0 0 0 0 0 m_U; 0 0 0 0 0 0; - y_d φ^- -y_d (φ^0)^† 0 0 0 0; -λ_U φ^0 λ_U φ^+ -m_U 0 0 0 ] , where φ^0 and φ^+ are the neutral and charged components of H=(φ^+  φ^0)^T, respectively, with φ^-=(φ^+)^†. Note that we work in the convention that ε^12=+1 such that ℒ⊃ -ψ_qψ_U̅H=-ε^ijψ_q iH_jψ_U̅=(φ^+ψ_q_d-φ^0ψ_q_u)ψ_U̅. The linear and quadratic potential of the electroweak-neutral pions obtained from Eq. (<ref>) is ℒ⊃ m_U/2( 1/fΠ^6 - i /4 √(6)f^2Π^6 (√(3)Π^1 + Π^2) ) + h.c. + c_3 y_d λ_U ^2/16π^2( |Π^6|^2 +1/2(Π^1 - 1/√(3)Π^2 )^2 ), where we have ignored O(m_U^2) terms, which are subdominant since we have assumed m_U≪. The vacuum alignment of the electroweak neutral pions is determined by c_3, and this coefficient can be obtained from the quantum corrections due to the pion-Higgs interactions. The terms proportional to c_3 originally included |H|^2 but since we are only interested in the pion potential in the electroweak symmetric limit, |H|^2 has been replaced with ^2/(16π^2), corresponding to the quantum corrections from the Higgs loop. To compute the coefficient c_3 in the chiral Lagrangian, we can analyze the mass corrections of the pions resulting from the kinetic term and the term linear in M in (<ref>), and compare them to the terms in (<ref>) that are quadratic in M. The quantum correction to the Π^6 mass can be computed using the following relevant interaction terms L⊃ /8 f^2(-λ_U + y_d ) Π^8 Π^6 φ^0 - /48 f^3 |Π^6|^2 (λ_U + y_d ) Π^4φ^0 + /2f(λ_U + y_d ) Π^4 φ^0 + h.c. -1/48f^2Π^6 Π^6 †∂^μΠ^4∂_μΠ^4 † . The quantum correction from φ^+ is the same as that from φ^0. The quantum corrections to the mass squared of Π_6 can be calculated from the diagrams in figure <ref> which gives m_Π_6^2 = ( ( -λ_U+y_d/8)^2 +2(-λ_U+y_d/48 ) (λ_U+y_d/2) - ( -1/48 )(λ_U+y_d/2)^2) Λ_Sp^2/16π^2lnΛ_Sp^2/m_Π_4^2 = - λ_Uy_d/8 ^2/16π^2lnΛ_Sp/m_Π_4 . Including similar contributions from φ^+ gives an additional factor of 2. Thus, comparing (<ref>) with the mass squared obtained from (<ref>) gives the identification c_3 = - 1/4 ln/m_Π_4. Since c_3<0, the mass squared of Π^6 is positive. For simplicity, the 𝒪(1) log factor in (<ref>) has been ignored in section <ref>. The first term in Eq. (<ref>) destabilizes Π^6=0, while the term proportional to c_3 λ_U y_d stabilizes it. The vacuum alignment is determined by the balance between these two terms. However, the potential (<ref>) is obtained from expanding around the vacuum (<ref>) assuming a zero Π^6 VEV. To allow for the possibility of a nonzero Π^6 VEV requires computing the full potential around Eq (<ref>). The full potential containing all the electroweak neutral pions is then given by L⊃1/2( 4 m_U/Π^ A1sin(Π^ A1/2f) (Im(Π^6) sin(Π^ B1/4√(6)f)+Re(Π^6) cos(Π^ B1/4√(6)f)) +c_3/2√(2)π^2λ_U y_d ( (Π^1-√(3)Π^2) 1/Π^ A1sin(Π^ A1/2f) sin(Π^ B1/4√(6)f) -2 √(2)cos(Π^ A1/2f) cos(Π^ B1/4√(6)f))) , where we have defined Π^ A1≡√(Π^6 †Π^6+1/8(Π^1-√(3)Π^2)^2) and Π^ B1≡√(3)Π^1+Π^2. Since only Re (Π^6) obtains a VEV, we can set all other pion fields to zero in (<ref>) and work with the much simpler potential ℒ⊃ -V(ϕ) = 2 m_Usinϕ - c_3/2π^2λ_U y_d cosϕ, ϕ≡ ReΠ^6/2f . The minimum of the potential (<ref>) is simply tanϕ = 4π^2 m_U/|c_3| λ_U y_d ≃16π^2 m_U/λ_U y_d , where in the second relation we have used (<ref>) (ignoring the log). The relation between ϕ and the quark condensation is given by Eq. (<ref>) and (<ref>) is used in (<ref>). 
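The minimization can be cross-checked numerically. In the sketch below the powers of the Sp(2N) dynamical scale Λ multiplying the two terms (Λ³ for the m_U tadpole and Λ⁴ for the c_3 term) are restored by dimensional analysis, and the numerical values of Λ, m_U, λ_U, y_d and c_3 are illustrative assumptions only; the grid minimum reproduces the arctan formula above.

```python
import numpy as np

# Illustrative inputs (assumed values): dynamical scale, vector-like mass, couplings.
Lam, m_U, lam_U, y_d = 1.0e9, 1.0e3, 1.0, 2.7e-5   # [GeV], [GeV], dimensionless
c3 = -0.25                                          # c_3 < 0, order one (log factor dropped)

def V(phi):
    """Neutral-pion potential -2 m_U Lam^3 sin(phi) + (c3/2pi^2) lam_U y_d Lam^4 cos(phi)."""
    return (-2 * m_U * Lam**3 * np.sin(phi)
            + c3 / (2 * np.pi**2) * lam_U * y_d * Lam**4 * np.cos(phi))

phi = np.linspace(0.0, np.pi / 2, 100001)
phi_num = phi[np.argmin(V(phi))]
phi_ana = np.arctan(4 * np.pi**2 * m_U / (abs(c3) * lam_U * y_d * Lam))
print(f"grid minimum: phi = {phi_num:.4f},  arctan formula: phi = {phi_ana:.4f}")
```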
In the limit, m_U ≪ y_d λ_U, the squared masses of the neutral pions are m_Π^2 = 0,  - c_3 y_d λ_U/16π^2^2, - c_3 y_d λ_U/16π^2 ^2,  -c_3 y_d λ_U/12π^2^2 , where c_3<0 is given in (<ref>). The nonzero mass-squared values are all positive and the massless neutral pion is associated with the spontaneous breaking of the baryon symmetry. For m_U ≫ y_d λ_U, three neutral pions have masses squared of O(m_U ) and one is massless. In this limit, ϕ→π/2 and the vacuum reduces to the one used in section <ref>: ⟨ψ_q_uψ_q_d⟩=-⟨ψ_u̅ψ_d̅⟩ > 0. We can also confirm the results presented in appendix <ref> in terms of fermions in the pion picture by including η' and the corresponding generator =𝕀/√(12) in (<ref>). As expected, we find that the η' mixes with the other pions – Π^1, Π^2, and Im Π^6, but the vacuum remains stable at η'=0. Note that there are no neutral pion mass terms proportional to λ_U^2 or y_d^2. This can be understood by collective symmetry breaking. Neglecting the Yukawa couplings, the model has an SU(2) flavor symmetry corresponding to transformations between ψ_U̅ and ψ_u̅, and another SU(2) corresponding to transformations between ψ_U and ψ_d̅. In the IR, this SU(2)× SU(2) flavor symmetry is broken down to SU(2) upon the quark condensation. Now, if we turn on only one of the Yukawa couplings, the symmetry breaking pattern is SU(2)→∅. Therefore, none of the neutral Nambu-Goldstone bosons receive a mass contribution proportional to λ_U^2 or y_d^2. This is evident in (<ref>) where we explicitly confirm the absence of a quantum correction to the mass of Π^6 proportional to λ_U^2 or y_d^2 from the interactions in Eq. (<ref>). §.§ Two-generation model We next analyze the two-generation model without a vector-like quark where ψ^T = ( ψ_q_uψ_q_dψ_q_cψ_q_sψ_u̅_1ψ_d̅_1ψ_u̅_2ψ_d̅_2) . As discussed in appendix <ref>, we need ψ_q_uψ_q_d >0 and ψ_u̅ψ_d̅ <0. Thus, we start from the field space Σ_0= [ iσ_2 0 0 0; 0 -iσ_2 0 0; 0 0 -iσ_2 0; 0 0 0 iσ_2 ] . Given the number of flavors F=4, the symmetry breaking is SU(8)→ Sp(8) which leads to 27 broken generators, with 6 of them corresponding to neutral pions. The neutral pions correspond to the following symmetry-breaking patterns. In the limit of vanishing Yukawa couplings, the theory has an SU(2)_q× SU(2)_u̅c̅× SU(2)_d̅s̅× U(1)_B× U(1)_u̅c̅-d̅s̅ flavor symmetry. The SU(2)_q symmetry is broken down to U(1), the SU(2)_u̅c̅× SU(2)_d̅s̅ symmetry breaks into SU(2), and U(1)_B is completely broken, giving six pions. The associated six generators are T^1=1/4[ 𝕀_2 0 0 0; 0 𝕀_2 0 0; 0 0 - 𝕀_2 0; 0 0 0 -𝕀_2 ], T^2=1/2√(2)[ 𝕀_2 0 0 0; 0 - 𝕀_2 0 0; 0 0 0 0; 0 0 0 0 ], T^3=1/2√(2)[ 0 0 0 0; 0 0 0 0; 0 0 𝕀_2 0; 0 0 0 -𝕀_2 ], T^4=1/2√(2)[ 0 i𝕀_2 0 0; -i𝕀_2 0 0 0; 0 0 0 0; 0 0 0 0 ], T^5=1/2√(2)[ 0 0 0 0; 0 0 0 0; 0 0 0 σ_3; 0 0 σ_3 0 ], T^6=1/2√(2)[ 0 0 0 0; 0 0 0 0; 0 0 0 i𝕀_2; 0 0 -i𝕀_2 0 ] . These generators are then redefined as T^i=T^i (i=1…4), T^5=i T^5+T^6/√(2), in order to determine the pion constituents. The non-linear sigma field associated with this redefinition is then Σ(x)= exp[ i/fΠ· T ] Σ_0, Π· T ≡∑_α=1^4Π^α(x)T^α + Π^5(x)T^5 + Π^5†(x)T^5† , where Π^1…4 are real scalar fields and Π^5 is a complex scalar field. 
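Before expanding Σ, one can verify numerically that the six generators listed above satisfy the broken-generator condition T^αΣ_0 = Σ_0 T^α T used in the one-generation analysis. The following sketch (a pure consistency check, written with numpy) builds Σ_0 and the generators block by block:

```python
import numpy as np

s2 = np.array([[0, -1j], [1j, 0]])   # Pauli sigma_2
s3 = np.array([[1, 0], [0, -1]])     # Pauli sigma_3
I2, Z2 = np.eye(2), np.zeros((2, 2))

# Sigma_0 = diag(i s2, -i s2, -i s2, i s2) for the two-generation model
Sigma0 = np.block([[1j*s2, Z2, Z2, Z2], [Z2, -1j*s2, Z2, Z2],
                   [Z2, Z2, -1j*s2, Z2], [Z2, Z2, Z2, 1j*s2]])

# the six neutral-pion generators listed above
T = [
    np.block([[I2, Z2, Z2, Z2], [Z2, I2, Z2, Z2], [Z2, Z2, -I2, Z2], [Z2, Z2, Z2, -I2]]) / 4,
    np.block([[I2, Z2, Z2, Z2], [Z2, -I2, Z2, Z2], [Z2, Z2, Z2, Z2], [Z2, Z2, Z2, Z2]]) / (2*np.sqrt(2)),
    np.block([[Z2, Z2, Z2, Z2], [Z2, Z2, Z2, Z2], [Z2, Z2, I2, Z2], [Z2, Z2, Z2, -I2]]) / (2*np.sqrt(2)),
    np.block([[Z2, 1j*I2, Z2, Z2], [-1j*I2, Z2, Z2, Z2], [Z2, Z2, Z2, Z2], [Z2, Z2, Z2, Z2]]) / (2*np.sqrt(2)),
    np.block([[Z2, Z2, Z2, Z2], [Z2, Z2, Z2, Z2], [Z2, Z2, Z2, s3], [Z2, Z2, s3, Z2]]) / (2*np.sqrt(2)),
    np.block([[Z2, Z2, Z2, Z2], [Z2, Z2, Z2, Z2], [Z2, Z2, Z2, 1j*I2], [Z2, Z2, -1j*I2, Z2]]) / (2*np.sqrt(2)),
]

# broken generators of SU(8) -> Sp(8) must obey T Sigma_0 = Sigma_0 T^T
for a, Ta in enumerate(T, start=1):
    assert np.allclose(Ta @ Sigma0, Sigma0 @ Ta.T), f"T^{a} fails the condition"
print("all six generators satisfy T Sigma_0 = Sigma_0 T^T")
```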
The leading non-trivial term in the expansion of Σ in (<ref>) is iΠ· T ·Σ_0 = i/2√(2)[ 0 1/√(2)Π^1 + Π^2 0 -i Π^4 0 0 0 0; -1/√(2)Π^1 - Π^2 0 i Π^4 0 0 0 0 0; 0 -i Π^4 0 -1/√(2)Π^1 + Π^2 0 0 0 0; i Π^4 0 1/√(2)Π^1 - Π^2 0 0 0 0 0; 0 0 0 0 0 1/√(2)Π^1 - Π^3 0 i √(2)Π^5; 0 0 0 0 -1/√(2)Π^1 + Π^3 0 -i √(2)Π^5† 0; 0 0 0 0 0 i √(2)Π^5† 0 -1/√(2)Π^1 - Π^3; 0 0 0 0 -i √(2)Π^5 0 1/√(2)Π^1 + Π^3 0 ] . The matrix M in (<ref>) then becomes M = [ 0 0 0 0 y_u φ^0 0 0 y_1 φ^-; 0 0 0 0 -y_u φ^+ 0 0 y_1 (φ^0)^†; 0 0 0 0 0 0 y_c φ^0 y_2 φ^-; 0 0 0 0 0 0 -y_c φ^+ y_2 (φ^0)^†; - y_u φ^0 y_u φ^+ 0 0 0 0 0 0; 0 0 0 0 0 0 0 0; 0 0 - y_c φ^0 y_c φ^+ 0 0 0 0; -y_1 φ^- - y_1 (φ^0)^† - y_2 φ^- - y_2 (φ^0)^† 0 0 0 0 ]. The linear and quadratic potential of the electroweak-neutral pions obtained from (<ref>) is ℒ⊃c_3/4π^2( 1/√(2)fy_1 y_c Π^4 -1/f y_1 y_u Re Π^5 +y_2 y_c /8f^2((Π^2+ Π^3)^2+(Π^4)^2 + 2|Π^5|^2 ) -y_u/2√(2) f^2( y_2 Π^4 Re Π^5 - y_1 Π^2 Im Π^5 ) ) , where Π^4 and Re Π^5 have tadpole terms and can obtain VEVs. The Π^4 tadpole term is much larger than the Re Π^5 tadpole term since y_c ≫ y_u. The VEV of Π^4 then generates another Re Π^5 tadpole term which exactly cancels the term independent of Π^4. The cancellation occurs beyond the leading-order which requires the full potential. This is obtained from (<ref>) and is given by ℒ⊃ -c_3/2π^2/Π^ A2Π^ B2(sin(Π^ B2/2 √(2) f) (-sin(Π^ A2/2 √(2) f) (y_2 (y_cΠ^2 Π^3-√(2)y_u Π^4 Re Π^5) +√(2)y_1 y_uΠ^2 Im Π^5)-y_1y_cΠ^4Π^ A2cos(Π^ A2/2 √(2) f)) + Π^ B2cos(Π^ B2/2 √(2) f) (y_2 y_cΠ^ A2cos(Π^ A2/2 √(2) f)+√(2)y_1 y_u Re Π^5 sin(Π^ A2/2 √(2) f))) , where we have defined Π^ A2≡√((Π ^3)^2+2Π ^5 †Π ^5) and Π^ B2≡√((Π ^2)^2+(Π ^4)^2). Focusing on the potential of the fields that can acquire a VEV, namely ϕ≡Π^4/(2√(2)f) and α≡ Re Π^5/(2f), we obtain ℒ⊃ -V(ϕ, α) = -c_3 /2π^2( y_c cosα(y_2 cosϕ - y_1 sinϕ) +y_u sinα(y_1 cosϕ + y_2 sinϕ) ) . For y_c > y_u, the minimum is given by Π^4 = -2 √(2) farctany_1/y_2, Π^5 = 0 , where Π^2 + Π^3 and the imaginary part of Π^5 both have positive mass-squared and no tadpole terms, and therefore do not obtain VEVs. The linear combination Π^2 - Π^3 is not fixed by the leading-order potential, but we find that it obtains a positive mass squared from a higher-order coupling with Π^4. With a non-zero Π^4 VEV, the quark condensate is ψψ^T = ×[ 0 y_2/√(y_1^2 + y_2^2) 0 - y_1/√(y_1^2 + y_2^2) 0 0 0 0; -y_2/√(y_1^2 + y_2^2) 0 y_1/√(y_1^2 + y_2^2) 0 0 0 0 0; 0 - y_1/√(y_1^2 + y_2^2) 0 - y_2 /√(y_1^2 + y_2^2) 0 0 0 0; y_1/√(y_1^2 + y_2^2) 0 y_2/√(y_1^2 + y_2^2) 0 0 0 0 0; 0 0 0 0 0 -1 0 0; 0 0 0 0 1 0 0 0; 0 0 0 0 0 0 0 1; 0 0 0 0 0 0 -1 0 ] , where ψ is given in (<ref>). This condensate is then rewritten in (<ref>). We also comment on the nature of the vacuum when the two-generation Yukawa matrix in (<ref>) is replaced by the SM-like Yukawa structure Y^d=[ y_dcosθ_c y_ssinθ_c; -y_dsinθ_c y_scosθ_c ] . In this case, ℒ⊃c_3/4π^2( 1/√(2)f(y_s y_c+y_dy_u)sinθ_c Π^4 -1/f(y_d y_c+y_sy_u)sinθ_c Re Π^5 +1 /8f^2(y_s y_c+y_dy_u)cosθ_c ( (Π^2+ Π^3)^2+(Π^4)^2 + 2|Π^5|^2 ) -1/2√(2) f^2(y_d y_c+y_sy_u)( cosθ_c Π^4 Re Π^5 - sinθ_c Π^2 Im Π^5 ) ) . We see from (<ref>) that only Π^4 and Re Π^5 can obtain VEVs due to Yukawa interactions. Including higher order terms in ϕ≡Π^4/(2√(2)f) and α≡ Re Π^5/(2f) and setting all other pion VEVs to zero, we obtain ℒ⊃ -c_3 /2π^2( (y_s y_c+y_dy_u) cos(θ_c+ϕ)cosα +(y_d y_c+y_sy_u)sin(θ_c+ϕ) sinα) . The minimum of this potential is given by Π^4 = -2 √(2) f θ_c , with all other pions having a vanishing VEV. 
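This alignment can be checked by brute force. The sketch below minimizes the potential above on a grid, dropping the positive overall prefactor -c_3Λ⁴/2π² (c_3<0), which does not move the minimum; the Yukawa values and the Cabibbo angle used here are illustrative numbers rather than a fit, and the grid minimum lands at ϕ ≃ -θ_c and α ≃ 0, i.e. Π⁴ = -2√2 f θ_c with a vanishing Π⁵ VEV.

```python
import numpy as np

# Illustrative Yukawa couplings and Cabibbo angle (assumed numbers)
y_u, y_d, y_s, y_c, theta_c = 1.3e-5, 2.7e-5, 5.5e-4, 7.3e-3, 0.227

def V(phi, alpha):
    """Pion potential up to the positive prefactor |c3| Lambda^4 / 2 pi^2."""
    return -((y_s * y_c + y_d * y_u) * np.cos(theta_c + phi) * np.cos(alpha)
             + (y_d * y_c + y_s * y_u) * np.sin(theta_c + phi) * np.sin(alpha))

phi = np.linspace(-np.pi / 2, np.pi / 2, 2001)
alpha = np.linspace(-np.pi / 2, np.pi / 2, 2001)
P, A = np.meshgrid(phi, alpha, indexing="ij")
i, j = np.unravel_index(np.argmin(V(P, A)), P.shape)
print(f"phi_min = {phi[i]:.4f} (expect -theta_c = {-theta_c:.4f}),  alpha_min = {alpha[j]:.4f} (expect 0)")
```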
The pion masses at this minimum are then m_Π^2 =  0,  -c_3/16π^2(y_d y_u+y_sy_c)^2,  -c_3/16π^2 (y_s± y_d )(y_c± y_u)^2,   - c_3/16π^2(y_d y_u+y_sy_c ±√((y_cy_d+y_uy_s)^2sin^2θ_c+(y_cy_s+y_uy_d)^2cos^2θ_c ))^2 . Note that the next-to-lightest state in the limit y_u,y_d→ 0 has a mass ∼√(y_sy_c)sin(θ_c/2) /4π, while for θ_c→ 0, the mass would instead be θ_c^2/2(y_c^2-y_u^2) (y_s^2-y_d^2) / (y_c y_s+y_d y_u) /4π. We also briefly comment on the case with vector-like quarks U, U̅. In this case, we start from the vacuum Σ_0= diag(iσ_2,iσ_2,iσ_2,iσ_2,iσ_2) , in the basis ψ^T = (ψ_q_u ψ_q_d ψ_q_c ψ_q_s ψ_U ψ_u̅_1 ψ_d̅_1 ψ_u̅_2 ψ_d̅_2 ψ_U̅ ) . The symmetry breaking generates 44 pions, 11 of which are electroweak neutral. In this case, we only show the pions that may obtain a VEV which are given by iΠ· T ·Σ_0 = - i/2√(2)[ 0_4× 4 0_4× 6; 0_6× 4 A_6× 6 ], A=[ 0 0 0 0 0 Π^1; 0 0 0 0 Π^1 0; 0 0 0 0 0 Π^2; 0 0 0 0 Π^2 0; 0 -Π^1 0 -Π^2 0 0; -Π^1 0 -Π^2 0 0 0 ] . The linear terms in the pion potential arising from the Yukawa interactions in (<ref>) are given by ℒ⊃/√(2)f(m_UΠ^1 + c_3/4π^2( y_d Re (λ_U1)- y_2^' y_s) Π^2 ) . Writing the potential to all orders then gives Π^1 = -2√(2) f α sinϕ, Π^2 = -2√(2) f α cosϕ , where ϕ= arctan4π^2m_U/c_3 ( y_d Re (λ_U1)- y_2^' y_s) , α=arctan√((c_3 ( y_d Re (λ_U1)- y_2^' y_s))^2+16π^4m_U^2)/c_3 (λ_U2 y_s+ y_1^' y_d) . The condensates can then be written as Σ_q = [ [first-row,first-col] ψ_q_u ψ_q_c ψ_q_d -1 0 ψ_q_s 0 -1 ] , Σ_ud = [ [first-row,first-col] ψ_d̅ ψ_s̅ ψ_U ψ_u̅ - sin^2α/2 sin2ϕ sinαsinϕ cos^2α/2+cos2ϕsin^2α/2 ψ_c̅ cos^2α/2-cos2ϕsin^2α/2 sinαcosϕ -sin^2α/2 sin2ϕ ψ_U - sinαcosϕ cosα -sinαsinϕ ] , where we have separated the SU(2) doublets and the singlets for convenience. In particular, note that in the limit m_U→ 0, we obtain Π^1 = 0, Π^2 = -2 √(2) farctan y_d Re(λ_U1)- y_2^' y_s/ y_sλ_U2+ y_1^' y_d . Also, when λ_U1→ y_2^' y_s/y_d, this reduces to the one generation case with a vector-like quark. § FLAVOR STRUCTURE OF THE THREE-GENERATION MODELS In this appendix we study the flavor structure of the three-generation models by computing the flavor invariants which can then be used to estimate the contribution of Yukawa phases to the strong CP phase. Consider the Yukawa interactions and mass terms in a generic model discussed in section <ref> L= - Y^u_ajψ_q_aψ_u̅_j H - Y^d_aiψ_q_aψ_d̅_iH - M_ijψ_d̅_iψ_u̅_j , where a=1… F_q, i=1,… F_d̅+1, and j=1,… F_u̅+1 are flavor indices and we have defined ψ_q =[ ψ_q_1, … , ψ_q_F_q ]^T , ψ_u̅ =[ ψ_u̅_1, …, ψ_u̅_F_u̅, ψ_U̅ ]^T , ψ_d̅ =[ ψ_d̅_1, … , ψ_d̅_F_d̅, ψ_U ]^T , which transform under the flavor symmetries[ Note that in this appendix, we are only interested in the flavor symmetries below the scale , and hence below the grand color breaking scale, M_ GC. In contrast to this, above M_ GC the flavor symmetries are SU(F_q)_q × SU(F_u̅+1)_u̅× SU(F_d̅)_d̅.] SU(F_q)_q × SU(F_u̅+1)_u̅× SU(F_d̅+1)_d̅ as ψ_q→ U_q ψ_q, ψ_u̅→ U_u̅ψ_u̅, ψ_d̅→ U_d̅ψ_d̅ , where U_q∈ SU(F_q)_q and U_u̅,d̅∈ SU(F_u̅,d̅+1)_u̅,d̅. Additionally, there are two more non-anomalous U(1) symmetries, which for F_u̅=F_d̅ are U(1)_1: ψ_q(F_u̅,d̅ +1), ψ_u̅(-F_q), ψ_d̅(-F_q) , U(1)_2: ψ_q(0),  ψ_u̅(1),   ψ_d̅(-1) . The Yukawa couplings Y^u, Y^d and the mass terms M can be treated as spurions under these symmetries transforming as Y^u → U_q^* Y^u U_u̅^† , Y^d → U_q^* Y^d U_d̅^† , M → U_d̅^* M U_u̅^† , under the non-abelian flavour symmetries, whereas under U(1)_1,2, M has charge (2F_q,0), and Y^u, d has charge (F_q-F_u̅,d̅-1,∓ 1), respectively. 
Apart from the Yukawa couplings themselves, the quark condensates, Σ_ud and Σ_q, described in appendix <ref> can also contribute to the corrections to the strong CP phase. Under the non-abelian flavor symmetries, these transform as Σ_ud→ U_u̅Σ_ud U_d̅^T , Σ_q→ U_qΣ_q U_q^T , and under U(1)_1,2, their charges are Σ_ud(-2F_q,0), and Σ_q(2F_u̅,d̅+2,0), respectively. To study CP violation arising from the Yukawa couplings, masses, and condensates, we can construct quantities with a non-zero phase that are invariant under SU(F_q)_q × SU(F_u̅+1)_u̅× SU(F_d̅+1)_d̅× U(1)_1× U(1)_2. We first discuss the phenomenologically viable models discussed in section <ref>, specifically the model described in section <ref>. This has the flavor symmetries SU(2)_q× SU(3)_u̅× SU(3)_d̅ for the fermions charged under SU(3) (corresponding to F_q=F_u̅=F_d̅=2) and a flavor symmetry SU(2)_B for the quarks (d̅_3, B̅) charged under SU(3)_T. By the rotations of fields, the CP phases in this model can be put in λ_U1 and y^B_a. The λ_U1 appear in the 2× 3 matrix Y^u and transforms accordingly as in (<ref>), while for y^B_a we can define X^B_ab≡ y^B_ay^B *_b which transforms as X^B → U_d̅^*X_B U_d̅^T . Note that the CKM phase for a three-generation model, such as the SM, is given by arg Tr(X_u^2X_d^2X_uX_d ), where X_u,d≡Y^u,dY^u,d † and Y^u,d are 3× 3 matrices. As such, using the effective 3× 3 matrix (<ref>), the CKM phase in the model in section <ref> can be identified with arg( y_1^By_2^B *) for our choice of basis. The leading-order flavor invariants that can have a non-zero phase are given by Tr( (Y^d)^T Σ_qY^uΣ_ud) , Tr(X_B M Σ_ud) , Tr((Y^d)^TΣ_qY^uM^†) , Tr( Y^uΣ_udMY^u †) , Tr(X_B (Y^d)^TΣ_qY^uΣ_ud) , Tr(X_B M Σ_udM Σ_ud) . Using the form of the condensates in (<ref>), the imaginary part of the leading-order invariant (<ref>) is given by | Re (λ_U1) y_d-y_2^' y_s| /ℐIm ( λ_U1 )/λ_U2y_d/y_s≲ 10^-8 Im ( λ_U1 )/λ_U2 , where we have defined ℐ= √(1+(Re (λ_U1) y_d-y_2^' y_s)^2/λ_U2^2 y_s^2 +16 π ^4 m_U^2/c_3^2 λ_U2^2 y_s^2 ^2) , which is always ≳ 1. Note that, since y_2^'∼ y_c and the combination Re (λ_U1) y_d-y_2^' y_s ∼ 10^-6 appears with a factor of 1/16π^2 (see the Lagrangian in (<ref>), for example), the quantity in (<ref>) contributes to the strong CP phase by an amount ∼ 10^-10 for λ_U1∼λ_U2, close to the current experimental bounds. Note that (<ref>) and (<ref>) do not involve Y^u,d, and the phases in y^B_1,2 can be rotated away into Y^u,d without changing the traces in (<ref>) and (<ref>), and hence are purely real. It can be explicitly checked that both (<ref>) and (<ref>) vanish for the model in section <ref>. Furthermore, the quantity in (<ref>) vanishes for the model in section <ref>. The next correction comes from (<ref>) and is given by (1-1/ℐ)| Re (λ_U1) y_d-y_2^' y_s| y_1^'/1+c_3^2 ^2 (Re (λ_U1) y_d-y_2^' y_s)^2/16 π ^4 m_U^2Im(λ_U1)≲10^-10 Im(λ_U1) , which does not contribute significantly to the corrections to the strong CP phase. Finally, we discuss the trace in (<ref>). Schematically, its imaginary part is given by Im( y_1^By_2^B *)ℐ_1+ Im( y_1^By_2^B *λ_U1^*)ℐ_2+ Im(λ_U1)| y_1^B|^2ℐ_3 , where we have introduced ℐ_1,2,3≪ 1, which are certain products of Yukawa couplings and sinusoidal functions appearing in (<ref>). Since y^B_1,2∼ 10^-8, the expression (<ref>) is numerically ≪ 10^-10, and cannot contribute significantly to the strong CP phase. Note that this is the first trace where the CKM phase ∼ arg( y_1^By_2^B *) appears. 
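A useful sanity check on this construction is that the traces above really are flavor invariants. The sketch below builds random spurions with the index structure of the two-generation sector (F_q = 2, F_u̅ = F_d̅ = 2, so Y^u and Y^d are 2×3, Σ_q is 2×2 and Σ_ud is 3×3), applies the transformation rules quoted above with random unitaries, and confirms that the leading trace is unchanged; the random matrices are placeholders, not the model's textures.

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_unitary(n):
    """A random n x n unitary from the QR decomposition of a complex Gaussian matrix."""
    q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q

# random "spurions" with the flavor index structure described above
Yu = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))    # q x ubar
Yd = rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3))    # q x dbar
Sq = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
Sq = Sq - Sq.T                                                 # antisymmetric q-q condensate
Sud = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))   # ubar x dbar condensate

def invariant(Yu, Yd, Sq, Sud):
    return np.trace(Yd.T @ Sq @ Yu @ Sud)

Uq, Uu, Ud = rand_unitary(2), rand_unitary(3), rand_unitary(3)
Yu2 = Uq.conj() @ Yu @ Uu.conj().T
Yd2 = Uq.conj() @ Yd @ Ud.conj().T
Sq2 = Uq @ Sq @ Uq.T
Sud2 = Uu @ Sud @ Ud.T
print(invariant(Yu, Yd, Sq, Sud))
print(invariant(Yu2, Yd2, Sq2, Sud2))   # identical up to rounding
```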
We next consider the model with the third generation charged under a different SU(2)_T gauge group, as discussed in section <ref>, which has the flavor symmetries SU(2)_q× SU(4)_u̅× SU(4)_d̅ (corresponding to F_q=2, F_u̅=F_d̅=3). In this case, we have multiple phases appearing in the Yukawa couplings in (<ref>). We now denote the H_FS Yukawa couplings by Y^u, Y^d, which transform as in (<ref>) and (<ref>). In addition, we define X^u,d_ab≡Y^u,d *_3aY^u,d_3b which transforms as X^u,d→ U_u̅,d̅X^u,d U_u̅,d̅^† . Note that Y^u and X^u also include the Yukawa couplings of the terms containing the quark doublet and the vector-like quark U̅. The leading-order flavor invariants are then given by Tr (X^d *MΣ_ud) , Tr (X^d *(Y^d)^TΣ_qY^uM^†) , Tr (X^d *(Y^d)^TΣ_qY^uΣ_ud) , Tr (X^uΣ_udM) , Tr (X^u *(Y^u)^TΣ_qY^dM^*) , Tr (X^u *(Y^u)^TΣ_qY^dΣ_ud^T) . The explicit form of the condensates Σ_ud, Σ_q was computed for a different model in appendix <ref>. For the extra SU(2) model discussed in section <ref>, we additionally have the Yukawa couplings of ψ_q_1,2 to the ψ_b̅, ψ_t̅ quarks. Since the corresponding CKM mixing angles are small compared to those between the first and second generation fermions, the quarks ψ_b̅ and ψ_t̅ only condense with each other, i.e., the resulting Σ_ud can be approximated as a 4× 4 matrix, consisting of an upper left block given by (<ref>), approximately -1 in the ψ_t̅ ψ_b̅ entry and almost zero (≲ 10^-3) elsewhere. The sign of ψ_t̅ ψ_b̅ is chosen so that Σ_ud=0. The condensation of ψ_q_1,2 is not expected to change, and hence Σ_q is still given by (<ref>). Using this form for Σ_ud and Σ_q, the quantities (<ref>), (<ref>), (<ref>), and (<ref>) vanish for our model. The imaginary part of (<ref>) is given by (m_U/)^2| Re (λ_U1) y_d-y_2^' y_s| /ℐ (ℐ+1)Im ( Y^u *_3U̅Y^u_32 )/λ_U2^2 y_s^2 ≲ 10^-11 1/λ_U2^2(m_U/)^2 , where we have taken Y^u_3U̅ and Y^u_32 to be of order the SM Yukawa couplings of q_3 to the first and the second generation fermions, respectively. Furthermore, this quantity appears with a factor of 1/16π^2, as can be seen from the number of Yukawa couplings appearing in (<ref>). We thus require m_U≲10 λ_U2, which is easily satisfied for our model. Note that in computing (<ref>), we assumed all the condensates between ψ_t̅ and ψ_b̅ are zero, except those between each other. The correction upon relaxing this assumption is given by m_U/Im ( Y^u *_3U̅(Σ_ud)_3U̅ ) ≲ 10^-11(m_U/) , where we have used a conservative estimate for (Σ_ud)_3U̅∼ 10^-3. Thus, we see that our approximation works well for the estimate (<ref>). Finally, we consider the imaginary part of the trace in (<ref>), which is given by y_d Im( Y^u_13Y^u_32Y^u_33 )+ | Re (λ_U1) y_d-y_2^' y_s| /ℐIm( Y^u_23Y^u_32Y^u_33 )/λ_U2≲ 10^-11 +10^-12 1/λ_U2 , where we have taken the Yukawa couplings Y^u to be their SM values. This quantity is further accompanied by a factor of (1/16π^2)^2, and hence does not generate a significant correction for λ_U2≳ 10^-3. Upon relaxing the assumption that (Σ_ud)_3i=0, i≠ 3, we obtain a correction to (<ref>) given by ∑_i=1,2(Σ_ud)_3i y_d,sIm(λ_UiY^u *_3U̅+y^'_iY^u *_32+Y^u_i3Y^u *_33) ∼ 10^-9+10^-15λ_U2+ 10^-16λ_U1 , where we have taken the Yukawa couplings Y^u to be their SM values and (Σ_ud)_3j∼ 10^-3, j=1,2. With the additional factor of (1/16π^2)^2, (<ref>) does not generate any significant correction. Finally, we can also check determinant-like invariants in addition to the trace-like invariants discussed above. 
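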
Since each determinant expression reduces to a product of separate determinants, we only need to check the determinants of the individual matrices appearing in the trace-like quantities. In particular, Σ_ud=Σ_q=0 for the expressions given in (<ref>) (and their generalization for the extra SU(2) model) which also serves as a consistency check on the analysis in appendix <ref> since the determinants are real and positive (=1). Furthermore, we can work in the basis in which the down-type Yukawa couplings are diagonal and positive, thus implying Y^d=0, and since u̅_1 does not couple to any of the quark doublets, Y^u=0. As such, the determinant of the product of any of the Yukawa matrices or the condensates is also positive (and real), and therefore the determinant-like invariants do not give any corrections to the strong CP phase. We thus see that the corrections to the strong CP problem appearing at higher-loop order remain small in both of our phenomenologically viable models discussed in section <ref>. JHEP
http://arxiv.org/abs/2408.11693v1
20240821151342
Multi-scale interactions in turbulent mixed convection drive efficient transport of Lagrangian particles
[ "Andrew P. Grace", "David H. Richter" ]
physics.flu-dyn
[ "physics.flu-dyn", "math-ph", "math.MP" ]
§ ABSTRACT
When turbulent convection interacts with a turbulent shear flow, the cores of convective cells become aligned with the mean current, and these cells (which span the height of the domain) may interact with motions closer to the solid boundary. In this work, we use coupled Eulerian-Lagrangian direct numerical simulations of a turbulent channel flow to demonstrate that under conditions of turbulent mixed convection, interactions between motions associated with ejections and low-speed streaks near the solid boundary, and coherent superstructures in the interior of the flow, lead to significant vertical transport of strongly settling Lagrangian particles. We show that the primary suspension mechanism is associated with strong ejection events (canonical low-speed streaks and hairpin vortices characterized by u'<0 and w'>0), whereas secondary suspension is strongly associated with large scale plume structures aligned with the mean shear (characterized by w'>0 and θ'>0). This coupling, which is absent in the limiting cases (pure channel flow or free convection), is shown to lead to a sudden increase in the interior concentration profiles as Ri_τ increases, resulting in concentrations that are larger by roughly an order of magnitude at the channel midplane.
§ INTRODUCTION
It is well-known through numerical simulation, experiment, and theory that fluid turbulence significantly influences the transport of heavy Lagrangian particles with appreciable inertia. For example, turbulence generated near Earth's surface can lead to long range transport (>2000 km) of giant dust grains (>100 μm) <cit.>, while simple scaling theories predict a much shorter displacement due to their strong settling behaviour. Understanding and predicting the ultimate fate of heavy particles is particularly interesting to a wide range of scientific disciplines due to the climatic and health impacts of atmospheric aerosols and particles, and accurate modeling of the primary mechanisms responsible for this transport is of key interest. In this “mysterious long-range transport" problem, <cit.> hypothesized several mechanisms which may influence particle transport including strong convection and high wind speeds, on which we place our primary focus. Interestingly, under convective conditions and moderate to strong mean shear, coherent, roll-like structures arise, and are well documented in Earth's atmosphere <cit.>. In idealized studies (discussed more below), this phenomenon is often referred to as “mixed convection". In this work, our goal is to investigate the consequences of mixed convection on the suspension and transport of strongly settling inertial particles. When turbulence production by convection is comparable to that by shear, the flow is termed “mixed convection" (the limiting cases being a pure channel flow and free convection), and it is this dynamic regime in which we focus our study. Though our knowledge of free and forced convection is extensive (see <cit.>, for example), the mixed convection literature is comparatively small. For example, the first study focused on the role of shear in Rayleigh-Bénard (RB) convection was undertaken by <cit.>, and since then, there have been numerous studies focused on the impacts of mixed convection on the vertical transport of heat in channel flows <cit.>, and atmospheric boundary layers <cit.>. 
A primary hallmark of mixed convection is that convective plumes generated near a solid boundary become aligned with the background flow within the interior of the flow, leading to large, coherent streamwise rollers, often referred to as superstructures. Studies focused on the role of convection on the transport of inertial particles is relatively few. For example, see the recent work by <cit.> who focused on developing a stochastic model for the lifetime of small particles in an experimental chamber designed to study clouds in RB flow <cit.>. To the authors' knowledge, there may only be one study focused on the role of mixed convection in particle transport <cit.>. In that work, the authors focused on the role of the particles in heat transfer throughout a turbulent boundary layer, and while the authors traversed a relatively large range of Richardson number (the key parameter quantifying the strength of convective turbulence to shear generated turbulence), they were restricted to relatively low Reynolds number (Re_τ=180), and ignored particle settling. In this work, we aim to describe a new mechanism by which strongly settling inertial particles are efficiently mixed through the boundary layer, in a way that isn't present for either pure channel flow or free convection. Specifically, we are interested in the interactions between the near-boundary structures (associated with low-speed streaks and ejection events, see e.g., <cit.>), and the interior superstructures generated by convective plumes, and how they couple to lead to vertical transport of isothermal inertial particles settling under the action of gravity. To investigate the dynamics, we use a series of coupled Eulerian-Lagrangian direct numerical simulations to simulate settling inertial particles channel flow ranging from free convection to pure shear. § TECHNICAL BACKGROUND §.§ Carrier Phase In this letter, we use the NCAR Turbulence with Lagrangian Particles Model <cit.> to simulate one-way coupled inertial particles emitted from the lower solid boundary a turbulent closed-channel flow. This code has been validated and used in multiple studies focused on inertial particle settling and transport in turbulent boundary layers <cit.>. For the carrier phase, we use direct numerical simulations (DNS) to solve the three-dimensional, incompressible Navier-Stokes equations under the Boussinesq approximation in a turbulent channel flow setup of streamwise length L_x, spanwise extent L_y, and total height 2h. At the upper and lower boundaries, a no-slip boundary condition is enforced, while the domain is periodic in the x and y directions. The background state of the carrier phase is established by accelerating the flow with an imposed pressure gradient, -dP/dx>0 (note that x̂ is the unit vector in the streamwise direction) and allowing the flow to become turbulent. The magnitude of the pressure gradient allows us to define a friction velocity u_τ = √(τ_w/ρ_a), where τ_w is the stress at the lower boundary and ρ_a is the fluid density. The governing parameters of the carrier phase are: Ri_τ = Ra/PrRe_τ^2, Ra = gα_θΔ T(2h)^3/νκ, Re_τ = u_τ h/ν, Pr = ν/κ. Respectively, these are the Richardson number, the Rayleigh number, the friction Reynolds number, and the Prandtl number. In these parameters, α_θ is the isobaric thermal expansion coefficient. Throughout this work, we assume Pr=0.715 for all cases, where ν is the kinematic viscosity for dry air, and κ is the thermal diffusivity of dry air. 
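For concreteness, the non-dimensional groups defined above can be evaluated directly from dimensional inputs. The short Python sketch below is only illustrative: the numerical values are placeholders and do not correspond to the cases listed in Table <ref>.

# Illustrative placeholder inputs (dimensional), not the values of Table <ref>.
nu      = 1.5e-5    # kinematic viscosity of dry air [m^2/s]
Pr      = 0.715     # Prandtl number used throughout this work
kappa   = nu / Pr   # thermal diffusivity of dry air [m^2/s]
g       = 9.81      # gravitational acceleration [m/s^2]
alpha_t = 3.4e-3    # isobaric thermal expansion coefficient [1/K]
dT      = 0.5       # temperature difference Delta T [K]
h       = 0.5       # channel half-height [m]
u_tau   = 0.05      # friction velocity u_tau = sqrt(tau_w / rho_a) [m/s]

Ra     = g * alpha_t * dT * (2.0 * h) ** 3 / (nu * kappa)  # Rayleigh number
Re_tau = u_tau * h / nu                                    # friction Reynolds number
Ri_tau = Ra / (Pr * Re_tau ** 2)                           # friction Richardson number

print(f"Ra = {Ra:.3e}, Re_tau = {Re_tau:.0f}, Ri_tau = {Ri_tau:.1f}")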
As mentioned previously, Ri_τ is an important parameter as it characterizes the relative importance of turbulence generated by convection (through Ra) to that generated by friction along the solid boundary (through Re_τ). Practically speaking, <cit.> noted the appearance of coherent superstructures in both the temperature field and vertical fluctuating velocity field within the regime 1< Ri_τ < 1000. In this work, we aim to investigate the regime Ri_τ∼𝒪(10-100) as it allows us to attain computationally feasible values for Re_τ while also allowing for the formation of the coherent roll structures. Values for Ra, Re_τ, and Ri_τ used throughout this work can be found in Table <ref>. In the table, case names correspond to the dynamic regime of the flow, i.e. CF (channel flow), FC (free convection), and MC (mixed convection). Note there is a case named MC-Sc, which corresponds to a mixed convection case with a lower Schmidt number value (discussed next). §.§ Dispersed Phase The applications of this work are towards coarse dust transport in the atmospheric surface layer. These dust particles range in size, but even coarse and giant grains (roughly 30-100 μ m) are significantly smaller than the local Kolmogorov scale, which can be in the range of several millimetres. These particles are also significantly denser than the carrier phase, and their volume fractions are low once they are above the emission layer, so we may ignore added mass and Basset-History forces, as well as two-way coupling and particle-particle interactions. Given these assumptions, we apply the point-particle approximation and apply the conservation of momentum for a rigid spherical particle subjected to linear hydrodynamic drag and gravity. The one-way coupled point particle approach also has the added benefit that each particle is independent from each other particle, effectively removing the volume fraction as a governing parameter. This allows us to increase the number of particles to ensure convergence of the statistics of interest without affecting the flow. Using the local Kolomogorov scales to non-dimensionalize the particle equations of motion, we arrive at St_ηdv_p/dt =Ψ(u_f(x_p(t),t)- v_p) - Sv_ηẑ Here, v_p = (v_1,v_2,v_3) is the three dimensional velocity vector for each particle, x_p is the location of each particle in space, u_f(x_p(t),t) = (u,v,w) is the three dimensional instantaneous flow velocity evaluated at the location of the particle. St_η and Sv_η are the governing parameters of the particle equations of motion written in terms of the local Kolmogorov scales, which can be defined in channel flow, mixed convection, and free convection. We also report the governing parameters in terms of the viscous scales of the flow, St^+ and Sv^+, which are defined in channel flow and mixed convection, and St_* and Sv_* which are defined in mixed and free convection. These parameters are defined as St_η =τ_pϵ^1/2/ν^1/2, Sv_η =v_g/(νϵ)^1/4, St^+ =τ_pu_τ^2/ν, Sv^+ =v_g/u_τ, St_* =τ_pw_*/h, Sv_* =v_g/w_*, and their values used throughout this work can be found in table <ref>. In the expressions above, ϵ is the domain averaged dissipation, w_* = (gα h ⟨ w'θ'⟩_s)^1/3 is the Deardorf convective velocity scale (⟨ w'θ'⟩_s is the surface heat flux), and τ_p is the Stokes relaxation timescale of the particles. The Stokes settling velocity is defined as v_g = τ_p g_p where g_p is the gravitational acceleration applied to the particle. 
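As a companion to the definitions above, the following sketch converts a particle relaxation time and the relevant flow scales into the Kolmogorov-, viscous-, and convective-scale Stokes and settling parameters; all numbers are illustrative placeholders rather than the values reported in table <ref>.

# Illustrative placeholder inputs, not the values of table <ref>.
tau_p  = 0.05     # Stokes relaxation timescale of the particles [s]
g_p    = 9.81     # gravitational acceleration applied to the particle [m/s^2]
nu     = 1.5e-5   # kinematic viscosity [m^2/s]
eps    = 1.0e-3   # domain-averaged dissipation [m^2/s^3]
u_tau  = 0.05     # friction velocity [m/s]
w_star = 0.10     # Deardorff convective velocity scale [m/s]
h      = 0.5      # channel half-height [m]

v_g = tau_p * g_p                      # Stokes settling velocity

St_eta  = tau_p * (eps / nu) ** 0.5    # St_eta = tau_p eps^(1/2) / nu^(1/2)
Sv_eta  = v_g / (nu * eps) ** 0.25     # Sv_eta = v_g / (nu eps)^(1/4)
St_plus = tau_p * u_tau ** 2 / nu      # St^+   = tau_p u_tau^2 / nu
Sv_plus = v_g / u_tau                  # Sv^+   = v_g / u_tau
St_star = tau_p * w_star / h           # St_*   = tau_p w_* / h
Sv_star = v_g / w_star                 # Sv_*   = v_g / w_*

print(f"St_eta = {St_eta:.2f}, Sv_eta = {Sv_eta:.2f}, St^+ = {St_plus:.2f}, Sv^+ = {Sv_plus:.2f}")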
In simulations, g_p need not be equivalent to g (the gravitational acceleration applied to the fluid), thus allowing for turbulence, settling, and inertial properties to be specified independently. Ψ = 1 + 0.15Re_p^0.687 is the Schiller-Neumann correction to the drag force, and Re_p is the particle Reynolds number. As we keep the particle diameter fixed for all cases in this work, the particle Reynolds number remains small, meaning that Ψ≈ 1. Next, we must consider is the boundary conditions for the particles, specifically the scheme by which we are to lift the particles into the domain. Here we take an approach focused on simplicity; following <cit.> and <cit.>, we add a Brownian-like jump term to the particle equations of motion. Using this term, particles take a discontinuous jump with zero mean and unit variance √(2κ_p dt) where, for practical purposes, dt is the timestep of the model and κ_p is a parameter modeling the particle diffusivity. Mathematically, the location of the particle centroid is advanced according to the following equation: dx_p = v_p dt + (Sc/2)^-1/2 dt^1/2dξ, where dξ is a Weiner process. Particles are initialized in a thin reservoir of thickness D held at a fixed concentration beneath the lower solid boundary and are emitted through the lower surface. This approach effectively creates a Dirichlet condition for the particle concentration which is maintained throughout the simulation. For the upper boundary, particles reflect creating a Neumann no-flux condition. This model introduces several new parameters, specifically the particle Schmidt number, Sc = ν/κ_p and the fixed reservoir concentration, 𝒞. A schematic highlighting the salient features of the model is shown in figure <ref>. For a more realistic (but significantly more complex) approach to particle emission, see <cit.>, for example. § RESULTS To examine the consequences of mixed convection on the suspension of settling particles, we first provide a comparison of the coherent structures present in the turbulent flow by examining the fluctuating vertical fluid velocity for a pure turbulent channel flow (CF; figure <ref> (a-b)), free convection (FC; figure <ref> (c-d)), and mixed convection (MC; figure <ref> (e-f)). For each of these cases, the left column shows y-z slices along the centre of the domain, while the right column shows x-y slices at the mid-plane, indicated by small schematics at the top of the figure columns. Finally, for the mixed and free convection cases, we have indicated regions where w'θ' > 0.12κΔ T(2h)^-1Ra^1/3 with contour shading. This criterion is derived from the scaling relationship discussed in <cit.>, who studied turbulent channel flows under unstable stratification for similar Re_τ and Ra. This criterion serves as a quantitative indicator of large positive turbulent heat fluxes in the domain. By isolating regions where the turbulent heat flux is larger than this value (the shaded regions), we can identify structures in the flow that exhibit very strong updrafts and downdrafts. The CF case, figure <ref>(a-b), exhibits characteristic flow structures that vary in vertical scale across the domain, including small scale instabilities associated with low speed streaks near the solid boundaries, and larger features within the interior that scale with h. Conversely, in the FC case, shown in <ref>(c-d), we can see evidence of organized convective cells in both the horizontal and vertical slices, which tend to scale with the full domain height 2h and dominate the interior motion. 
Importantly, there is much less activity near the solid boundary, as the boundary stresses induced by the convective plumes are not significant at this Ra <cit.>. The important insight is that MC, figure <ref>(e-f), shares aspects of both the CF and FC cases. For example, we can see large scale structures within the interior of the MC case (strong updrafts and downdrafts), figure <ref>(e), reminiscent of the domain size convective cells from the FC case. However, in the horizontal, figure <ref>(f), these interior plumes becomes strongly aligned in the streamwise direction, with weaker fluctuations between the coherent roll structures. Furthermore, we can see that the near boundary structures (also associated with strong heat fluxes), are qualitatively similar to the CF case. To summarize, MC exhibits large scale convective interior plumes that scale with 2h, characteristic of FC, but also exhibits significant activity near the solid boundaries, characteristic of CF. However, due to the mean shear, the convective plumes align in the streamwise direction, creating large scale superstructures. We now highlight the role that these structures have on the transport of particles into the interior of the channel in MC, as compared to FC and CF. Figure <ref> shows the snapshots of the high heat flux contours discussed in <ref>, except here we color the contours based on direction of the vertical velocity fluctuation (red indicates positive fluid velocities while blue indicates negative fluid velocities). Overlaid are particles in a slab of non-dimensional thickness 0.02. It is clear from slices in the CF case, <ref>(a), that particles are ejected by the structures near the boundary layer, resulting in some clustering in the mid-plane, figure <ref>(b) <cit.>. When compared to the FC case, figures <ref>(c-d), particles are suspended much higher in the domain when they coincide with a strong updraft, shown in figure <ref>(c). However, these updrafts are much less efficient at generating ejections near the boundary, resulting in far fewer particles at the mid-plane, shown in figure <ref>(d). However, since both the near-boundary instabilities and interior convective motions are present in MC, figure <ref>(e-f), these structures can interact, and we see a marked increase in the number of particles at the mid-plane in the domain, as well as strong spatial clustering within the longitudinally aligned updraft. This observation suggests a coupling between the canonical near-wall ejections and the large scale convective plumes in the interior, which act cooperatively to lift particles away from the solid wall. In CF and MC, particles are initially ejected via the near-wall instabilities, which we refer to as primary ejection. In the event a primary ejection aligns with a large scale interior plume, particles are entrained into the plume and experience significant vertical transport, termed secondary ejection. Conversely, at these values of St^+ and Sv^+, particles that do not align with a convective plume after primary ejection instead settle back towards the lower boundary. Finally, since the near-boundary instabilities in the FC case are much weaker for this Ra, the mechanism responsible for primary ejection is much weaker (as the interior convective instabilities are much less efficient at removing them from this layer), resulting in far fewer particles suspended in the domain interior. 
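For reference, the shaded regions in these figures follow from a simple threshold on the instantaneous turbulent heat flux; a minimal sketch of that masking step is given below, with synthetic arrays standing in for the DNS fields.

import numpy as np

def strong_heat_flux_mask(w_fluc, theta_fluc, kappa, dT, h, Ra):
    # Mark points where w' theta' > 0.12 * kappa * dT * (2h)^(-1) * Ra^(1/3),
    # the criterion used above to isolate strong updrafts and downdrafts.
    threshold = 0.12 * kappa * dT / (2.0 * h) * Ra ** (1.0 / 3.0)
    return w_fluc * theta_fluc > threshold

# Synthetic stand-ins for the fluctuating DNS fields (illustration only).
rng = np.random.default_rng(0)
w_fluc, theta_fluc = rng.standard_normal((2, 64, 64, 64))
mask = strong_heat_flux_mask(w_fluc, theta_fluc, kappa=2.1e-5, dT=0.5, h=0.5, Ra=1.0e7)
print("fraction of points above threshold:", mask.mean())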
Figures <ref> and <ref> serve to qualitatively highlight the multi-scale flow features and associated the particle response within the domain. In figure <ref>, we use conditional averaging techniques to show slab-wise correlation coefficients for each case discussed in table <ref>. We define the conditional correlation coefficient as R(α,β|γ) = ⟨αβ|γ⟩/√(⟨α^2|γ⟩⟨β^2|γ⟩), where α and β are dummy variables used for demonstration in (<ref>), and ⟨·⟩ indicates a slab-wise ensemble average. Figure <ref>(a) and (b) show profiles of the correlation coefficient between the vertical particle velocity and the Reynolds stresses conditioned on ejection events (i.e. u'<0 and w'>0, where a prime indicates a fluctuating quantity) across the bottom half of the domain, <ref>(a), and in the bottom 20% of the domain, <ref>(b). Following the notation of <cit.>, we use (u'w')^II to represent this conditioning. Profiles in figure <ref>(a) show a moderate to strong correlation between particle velocities and Reynolds stresses in the interior for CF, MC, and MC-Sc, and weaker correlation for FC. We can see by consulting figure <ref>(b) that there is rapid increase in correlation near the bottom boundary. This demonstrates that particle velocities are uncorrelated with ejections very near the solid boundary due to the artificial Brownian diffusion (i.e. z/h<0.01), followed by a substantial increase in their correlation as they approach primary ejection regions from below. These ejection events manifest themselves as hairpin vortices associated with low-speed streaks, and are present in both CF, MC, and MC-Sc (MC-Sc actually shows a slightly stronger correlation, as expected given the lower artificial diffusion). Moreover, the correlation is markedly lower in FC, as those same low speed streak structures are absent in that case, as discussed before. Figure <ref>(c) shows the particle vertical velocities conditioned on positive heat fluxes (i.e. w'>0 and θ'>0). We can see there is a strong correlation in the interior MC and MC-Sc, and a weaker correlation in FC. The correlation is here is likely linked to the spatial coherence of the interior plumes. Moreover, in MC and MC-Sc, these superstructures are typically associated with regions of low horizontal velocity <cit.>, which explains the strong correlation in the interior, shown in figure <ref>(a). These figures provide quantitative evidence to the hypothesis that primary suspension occurs because of the ejection events associated with hairpin voritices and Reynolds stress, and secondary suspension within the interior occurs as particles are entrained by streamwise oriented convective rolls which are responsible for the bulk heat flux. Importantly, this coupling is only present in mixed convection, and absent in both limiting cases, and has implications for the global profiles of concentration throughout the domain. The net effect of the coupling present in MC is that there is a significant and non-monotonic increase in the interior concentration relative to both limiting cases, shown in figure <ref>(d). While the concentration profiles for MC and CF are coincident below z/h≈ 0.25, we see a departure of MC above this height, leading to progressively larger differences in the interior concentration, and even up to and order of magnitude at the mid-plane. 
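For completeness, the conditional correlation coefficient defined in (<ref>) reduces to a masked second-moment calculation; the sketch below (with illustrative variable names) shows one way to evaluate it, conditioning on ejection events u'<0 and w'>0.

import numpy as np

def conditional_correlation(alpha, beta, condition):
    # R(alpha, beta | condition) = <alpha beta | c> / sqrt(<alpha^2 | c> <beta^2 | c>),
    # where the ensemble average runs only over points satisfying the boolean condition.
    a, b = alpha[condition], beta[condition]
    return np.mean(a * b) / np.sqrt(np.mean(a ** 2) * np.mean(b ** 2))

# Synthetic stand-ins for fields sampled at particle locations within one slab.
rng = np.random.default_rng(1)
u_fluc, w_fluc, v_p3 = rng.standard_normal((3, 10_000))
ejections = (u_fluc < 0) & (w_fluc > 0)
print(conditional_correlation(v_p3, u_fluc * w_fluc, ejections))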
Moreover, we have included a case with Ri_τ =5, demonstrating that the interior entrainment by secondary suspension remains weak until Ri_τ becomes large enough that the interior convective plumes align — a process that happens very rapidly between Ri_τ = 5 and Ri_τ = 43. Beyond this value of Ri_τ, we note the similarity in shape between MC, MC-Sc, and CF in the interior, even as the absolute concentration changes. These results highlight the cooperative action of primary suspension (by ejections) and secondary suspension (by aligned convective plumes): both working together leads to more efficient distribution throughout the full boundary layer than either one alone. § CONCLUSIONS AND DISCUSSION In this work, we have provided the first demonstration of the role of turbulent mixed convection in the suspension of strongly settling Lagrangian particles. Through the use of coupled Eulerian-Lagrangian direct numerical simulations of turbulent mixed convection, achieved here with Ra≈ 10^7 and Re_τ≈ 500, we provided evidence for a distinct, multi-scale mechanism that can serve as an efficient means of inertial particle suspension (characterized by Sv_η∼𝒪(1) and St_η∼𝒪(1)). We showed that when particles are ejected near the solid boundary (termed primary suspension), they may become entrained within streamwise aligned interior convective plumes (termed secondary suspension), leading to significant vertical transport, and clustering. By considering correlation coefficients of vertical particle velocities conditioned on ejections and regions of strong positive heat fluxes, our observations (at Ri_τ≈ 40) suggest that the action of the near-boundary ejections and the streamwise aligned interior plumes couple together to lead to an efficient suspension mechanism for Lagrangian particles. This occurs despite the large particle settling velocity and inertia, leading to an increase in midplane concentration by roughly an order of magnitude when compared to the limiting cases. Importantly, this cooperative action occurs due to the strong alignment of the convective plumes and is effectively absent in the limiting cases (pure channel flow and free convection), and when convection is present, but weak enough that convective plumes do not become aligned (observed at Ri_τ≈ 5 here). These results have important implications for both fundamental and applied studies on the influence of mixed convection on particles, including a potentially strong influence on clustering and dispersion, as well as determining particle residence times. Moreover, it is known that these fluid structures appear ubiquitously at the field scale in the planetary boundary layer <cit.>, so more work should take place to understand these cooperative dispersed phase transport mechanisms in the natural environment. § ACKNOWLEDGEMENTS The authors would like to acknowledge Grant No. W911NF2220222 from the U.S. Army Research Office, and the Center for Research Computing at the University of Notre Dame. The authors report no conflict of interest. jfm
http://arxiv.org/abs/2408.11683v1
20240821150629
Faster Quantum Simulation Of Markovian Open Quantum Systems Via Randomisation
[ "I. J. David", "I. Sinayskiy", "F. Petruccione" ]
quant-ph
[ "quant-ph" ]
§ ABSTRACT When simulating the dynamics of open quantum systems with quantum computers, it is essential to accurately approximate the system's behaviour while preserving the physicality of its evolution. Traditionally, for Markovian open quantum systems, this has been achieved using first and second-order Trotter-Suzuki product formulas or probabilistic algorithms. In this work, we introduce novel non-probabilistic algorithms for simulating Markovian open quantum systems using randomisation. Our methods, including first and second-order randomised Trotter-Suzuki formulas and the QDRIFT channel, not only maintain the physicality of the system's evolution but also enhance the scalability and precision of quantum simulations. We derive error bounds and step count limits for these techniques, bypassing the need for the mixing lemma typically employed in Hamiltonian simulation proofs. We also present two implementation approaches for these randomised algorithms: classical sampling and quantum forking, demonstrating their gate complexity advantages over deterministic Trotter-Suzuki product formulas. This work is the first to apply randomisation techniques to the simulation of open quantum systems, highlighting their potential to enable faster and more accurate simulations. § INTRODUCTION Simulating the dynamics of quantum systems is among the earliest proposed applications for quantum computers, independently suggested by Feynman and Manin <cit.>. The fundamental premise is that quantum computers can potentially offer a computational advantage over classical computers in simulating quantum dynamics by leveraging core principles of quantum mechanics such as superposition, entanglement, and quantum parallelism <cit.>. The pioneering algorithm for simulating the dynamics of a closed quantum system was introduced by Lloyd <cit.>. This type of simulation, known as Hamiltonian simulation, aims to construct an approximation of a unitary evolution generated by a system's Hamiltonian, with the approximation being efficiently implementable on a quantum computer up to a chosen precision. The efficiency of this implementation is primarily determined by the number of quantum gates required, with a goal of achieving optimal gate complexity in relation to all dependent parameters <cit.>. The most prevalent method for Hamiltonian simulation employs Trotter-Suzuki (TS) product formulas <cit.> to approximate the unitary evolution generated by the Hamiltonian. Extensive research has been conducted on these TS product formulas <cit.> to evaluate their efficiency in simulating quantum dynamics. Despite their utility, significant advancements have been made in the field, resulting in novel quantum algorithms that enhance precision and improve the scaling of gate complexity compared to TS product formulas <cit.>. Notable among these algorithms are Linear Combination of Unitaries (LCU) <cit.>, Quantum Signal Processing (QSP) <cit.>, truncated Taylor series <cit.>, and randomisation-based approaches such as randomised TS product formulas <cit.> and QDRIFT <cit.>. While substantial progress has been made in simulating closed quantum systems, there has been comparatively less advancement in developing algorithms for simulating open quantum systems (OQS).
An OQS is characterised by its interaction with the environment, allowing for the exchange of energy and information <cit.>. Typically, the focus in OQS research is on the system's dynamics without needing a comprehensive understanding of the environment's dynamics, which is foundational to the theoretical framework of OQS <cit.>. This study is confined to Markovian OQS, where systems exhibit no memory effects, and the dynamics are governed by the Gorini-Kossakowski-Sudarshan-Lindblad (GKSL) master equation <cit.>. The dynamical evolution of an OQS is represented by a quantum channel (or dynamical map), which is a Completely Positive and Trace Preserving (CPTP) map generated by the GKSL generator. Efficiently simulating the dynamics of an OQS on a quantum computer thus requires approximating the quantum channel that describes the system's evolution while preserving its CPTP nature. One of the principal challenges in simulating OQS is ensuring that the constructed approximation remains a CPTP map, maintaining the physicality of the evolution. First and second-order TS product formulas have been utilised to simulate OQS <cit.>, as they guarantee that the approximations are CPTP maps. However, higher-order TS product formulas are infeasible due to their recursive construction resulting in non-CPTP maps. Alternative algorithms for simulating OQS have been proposed <cit.>, most of which are probabilistic, introducing a non-zero probability of failure. Despite their precision and gate complexity improvements, the quest for non-probabilistic algorithms for OQS simulation remains critical. Recent literature on Hamiltonian simulation suggests that algorithms based on randomisation <cit.> might offer a viable pathway to developing non-probabilistic algorithms for OQS simulation, enhancing precision and gate complexity. This work proposes the use of randomisation to simulate Markovian OQS. Specifically, we develop randomised TS product formulas up to the second order for this purpose. These randomised formulas improve the scaling of gate complexity concerning the number of terms in the generator of the quantum channel, thereby facilitating faster simulation of Markovian OQS. We derive error bounds and number of step bounds for these randomisation techniques without relying on the mixing lemma by Campbell and Hastings <cit.>, which is used in Hamiltonian simulation proofs. Additionally, we explore the application of the QDRIFT channel <cit.> for simulating Markovian OQS, providing error and step count bounds for this approach. The QDRIFT channel's gate complexity is independent of the number of terms in the generator, making it particularly suitable for large systems or those with numerous interacting components, such as the 2D dissipative Jaynes-Cummings model with neighbour-neighbour interaction <cit.>. This paper also introduces two novel methods to implement the randomised TS product formulas and the QDRIFT channel on a quantum computer. The first method involves Classical Sampling (CS) to construct a gate set by sampling from a discrete distribution on a classical computer. The second method employs Quantum Forking (QF) <cit.> to perform sampling directly on a quantum computer and implement the randomisation. We demonstrate that the gate complexities for circuits constructed using both CS and QF are efficient relative to the number of terms in the generator, and they outperform the deterministic TS product formulas outlined in Section 2. 
The structure of this paper is as follows: Section 2 provides preliminary knowledge necessary for understanding Markovian OQS and background information on deterministic TS product formulas for simulating these systems. Section 3 details the methodology for using first-order randomised TS product formulas to approximate OQS evolution and computes the associated precision and step bounds. Section 4 discusses the second-order randomised TS product formula. Section 5 covers the QDRIFT protocol for simulating Markovian OQS. Section 6 describes the implementation of these methods using both CS and QF on a quantum computer. Section 7 compares the gate complexities of circuits for randomised TS product formulas and the QDRIFT channel, constructed using CS and QF, with those for deterministic TS product formulas. Finally, Section 8 summarises the findings and presents concluding remarks. § PRELIMINARIES In this section we shall provide some background information on OQS and quantum channels that will be necessary for this work. We will also recall some basic definitions and results for quantum simulation of Markovian OQS using deterministic TS product formulas. §.§ Background The state space of a d-dimensional quantum systems is ℋ_s≅ℂ^d. A quantum state of a d-dimensional quantum system is described by a density operators ρ∈𝒮(ℋ_s) ⊂ℬ(ℋ_s), where 𝒮(ℋ_s) is the space of states on the Hilbert space this is the space of operators on ℋ_s that satisfy, ρ≥ 0, tr(ρ)=1, ρ=ρ^†, and ℬ(ℋ_s) is the set of all bounded linear operators acting on the Hilbert space ℋ_s. The space of states 𝒮(ℋ_s) can have a matrix representation so that the density operators can be represented by d× d matrices which satisfy the properties in (<ref>). Quantum channels provide a general framework for describing the evolution of quantum states. These are completely positive and trace preserving (CPTP) maps <cit.>, T:𝒮(ℋ_s)→𝒮(ℋ_s). However we are interested in Markovian continuous time evolution. Which is described by a continuous single parameter semigroup of quantum channels {T_t}, which satisfy: T_tT_s=T_t+s, T_0=1 t,s ∈ℝ_+. Also if we introduce time dependence to the state of our system, then the density matrix, ρ(t) which describes the quantum state at some time t ≥ 0 can be written as, ρ(t)=T_t(ρ(0))=T_tρ(0). Every semigroup {T_t} has generator ℒ such that, T_t=e^tℒ=∑_j=0^∞t^j/j!ℒ^j, where ℒ satisfies the master equation, d/dtρ(t)=ℒ(ρ(t)). The generator ℒ is the generator of a continuous one parameter Markovian semigroup {T_t} if and only if it can be written in the celebrated Gorrini-Kossakowski-Sudarshan-Lindblad (GKSL) form <cit.>, ℒ(ρ)= -i[H,ρ] +∑_k=2^Mγ_k(L_kρ L_k^†-1/2{ L_k^†L_k,ρ}), where H=H^†∈ℳ_d(ℂ) is the Hamiltonian and ℳ_d(ℂ) is the set of all d× d matrices with complex entries, γ_k≥ 0 are the decay rates, L_k∈ℳ_d(ℂ) are called the jump operators and M=d^2. It will be useful to write the generator in a more compact form, ℒ(ρ)=ℒ_1(ρ)+∑_k=2^Mγ_kℒ_k(ρ)=∑_k=1^Mγ_kℒ_k(ρ), where γ_1=1 and, ℒ_1(ρ)=-i[H,ρ], ℒ_k(ρ)=L_kρ L_k^†-1/2{ L_k^†L_k,ρ}, for k=2,...,M. Sometimes it will be convenient to absorb the decay rates γ_k into each ℒ_k, by defining ℒ̂_k=γ_kℒ_k the generator can be written as, ℒ(ρ)=∑_k=1^Mℒ̂_k(ρ). Throughout this work we will need to construct approximations of a quantum channel and measure the precision of our approximation to the ideal quantum channels. Since quantum channels are superoperators which act on the space of operators we need to use a superoperator norm. 
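To make the action of the generator concrete, the following minimal Python sketch evaluates ℒ(ρ) in the GKSL form above for an arbitrary Hamiltonian and set of jump operators; the single-qubit example at the end is purely illustrative and is not a system considered in this work.

import numpy as np

def gksl_generator(rho, H, jump_ops, rates):
    # L(rho) = -i[H, rho] + sum_k gamma_k (L_k rho L_k^dag - 1/2 {L_k^dag L_k, rho})
    out = -1j * (H @ rho - rho @ H)
    for gamma, L in zip(rates, jump_ops):
        LdL = L.conj().T @ L
        out += gamma * (L @ rho @ L.conj().T - 0.5 * (LdL @ rho + rho @ LdL))
    return out

# Illustrative single-qubit example: sigma_z Hamiltonian and one decay channel.
H = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
L1 = np.array([[0.0, 1.0], [0.0, 0.0]], dtype=complex)
rho = np.array([[0.3, 0.1], [0.1, 0.7]], dtype=complex)
print(gksl_generator(rho, H, [L1], [0.5]))   # output is Hermitian and traceless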
This work will make use of the diamond norm as a measure of the precision of our approximation <cit.>. The diamond norm of a superoperator V: ℬ(ℋ_s) →ℬ(ℋ_s) is, V=sup_A; A_1=1(V⊗1)(A)_1, where 1: ℬ(ℋ_s) →ℬ(ℋ_s) is the identity superoperator, A ∈ℬ(ℋ_s) is an operator and ·_1 is the trace norm and it is defined as, A_1=tr(√(AA^†)), for some operator A. In most of the literature on simulating open quantum systems <cit.>, the 1→ 1 Schatten norm <cit.> is used as the measure of the precision of the approximation to the ideal quantum channel. However, the diamond norm improves over the 1→ 1 Schatten norm as it takes into account entanglement with respect to a reference system. Using the diamond norm we can immediately find an upper-bound on the generator ℒ in equation (<ref>), ℒ =∑_k=1^Mk≤∑_k=1^Mk. If we define Λ:=max_k{k} then, ℒ≤∑_k=1^Mk≤∑_k=1^MΛ =MΛ. The diamond norm can be related to the trace norm via the following inequality. Given two superoperators V and an operator A we have by definition, A_1≤V. This inequality will play an important role in bounded the distances between states in quantum simulation. The following lemma, the proof of which can be found in Appendix <ref>, will help derive error bounds between a quantum channel T_t and its approximation. Given two quantum channels T and V and some positive integer N, T^N-V^N≤ NT-V. Now that we have outlined some background information about OQS, in the next sub-section we will briefly describe digital quantum simulation and show how we can simulate Markovian OQS using deterministic TS product formulas. §.§ Deterministic Digital Simulation of Markovian Open Quantum Systems The main goal of digital quantum simulation of Markovian OQS is to find novel ways of construct an approximation T̃_t of the total evolution T_t=exp(tℒ) such that for a given GKSL generator ℒ, a precision ϵ >0, a simulation time t≥ 0 and a distance measure dist(·,·) such that, dist(T_t,T̃_t)≤ϵ and T̃_t can be implemented on a quantum computer efficiently. The most common way to do this is by using Trotter-Suzuki (TS) product formulas <cit.> to approximate the total evolution T_t=exp(t∑_k=1^Mℒ̂_k) as some product of simpler channels such that the total evolution is approximated up to a precision ϵ≥ 0, when using the diamond norm as the distance measure for the quantum channels. The same approach was used in <cit.> however they use the 1→ 1 Schatten norm instead of the diamond norm. We start by dividing the time t ≥ 0 into N∈ℕ steps so that we have a small time step τ =t/N. Next we construct approximations of the total channel T_t using TS product formulas that approximate the channel for a small time step T_τ and then simulate the TS product formula N times. For example, we can approximate the evolution T_τ up to first order by using the following deterministic TS product formula, S_1^(det)(τ)=∏_k=1^Me^τℒ̂_k, where we shall refer to the exponentials of the form exp(τk)=exp(τγ_kℒ_k) as constituent channels. Similarly we can approximate T_τ up to second order using the formula, S_2^(det)(τ)=∏_k=1^Me^τ/2ℒ̂_k∏_k'=M^1e^τ/2ℒ̂_k'. We refer to these product formulas as deterministic because the ordering of the exponentials in S_1^(det) and S_2^(det) is known before hand. We state two theorems that outline how TS product formulas are used to approximate the total evolution T_t. (First Order Deterministic TS Product Formula): Given the generator ℒ, as in equation (<ref>), of a quantum channel T_t and some time t≥ 0. 
Define the first order deterministic TS product formula as, S_1^(det)(τ)=∏_k=1^Me^τℒ̂_k. Let N∈ℕ and Λ :=max_kℒ̂_k. Then, T_t-S_1^(det)(t/N)^N≤ϵ, where, ϵ≥t^2Λ^2M^2/N, N ≥t^2Λ^2M^2/ϵ. The proof of Theorem <ref>. can be found in Appendix <ref> (Second Order Deterministic TS Product Formula): Given the generator ℒ, as in equation (<ref>), of a quantum channel T_t and some time t≥ 0. Define the second order deterministic TS product formula as, S_2^(det)(τ)=∏_k=1^Me^τ/2ℒ̂_k∏_k'=M^1e^τ/2ℒ̂_k'. Let N∈ℕ and Λ :=max_kℒ̂_k. Then, T_t-S_2^(det)(t/N)^N≤ϵ, where, ϵ≥M^3t^3Λ^3/3N^2, N≥M^3/2t^3/2Λ^3/2/√(3ϵ). The proof of Theorem <ref>. can be found in Appendix <ref>. Now that we have shown that we can approximate T_t by TS product formulas, all that is left is to construct a quantum circuit that implements this product formula on a quantum computer. Since our product formula is a quantum channel, one needs to use a unitary dilation, for example the Stinespring representation of the channel <cit.>, to construct a quantum circuit. However, in this work when we draw quantum circuits we will only show the action of the channel on the state as this keeps the diagrams concise but it should be noted that unitary dilation is needed to implement them in practice. To make clear how the diagrams should be interpreted, we note that wires in our circuits correspond to density matrices of a subsystem and gates correspond to quantum channels, thereafter the usual rules of quantum circuits may be inferred. As an illustrative example consider the deterministic product formula S_2^(det)(t/N)^N it approximates the total evolution T_t up to a precision ϵ. This means that an output of the quantum circuit that implements S_2^(det) is a density matrix ρ̃(t) that is a distance ϵ/2 from the density matrix ρ(t). One can easily see this by using the definition of the trace distance between states i.e. d_tr(ρ(t),ρ̃(t)) and inequality (<ref>), d_tr(ρ(t),ρ̃(t)) =1/2ρ(t)-ρ̃(t)_1, = 1/2T_t(ρ(0))-S_2^(det)(t/N)^Nρ(0)_1, ≤1/2T_t-S_2^(det)(t/N)^N, ≤ϵ/2, where ϵ≥ (MtΛ)^3/3N^2 as in Theorem <ref>. Figure <ref>. shows how we can use S_2^(det)(t/N)^N to simulate the evolution T_t. At this point we need to analyse the gate complexity of the quantum circuits that we have constructed that will implement the deterministic TS product formulas. To do this we start by defining the gate complexity of our circuits as the number of simple channels that are implemented in each quantum circuit. For the case of deterministic TS product formulas these simple channels are just the exponentials of the form exp(τk). For example, consider the product formula S_1^(det)(τ) in (<ref>), this formula is the product of M exponentials. If we consider the quantum circuit that implements S_1^(det)(τ)^N to approximate ρ(t) to a precision ϵ, then we have N=⌈ t^2Λ^2M^2/ϵ⌉ applications of S_1^(det)(τ) which implies that we have to implement ⌈ t^2Λ^2M^2/ϵ⌉ M exponentials. Where ⌈·⌉ denotes the ceiling function and it is the smallest integer greater than its argument. If we denote the gate complexity for the circuit that implements S_1^(det) as g_1^(det) then the complexity is given by, g_1^(det)=O(t^2Λ^2M^3/ϵ). The second order formula S_2^(det)(τ), contains 2M exponentials. The quantum circuit that implements S_2^(det)(τ)^N to approximate ρ(t) to a precision ϵ will contain N=⌈ M^3/2t^3/2Λ^3/2/√(3ϵ) ⌉ applications of S_2^(det)(τ). This tells us that we will have to implement 2 ⌈ M^3/2t^3/2Λ^3/2/√(3ϵ) ⌉ M exponentials. 
By denoting the gate complexity of this formula by g_2^(det) we see that, g_2^(det)=O(M^5/2t^3/2Λ^3/2/√(3ϵ)). § FIRST ORDER RANDOMISED TROTTER-SUZUKI FORMULA To define the randomised first order formula, we first need to define two useful formulas. We observe that the product in equation (<ref>), has the constituent channels e^τk arranged from left to right starting from e^τ1 and ending with e^τM, we shall call this the forward direction and define, S_1^→(τ)=S_1^(det)(τ). We also observe that if we choose to reverse this ordering so that the product of constituent channels is arranged from right to left starting with e^τM and ending with e^τ1 then we call this the reversed first order product formula and it is defined as, S_1^←(τ)=∏_k=M^1e^τk. The randomised first order Trotter-Suzuki formula can then be defined as a convex combination of first order Trotter-Suzuki formulas in both the forward and reversed orders i.e. S_1^(ran)(τ)=1/2( S_1^→(τ)+ S_1^←(τ) ) Now consider the form of the first order deterministic Trotter-Suzuki formula i.e. S_1^(det) in equation (<ref>), we know that this approximates the total channel T_t up to first order with a second order error term. However using the formula S_1^(ran), we shall see that we obtain an improvement in both the precision ϵ and the number of steps N. The following theorem shows the error bound and gate complexity for the first order randomised Trotter-Suzuki formula. It should be noted that this result has been proven for the simulation of closed quantum systems (Hamiltonian simulation) <cit.>, however the proof relies on the mixing lemma developed by Campbell and Hastings <cit.>. This lemma is not applicable to open quantum systems we present here a proof for the error bound and complexity of the first order randomised formula that does not rely on the mixing lemma. Given the generator ℒ, as in equation (<ref>), of a quantum channel T_t and some time t≥ 0. Define the first order randomised product formula as in equation (<ref>). Let N∈ℕ and Λ :=max_kℒ̂_k. Then, T_t-S_1^(ran)(t/N)^N≤ϵ, where, ϵ≥M^3t^3Λ^3/3N^2, N≥(tΛ M)^3/2/√(3ϵ). Making use of Lemma 1. we can write, T_t-S_1^(ran)(t/N)^N =T_t/N^N-S_1^(ran)(t/N)^N ≤ NT_τ-S_1^(ran)(τ), where τ=t/N. Now we can bound T_τ-S_1^(ran)(τ), we start by performing a Taylor expansion of T_τ and writing out explicitly the terms up to second order, T_τ=exp(τ∑_k=1^Mk) =1+τ∑_j=1^Mj+τ^2/2(∑_j=1^Mj)^2+∑_n=3^∞τ^n/n!ℒ^n, =1+τ∑_j=1^Mj+τ^2/2∑_j=1^Mj^2+τ^2/2∑_j,k=1 j≠ k^Mjk+∑_n=3^∞τ^n/n!ℒ^n. Then, if we consider S_1^(ran)(τ)=1/2( S_1^→(τ)+ S_1^←(τ) ), we Taylor expand S_1^→(τ) and S_1^←(τ), S_1^→(τ) =∏_k=1^Me^τk, =∑_j_1,...,j_M=0^∞τ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M, =∑_p=0^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M, =1+τ∑_j=1^Mj+τ^2/2∑_j=1^Mj^2+τ^2∑_k,l=1 k<l^Mkl+∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M, and, S_1^←(τ) =∏_k=M^1e^τk, =∑_j_1,...,j_M=0^∞τ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M, =∑_p=0^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M, =1+τ∑_j=1^Mj+τ^2/2∑_j=1^Mj^2+τ^2∑_k,l=1 k>l^Mkl+∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M. 
Then S_1^(ran)(τ) can be written as, S_1^(ran)(τ) =1+τ∑_j=1^Mj+τ/2∑_j=1^Mj^2+τ^2/2∑_k,l=1 k<l^Mkl+τ^2/2∑_k,l=1 k>l^Mkl+ 1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M+1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M, =1+τ∑_j=1^Mj+τ/2∑_j=1^Mj^2+τ^2/2∑_k,l=1 k≠ l^Mkl+ 1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M+1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M. The difference between the total channel T_τ and S_1^(ran)(τ) is, T_τ -S_1^(ran)(τ)=∑_n=3^∞τ^n/n!ℒ^n -1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M-1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M Now we bound the difference between T_τ and S_1^(ran)(τ), T_τ-S_1^(ran)(τ) ≤∑_n=3^∞τ^n/n!ℒ^n +1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M+1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M Now to complete the bound we need to bound the three terms in equation (<ref>). We start with the first term. Noting that, ℒ^n≤ℒ^n≤ M^nΛ^n, then, ∑_n=3^∞τ^n/n!ℒ^n≤∑_n=3^∞τ^n/n!M^nΛ^n. For the second term in equation (<ref>) we use the fact that the diamond norm is sub-multiplicative and i≤Λ for all i=1,...,M to show that, 1^j_12^j_2...M^j_M≤1^j_1...M^j_M≤Λ^j_1+...+j_M. Using equation (<ref>) we bound the second term in equation (<ref>) as, 1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M≤1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!Λ^j_1+...+j_M. In a similar way we find the bound for the third term in equation (<ref>) to be, 1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!M^j_1M-1^j_2...1^j_M≤1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!Λ^j_1+...+j_M. Now using Lemma <ref>. from Appendix <ref>, we can compute the restricted sums in equations (<ref>) and (<ref>). We then get the final bounds on these terms as, 1/2∑_p=3^∞∑_j_1,...,j_M=0 ∑_μj_μ=p^pτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M≤1/2∑_p=3^∞M^pτ^pΛ^p/p!, and, 1/2∑_p=3^∞∑_j_0,...,j_M=0 ∑_μj_μ=p^pτ^j_0+...+j_M/j_0!...j_M!M^j_1M-1^j_2...1^j_M≤1/2∑_p=3^∞M^pτ^pΛ^p/p!. Substituting the bounds obtained in equations (<ref>), (<ref>) and (<ref>) into (<ref>) yields, T_τ-S_1^(ran)(τ) ≤∑_n=3^∞M^nτ^nΛ^n/n! + ∑_p=3^∞M^pτ^pΛ^p/p!, = 2∑_p=3^∞M^pτ^pΛ^p/p!. Using Lemma F.2 from the supplementary information of <cit.>, which states that, fir some y ≥ 0 ∈ℝ and k∈ℕ, ∑_n=k^∞y^n/n!≤y^k/k!exp(y), and setting y=MτΛ we can get a bound on the infinite sum in (<ref>), T_τ-S_1^(ran)(τ) ≤ 2M^3τ^3Λ^3/3!exp(MτΛ). Replacing τ with t/N in (<ref>) and substituting into (<ref>) yields the bound, T_t-S_1^(ran)(t/N)^N ≤M^3t^3Λ^3/3N^2exp(MtΛ/N). Observing that for large enough N, the factor exp[MΛ (t/N)]≈ 1, allows us to simplify the bound, T_t-S_1^(ran)(t/N)^N ≤M^3t^3Λ^3/3N^2. Letting, ϵ≥M^3t^3Λ^3/3N^2, then we have shown that, T_t-S_1^(ran)(t/N)^N ≤ϵ. From (<ref>) we find the bound on N to be, N ≥M^3/2t^3/2Λ^3/2/√(3ϵ), this completes the proof. § SECOND ORDER RANDOMISED PRODUCT FORMULA The second order randomised SLT product formula is not much more complicated than its deterministic counterpart. It is constructed by considering a convex sum of all permutations of the exponentials in the second order SLT product formula. More precisely, consider the symmetric group Sym(M) which is the group of all permutations of the elements of the set {1,...,M}. 
For any permutation σ∈Sym(M) we define, S_2^σ(τ):=∏_j=1^Me^τ/2σ(j)∏_j=M^1e^τ/2σ(k), which is a second order SLT product formula whose exponentials are permuted by the permutation σ∈Sym(M). Using this we can construct the approximation to the total channel T_τ by taking a convex combination of S_2^σ for all σ∈Sym(M), S_2^(ran)(τ)=1/M!∑_σ∈Sym(M) S_2^σ(τ), where the 1/M! is present because |Sym(M)|=M!. The following theorem shows the error bound and the bound on N for the second order randomised SLT product formula. The proof of this theorem is done in a similar way to the proof of the randomised product formulas in <cit.>. However, it also relies on the mixing lemma <cit.>, which as stated before is not applicable to open quantum systems, therefore we give a more direct proof. Given the generator ℒ as in equation (<ref>) of a quantum channel T_t and some time t≥ 0. Define the second order randomised product formula as in equation (<ref>). Let N∈ℕ and Λ:=max_kk. Then, T_t-S_2^(ran)(t/N)^N≤ϵ, where, ϵ≥(2Λ t)^3M^2/N^2, and, N≥(2 Λ t)^3/2M/ϵ^1/2. In proving the following theorem we will need to Taylor expand the formula S_2^(ran), however this may be a complicated and challenging task to do direclty. Instead we want to consider what an arbitrary order term looks like in the expansion. The following lemma, which we shall call the randomisation lemma, will tell us what the s-th order non-degenerate term looks like in the expansion of S_2^(ran), where 0 ≤ s ≤ M. We use the word non-degenerate to describe a product of constituent generators k which is pairwise different. To make out calculations easier we rewrite the second order randomised formula S_2^(ran) in the following general way, S_2^(ran)(τ)=1/M!∑_σ∈Sym(M) exp(q_1τσ(π_1(1)))...exp(q_1τσ(π_1(M)))× exp(q_2τσ(π_2(1)))...exp(q_2τσ(π_2(M))), where τ≥ 0, and q_1,q_2∈ℝ^+ that we will define at a later stage and π_1,π_2∈Sym(M) such that π_1=id and π_2 is defined as, π_2=[ 1 2 ... M; M M-1 ... 1 ]. We now state the randomisation lemma. Given the second order randomised product formula as in equation (<ref>), let s∈ℕ such that 0 ≤ s ≤ M. The s-th order non-degenerate term of S_2^(ran) is, τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1 ... m_s. We start by expanding each exponential in (<ref>) in a Taylor series and we take all possible products of s terms from each of the Taylor expansions. We observe that in (<ref>) the exponentials are arranged in an array of two rows and M columns. Using the indicies κ_1,...,κ_s and l_1,...,l_s to label the rows and columns, respectively, of the exponential from which the terms are chosen. To avoid double counting we ensure that κ_1≤κ_2≤ ... ≤κ_s. Within each row we also want to ensure that we have smaller column indicies first. Since π_1 and π_2 are bijective to get the non-degenerate term we require that l_1,...l_s are pairwise different. The s-th order non-degenerate term is, 1/M!∑_σ∈Sym(M) ∑_κ_1,...,κ_s=1 κ_1≤ ...≤κ_s^2 ∑_π_κ_1(l_1),...,π_κ_s(l_s)=1 pairwise different^M(q_κ_1τσ(π_κ_1(l_1)))...(q_κ_sτσ(π_κ_s(l_s))). 
A direct calculation shows that, 1/M!∑_σ∈Sym(M) ∑_κ_1,...,κ_s=1 κ_1≤ ...≤κ_s^2 ∑_π_κ_1(l_1),...,π_κ_s(l_s)=1 pairwise different^M(q_κ_1τσ(π_κ_1(l_1)))...(q_κ_sτσ(π_κ_s(l_s))) =1/M!∑_σ∈Sym(M) ∑_κ_1,...,κ_s=1 κ_1≤ ...≤κ_s^2 ∑_π_κ_1(l_1),...,π_κ_s(l_s)=1 pairwise different^M ∑_m_1=σ(π_κ_1(l_1)),..., m_s=σ(π_κ_s(l_s))( q_κ_1τm_1)...( q_κ_sτm_s) =1/M!∑_m_1,...,m_s=1 pairwise different^M ∑_κ_1,...,κ_s=1 κ_1≤ ...≤κ_s^2 ∑_π_κ_1(l_1),...,π_κ_s(l_s)=1 pairwise different^M∑_σ∈Sym(M) m_1=σ(π_κ_1(l_1)),..., m_s=σ(π_κ_s(l_s))( q_κ_1τm_1)...( q_κ_sτm_s). The last sum in equation (<ref>) is a permutation of all pairwise different m_1,...,m_s and we observe that for a fixed m_1,...,m_s there are (M-s)! ways we can permute the rest of the indicies so that m_1,...,m_s is unchanged. Therefore we remove this sum and add a factor (M-s)!, leading to the following expression for the s-th order non-degenerate term, (M-s)!/M!∑_m_1,...,m_s=1 pairwise different^M [ ∑_κ_1,...,κ_s=1 κ_1≤ ...≤κ_s^2 ∑_π_κ_1(l_1),...,π_κ_s(l_s)=1 pairwise different^M (q_κ_1τ)...(q_κ_sτ)]m_1...m_s. Now we need to calculate the sum in the brackets in equation (<ref>). This sum depends solely on the row indicies, so by letting r_1 and r_2 be the number of terms picked from row one and row two respectively, we can express the summand as, (q_1τ)^r_1(q_2τ)^r_2. All that remains is to determine the value of the sums which can be found using combinatorial arguments. The number of ways we can choose l_1,...,l_s pairwise different is given by, M(M-1)...(M-(s+1))=M!/(M-s)!. However, when we apply the permutations π_1 and π_2 we may double count some terms. In particular if κ_i=κ_i+1, we have to pick terms from the row κ_i and we must have l_i <l_i+1. This implies that the ordering of π_κ_i(l_i) and π_κ_i+1(l_i+1) is uniquely determined. Altogether, we have then overcounted by a factor of r_1!r_2!, therefore we have, ∑_κ_1,...,κ_s=1 κ_1≤ ...≤κ_s^2 ∑_π_κ_1(l_1),...,π_κ_s(l_s)=1 pairwise different^M (q_κ_1τ)...(q_κ_sτ) =∑_r_1,r_2=0 r_1+r_2=s^sM!/(M-s)!(q_1τ)^r_1(q_2τ)^r_2/r_1!r_2! =M!/(M-s)![(q_1+q_2)τ]^s/s!, where the last equality is a result of the multinomial theorem. Substituting (<ref>) into (<ref>) we get the s-th order non-degenerate term, (M-s)!/M!∑_m_1,...,m_s=1 pairwise different^M M!/(M-s)![(q_1+q_2)τ]^s/s!m_1...m_s. Simplifying this expression yields, [(q_1+q_2)τ]^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1...m_s. Since S_2^(ran)(τ) is atleast first order accurate it implies that for s=1 this term should cancel exactly with the first order term in the Taylor expansion of T_τ, this implies that q_1+q_2=1 allowing us to set q_1=q_2=1/2 as in the definition of S_2^(ran) in equation (<ref>). This leads to the desired expression for the s-th order non-degenerate term i.e. τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1...m_s. Since we want to compute the error between our second order randomised formula and the channel T_τ it will be useful to understand the form of the s-th order non-degenerate term of T_τ. The following lemma gives the s-th order non-degenerate term of T_τ. The s-th order non-degenerate term of T_τ is, τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1...m_s. Consider the Taylor expansion of T_τ, T_τ=∑_s=0^∞τ^s/s!ℒ^s=∑_s=0^∞τ^s/s!(∑_k=1^Mk)^s, we can write (∑_k=1^Mk)^s as, (∑_k=1^Mk)^s=∑_m_1,...,m_s=1^Mm_1...m_s. To obtain the non-degenerate term we require that m_1,...,m_s be pairwise different so the non-degenerate term is, τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1...m_s. 
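As a concrete check of the two statements above, take M=2 and s=2 (with q_1=q_2=1/2). Then S_2^(ran)(τ)=1/2(e^τ/2ℒ̂_1e^τ/2ℒ̂_2e^τ/2ℒ̂_2e^τ/2ℒ̂_1+e^τ/2ℒ̂_2e^τ/2ℒ̂_1e^τ/2ℒ̂_1e^τ/2ℒ̂_2), and collecting the second order terms containing one factor of ℒ̂_1 and one factor of ℒ̂_2 gives τ^2/2(ℒ̂_1ℒ̂_2+ℒ̂_2ℒ̂_1)=τ^2/2!∑_m_1≠ m_2ℒ̂_m_1ℒ̂_m_2, which is exactly the s=2 non-degenerate term predicted by the randomisation lemma and coincides with the corresponding non-degenerate term of T_τ.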
Noting that the arbitrary s-th order term has both a degenerate and non-degenerate part, we then aim to obtain a bound on the norm of the s-th order degenerate terms of T_τ and S_2^(ran), so that we can bound the error. The lemma below gives the bound on the norm of the s-th order degenerate term. Let ℒ be defined as in equation (<ref>) and let Λ:=max_kk. Define the ideal evolution for a small time step τ≥ 0 as T_τ=exp(τℒ) and define the second order randomised formula S_2^(ran) as in equation (<ref>) and let s be a natural number such that 0≤ s ≤ M. The norm of the s-th order degenerate term of the ideal evolution T_τ is at most, τ^sΛ^s/s![M^s-M(M-1)...(M-(s+1))]. The norm of the s-th order degenerate term of S_2^(ran)(τ) is at most, (τΛ)^s/s![M^s-M(M-1)...(M-(s+1))]. The s-th order term in the Taylor expansion of T_τ is, τ^s/s!∑_m_1,...,m_s=1^Mm_1...m_s and the non-degenerate part of (<ref>) is given by Lemma <ref>. By bounding the diamond norm of the s-th order term and the non-degenerate part and taking their difference we can obtain a bound on the norm of the degenerate term. The bound on the norm of the s-th order term of T_τ is, τ^s/s!∑_m_1,...,m_s=1^Mm_1...m_s ≤τ^s/s!∑_m_1,...,m_s=1^Mm_1...m_s, ≤τ^s/s!∑_m_1,...,m_s=1^MΛ^s, =τ^s/s!Λ^sM^s. Now we can bound the s-th order non-degenerate term of T_τ as, τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1...m_s ≤τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1...m_s, ≤τ^s/s!∑_m_1,...,m_s=1 pairwise different^MΛ^s, =τ^s/s!Λ^s[M(M-1)...(M-(s+1))], where the last equality is obtained by computing the sum which is equal to the number of all possible ways we can arrange m_1,...,m_s so that they are pairwise different. This implies that the bound on the norm of the degenerate term of T_τ is, τ^sΛ^s/s![M^s-M(M-1)...(M-(s+1))]. The norm of the s-th order degenerate term of S_2^(ran)(τ) is bounded in the same way as the norm of the s-th order degenerate term of T_τ. We consider the s-th order non-degenerate term of S_2^(ran)(τ) in equation (<ref>) calculated in Lemma <ref>. The bound on the norm is, τ^s/s!∑_m_1,...,m_s=1 pairwise different^Mm_1 ... m_s≤(Λτ)^s/s!M(M-1)...(M-(s+1)). To bound the s-th order term of S_2^(ran)(τ) we applying the following strategy. First, consider the general expression for S_2^(ran)(τ) in equation (<ref>), replace each summand of the generator ℒ by Λ and replace each q_k by 1/2. Next, we expand all exponentials into their Taylor series and extract the s-th order term. In other words, we extract the s-th order term of, 1/M!∑_σ∈Sym(M)exp(MΛτ), to get, (MΛτ)^s/s!. Taking the difference of (<ref>) and (<ref>) yields the bound on the norm of the s-th order degenerate term of S_2^(ran)(τ) i.e. (Λτ)^s/s![M^s-M(M-1)...(M-(s+1))], which completes the proof. Now we wish to bound the error for an for a fixed order term in the difference between the ideal evolution and the second order randomised formula. The lemma below will show this error bound. Given a generator ℒ as in equation (<ref>) and the ideal evolution T_τ for some small time step τ≥ 0 as well as the second order randomised formula S_2^(ran)(τ) defined in equation (<ref>). The error of the approximation, T_τ-S_2^(ran)(τ)=exp(τℒ)-1/M!∑_σ∈Sym(M)S_2^σ(τ), is at most, 0, for 0≤ s≤ 2, (Λτ)^s M^s-1/(s-2)!, for s> 2. For s≤ 2 the formula S_2^σ(τ) is exact so it cancels with all the second order terms and since we sum convexly in the second order randomised formula. All second order terms in, 1/M!∑_σ∈Sym(M)S_2^σ(τ), cancel with all second order terms in T_τ. 
Therefore the error is zero for s≤ 2. For s>2 we need to bound the error in of the s-th order term in the difference T_τ-S_2^(ran)(τ). We know from Lemma <ref>. and Lemma <ref>. that the s-th order non-degenerate terms of T_τ and S_2^(ran)(τ) are the same and they will cancel when we take the difference T_τ-S_2^(ran)(τ). This means that the only terms we need to consider are the s-th order degenerate terms, from Lemma <ref>. we see that the bound on the error of the difference is the sum of the bounds of the degenerate terms i.e. 2(τΛ)^s/s![M^s-M(M-1)...(M-(s+1))]. Now we need to obtain a bound for the factor [M^s-M(M-1)...(M-(s+1))]. Using <cit.> we can obtain the following bound, [M^s-M(M-1)...(M-(s+1))]≤[ s; 2 ]M^s-1=s!/2!(s-2)!M^s-1. This leads us to the error bound for s>2, 2(τΛ)^s/s![M^s-M(M-1)...(M-(s+1))]≤ 2(τΛ)^s/s!s!/2!(s-2)!M^s-1=(τΛ)^sM^s-1/(s-2)!, which is the desired bound. We are now able to prove Theorem <ref> and obtain the bound on the precision ϵ and number of steps N for the second order randomised formula S_2^(ran). (of Theorem <ref>.) We start by applying Lemma <ref>. to (<ref>) which yields, T_t-S_2^(ran)(t/N)^N≤ NT_τ-S_2^(ran)(τ). From Lemma <ref> we have that the bound on T_τ-S_2^(ran)(τ) is, for s=3, T_τ-S_2^(ran)(τ)≤∑_s=3^∞(Λτ)^s M^s-1/(s-2)! Making use of Lemma F.2 from the supplementary information of <cit.>, which states that, for some y ≥ 0 ∈ℝ and k ∈ℕ, ∑_n=k^∞y^n/n!≤y^k/k!exp(y), we can bound the sum in equation by setting y=Λτ M and k=3, T_τ-S_2^(ran)(τ)≤(Λτ)^3M^2/(3-2)!exp(Λτ M). Using the fact that τ=t/N we have, T_τ-S_2^(ran)(τ)≤(Λ t)^3M^2/N^3exp(Λ M t/N), for large enough N we can write exp(Λ M t/N) ≈ 1, hence the bound is, T_τ-S_2^(ran)(τ)≤(Λ t)^3M^2/N^3. Substituting this into (<ref>) yields, T_t-S_2^(ran)(t/N)^N≤ N (Λ t)^3M^2/N^3=(Λ t)^3M^2/N^2 Now let ϵ≥ 0 such that, ϵ≥(Λ t)^3M^2/N^2, which leads to the bound, N≥(Λ t)^3/2M/ϵ^1/2. Which completes the proof. § THE QDRIFT CHANNEL FOR SIMULATING OQS In this section we will outline how we can use the QDRIFT channel <cit.> to simulate Markovian OQS. Consider the generator ℒ in equation (<ref>), this form shall be used throughout the rest of this section. We start by defining the following quantity Γ, which is the sum of all the decay rates in ℒ i.e. Γ=∑_k=1^Mγ_k. To define the QDRIFT channel we must define the small time step ω=t Γ/N. The QDRIFT channel, probabilistically implements a constituent channel e^ωℒ_k with some probability p_k which depends on the decay rate γ_k in the generator in the following way, p_k=γ_k/Γ, it is evident from the definition of p_k that ∑_k=1^Mp_k=1 and that for larger γ_k the QDRIFT channel is more likely to apply e^ωℒ_k. While this process is random, the probabilities p_k, have a bias built into them so that with many repetitions the evolution stochastically drifts towards the ideal evolution T_t. Since each constituent channel is sampled independently the the process is entirely Markovian and we can consider the evolution resulting from a single random operation. The QDRIFT channel, shall be denoted by ℰ^(QD)_ω has the following form, ℰ^(QD)_ω(ρ)=∑_k=1^Mp_ke^ωℒ_k. We now state the following theorem which outlines how the QDRFIT channel approximates T_t, we shall save the discussion of how to implement this QDRIFT channel for a later section. Given the generator ℒ as in equation (<ref>) of a quantum channel T_t and some time t≥ 0. Let N∈ℕ and Ω := max_kℒ_k, Then, T_t-(ℰ^(QD)_ω)^N=T_t-(∑_k=1^Mp_ke^ωℒ_k)^N≤ϵ, where, ϵ≥t^2Γ^2Ω^2/N, and, N ≥t^2Γ^2Ω^2/ϵ. Using Lemma <ref>. 
we see that, T_t-(ℰ^(QD)_ω)^N =exp(tℒ)-(ℰ^(QD)_ω)^N, =exp(t/N∑_k=1^Mγ_kℒ_k)^N-(∑_k=1^Mp_kexp(ωℒ_k) )^N ≤ Nexp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) ). To complete the proof it remains to find a bound on exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) ). We start by expanding exp(t/N∑_k=1^Mγ_kℒ_k) in a Taylor series, exp(t/N∑_k=1^Mγ_kℒ_k) =1+t/N∑_k=1^Mγ_kℒ_k+∑_ν=2^∞(t/N)^ν/ν!(∑_k=1^Mγ_kℒ_k)^ν =1+t/Nℒ+∑_ν=2^∞(t/N)^ν/ν!ℒ^ν. We can also expand the exponential in the term, ∑_k=1^Mp_kexp(ωℒ_k), in a Taylor series as follows, ∑_k=1^Mp_kexp(ωℒ_k) =∑_k=1^Mp_k(∑_ν=0^∞ω^ν/ν!ℒ_k^ν), =∑_k=1^Mp_k(1+ωℒ_k+∑_ν=2^∞ω^ν/ν!ℒ_k^ν), =1+∑_k=1^Mp_kωℒ_k+∑_k=1^Mp_k∑_ν=2^∞ω^ν/ν!ℒ_k^ν, =1+∑_k=1^Mγ_k/Γωℒ_k+∑_k=1^Mγ_k/Γ∑_ν=2^∞ω^ν/ν!ℒ_k^ν, =1+ω/Γℒ+∑_k=1^Mγ_k/Γ∑_ν=2^∞ω^ν/ν!ℒ_k^ν. We can now compute the norm exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) ) as follows, if we use the fact that ω=tΓ/N then we see that the zeroth and first order terms in (<ref>) and (<ref>) cancel leaving us with, exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) )=∑_ν=2^∞t^ν/N^νν!ℒ^ν-∑_k=1^Mγ_k/Γ∑_ν=2^∞ω^ν/ν!ℒ_k^ν. Using the sub-additive and sub-multiplicative properties of the diamond norm one obtains, exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) )≤∑_ν=2^∞t^ν/N^νν!ℒ^ν+∑_k=1^Mγ_k/Γ∑_ν=2^∞ω^ν/ν!ℒ_k^ν. At this point we wish to find bounds on the diamond norm of the generator ℒ and the superoperators ℒ_k. By definition we have that ℒ_k≤Ω which yields, ℒ_k^ν≤Ω^ν. For the generator ℒ we have, ℒ=∑_k=1^Mγ_kℒ_k≤∑_k=1^Mγ_kℒ_k≤∑_k=1^Mγ_kΩ=ΓΩ, which implies that, ℒ^ν≤Γ^νΩ^ν. Substituting (<ref>) and (<ref>) into (<ref>) produces, exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) ) ≤∑_ν=2^∞t^νΓ^νΩ^ν/N^νν!+∑_k=1^Mγ_k/Γ∑_ν=2^∞ω^ν/ν!Ω^ν, =∑_ν=2^∞t^νΓ^νΩ^ν/N^νν!+Γ/Γ∑_ν=2^∞t^νΓ^νΩ^ν/N^νν! =2∑_ν=2^∞t^νΓ^νΩ^ν/N^νν!. Making use of Lemma F.2 from the supplementary information of <cit.>, we can bound the sum in (<ref>) as, exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) ) ≤ 2 t^2Γ^2Ω^2/2!N^2exp(tΓΩ/N). Now for large enough N we can approximate the exponential by 1 i.e. exp(tΓΩ/N)≈ 1 which gives the bound, exp(t/N∑_k=1^Mγ_kℒ_k)-(∑_k=1^Mp_kexp(ωℒ_k) ) ≤t^2Γ^2Ω^2/N^2. Using the bound obtained in (<ref>) in the inequality (<ref>) yields, T_t-(ℰ^(QD)_ω)^N ≤ N t^2Γ^2Ω^2/N^2=t^2Γ^2Ω^2/N. Now choosing ϵ≥ 0 such that, ϵ≥t^2Γ^2Ω^2/N, gives the desired bound, T_t-(ℰ^(QD)_ω)^N ≤ϵ. Also, from (<ref>) we have, N ≥t^2Γ^2Ω^2/ϵ, which completes the proof. § IMPLEMENTATION ON A QUANTUM COMPUTER In the previous sections we have derived bounds for the error ϵ and the number of steps N for each of the randomised formulas as well as the QDRIFT channel. In this section we show how they can be implemented on a quantum computer. First, we discuss how to construct a quantum circuit that implements the randomised formulas and QDRIFT using Classical Sampling (CS) to construct a gate set. Then, we will discuss how Quantum Forking (QF) <cit.> can be used to implement the randomised formula S_1^(ran) and the QDRIFT channel on a quantum computer without the need for classical sampling. §.§ Implementation of S_1^(ran), S_2^(ran) and QDRIFT Channel Using Classical Sampling §.§.§ Implementation of S_1^(ran) with CS Consider the first order randomised product formula S_1^(ran)(τ)^N, to implement this formula we need N applications of S_1^(ran)(τ). However, the definition of S_1^(ran)(τ) in (<ref>) tells us that when we apply it to a state ρ(0) it will apply the channel S_1^→(τ) with probability 1/2 and S_1^←(τ) with probability 1/2. 
This gives us a way to construct a circuit for S_1^(ran)(τ). If we define random variables j_l∈{0,1} for l=1,...,N with p(j_l=0)=p(j_l=1)=1/2, and assign S_1^→(τ)≡ S_1^(0)(τ) and S_1^←(τ)≡ S_1^(1)(τ), then we can construct a gate set, denoted by G_1^(ran), by iteratively sampling each j_l and appending S_1^(j_l)(τ) to G_1^(ran). Applying this gate set to an initial state ρ(0) yields an output state ρ̃(t) which approximates ρ(t) to a precision ϵ=(MtΛ)^3/3N^2. The pseudocode for the algorithm that constructs this gate set is shown in Algorithm <ref>. The circuit diagram in Figure <ref> summarises the algorithm for implementing S_1^(ran)(τ)^N. In Figure <ref> we see that each channel S_1^(j_l) depends on the value of j_l obtained from classical sampling. We now determine the gate complexity of the circuit implemented by G_1^(ran). To do this we count the number of simple channels, which in this case is the number of exponentials exp(τℒ_k), in the gate set. Since there are ⌈ (MtΛ)^3/2/√(3ϵ)⌉ TS product formulas in G_1^(ran), each containing M exponentials, the gate complexity scales as O(M^5/2(tΛ)^3/2/√(3ϵ)).
§.§.§ Implementation of S_2^(ran) with CS
In a similar way to the implementation of S_1^(ran), the formula S_2^(ran)(τ)^N requires N applications of S_2^(ran)(τ) as in (<ref>). By definition S_2^(ran)(τ) applies S_2^σ(τ), for some permutation σ∈Sym(M), with probability 1/M!. Since the sum in (<ref>) runs over all possible permutations in Sym(M), every S_2^σ(τ) is equally likely to be applied. To construct a quantum circuit that implements S_2^(ran)(τ)^N we define σ_l, for l=1,2,...,N, to be a permutation from Sym(M). We then define an oracle function, SAMPLE_PERMUTATION(), which samples from Sym(M), with every permutation in Sym(M) having a probability 1/M! of being returned. We denote the gate set applied to the initial state ρ(0) by G_2^(ran); initially this set is empty. We iteratively obtain σ_l=SAMPLE_PERMUTATION() and append S_2^σ_l to the gate set G_2^(ran), repeating this process for l=1,2,...,N. Constructing a quantum circuit with the gate set G_2^(ran) and some initial state ρ(0), the output of the circuit is the state ρ̃(t), which approximates ρ(t) to a precision ϵ=(Λ t)^3M^2/N^2. Algorithm <ref> shows the pseudocode for implementing S_2^(ran)(τ)^N, and Figure <ref> visualises the algorithm as a quantum circuit in which each σ_l is a permutation sampled from Sym(M). To see how the gate complexity of a circuit constructed with G_2^(ran) scales, we count the number of exponentials. There are ⌈ (Λ t)^3/2M/√(ϵ)⌉ TS product formulas in G_2^(ran), each containing 2M exponentials, so the gate complexity scales as O((Λ t)^3/2M^2/√(ϵ)).
§.§.§ Implementation of QDRIFT Channel with CS
Consider the QDRIFT channel ℰ_ω^(QD) as in (<ref>); by definition it applies exp(ωℒ_k) with probability p_k for k=1,2,...,M. This gives us a straightforward way to construct a quantum circuit that implements (ℰ_ω^(QD))^N using classical sampling. We start by defining a random variable j_l∈{1,...,M}, with l=1,...,N, and the probability distribution p(j_l=k)=p_k with k=1,...,M.
Then we denote the gate set by G_QD and define the classical oracle function SAMPLE() which samples from the distribution p(j_l=k)=p_k for k=1,...,M and returns a value from the set {1,...,M}. To construct the circuit we iteratively use SAMPLE() to find a value j_l and append exp(ωℒ_j_l) for l=1,...,N to the gate set G_QD. Once we have constructed the gate set we use it to construct a quantum circuit with initial sate ρ(0), the circuit then approximates ρ(t) to a precision ϵ=(tΓΩ)^2/N. Algorithm <ref> outlines the pseudocode for constructing the gate set G_QD to implement (ℰ_ω^(QD))^N. Figure <ref>. shows the quantum circuit that implements the QDRIFT channel, we see here that each exponential depends on the outcome of sampling from the distribution p_k. Since the G_QD contains only exponentials the gate complexity just scales here with the number of elements in G_QD i.e. O((tΓΩ)^2/ϵ). §.§ Quantum Circuit Implementation of S_1^(ran) and QDRIFT via Quantum Forking §.§.§ Quantum Circuit Implementation of S_1^(ran) with QF In this section, we present a method for implementing the randomised TS formula S_1^(ran)(τ)^N on a quantum computer without the need for classical sampling. We will make use of the quantum forking procedure <cit.> to construct a quantum circuit that directly implements S_1^(ran)(τ), which can be seen in Figure <ref>. The circuit makes use of controlled swap channels denoted by the usual circuit notation for the controlled-SWAP operation as seen in Figure <ref>. The controlled swap channel will only swap the states if the state that it is controlled on is in the state |1⟩⟨$|. We performNrepetitions of the circuit in Figure <ref>. to obtainS_1^(ran)(τ)^N. The following lemma shall show how the circuit in Figure <ref> implementsS_1^(ran)(τ)using quantum forking. Given some small time step τ≥ 0 and an initial state ρ(0). The circuit in Figure <ref> will implement the first order randomised TS product formula S_1^(ran)(τ). We can show directly that the circuit in Figure <ref> implements the first order randomised TS formula S_1^(ran). Consider the initial state in the circuit, ρ_prep⊗ρ(0) ⊗ρ_ϕ=1/2( |0⟩⟨⊗|ρ(0) ⊗ρ_ϕ +|1⟩⟨⊗|ρ(0) ⊗ρ_ϕ), here the state ρ_ϕ is any state that is easy to prepare and does not have any effect on the outcome of our algorithm. Next, we apply the controlled swap channel which yields, 1/2( |0⟩⟨⊗|ρ(0) ⊗ρ_ϕ +|1⟩⟨⊗|ρ_ϕ⊗ρ(0)). Then S_1^→(τ) and S_1^←(τ) are applied, 1/2( |0⟩⟨⊗|S_1^→(τ)ρ(0) ⊗ S_1^←(τ)ρ_ϕ +|1⟩⟨⊗|S_1^→(τ)ρ_ϕ⊗ S_1^←(τ)ρ(0)), then the second controlled swap channel is applied yielding, 1/2( |0⟩⟨⊗|S_1^→(τ)ρ(0) ⊗ S_1^←(τ)ρ_ϕ +|1⟩⟨⊗|S_1^←(τ)ρ(0)⊗ S_1^→(τ)ρ_ϕ). In Figure <ref> the measurement with discard tells us to trace out those respective subsystems. So now we trace out the first and last subsystems, which gives us, 1/2( tr(|0⟩⟨)|⊗ S_1^→(τ)ρ(0) ⊗tr(S_1^←(τ)ρ_ϕ) +tr(|1⟩⟨)|⊗ S_1^←(τ)ρ(0)⊗tr(S_1^→(τ)ρ_ϕ)). Since S_1^→(τ)ρ_ϕ and S_1^←(τ)ρ_ϕ are valid states its trace will be one so we have, 1/2(S_1^→(τ)ρ(0) +S_1^←(τ)ρ(0) )=1/2(S_1^→(τ) +S_1^←(τ) )ρ(0)=S_1^(ran)(τ)ρ(0), which completes the proof. Now that we understand how to implementS_1^(ran)for a small time step using quantum forking. We need to show that if we repeat this circuitN=⌈(MtΛ)^3/2/√(3ϵ)⌉times we will implementS_1^(ran)(τ)^N. The circuit in Figure <ref> illustrates how one can repeat the circuit in Figure <ref>, it relies on two important operations. 
The first important operation in Figure <ref> is the measurement operation with discard correspond to tracing out the respective register, and the second important operation is represented by the grey bar with a dotted line, this is called a barrier. It represents the process of reseting the register to a desired state after measuring and discarding the result. The following theorem will outline how the circuit in Figure <ref> implementsS_1^(ran)(τ)^N. Given an initial state ρ(0), some time t≥ 0, a precision ϵ > 0 and N=⌈ (MtΛ)^3/2/√(3ϵ)⌉. The circuit in Figure <ref> implements the first order randomised TS product formula S_1^(ran)(τ)^N with an output state ρ̃(t) which satisfies d_tr(ρ(t),ρ̃(t))≤ (MtΛ)^3/6N^2. We begin by using Lemma <ref>, which tells us that after the first circuit block in Figure <ref> the state of the system register is S_1^(ran)(τ)ρ(0) and after we measure, discard and reset the ancillary registers the full register is, ρ_prep⊗ S_1^(ran)(τ)ρ(0) ⊗ρ_ϕ. Now using Lemma <ref> again, but with the register above as the input and repeating the process of measuring, discarding and then resetting the ancillary registers we have, ρ_prep⊗ S_1^(ran)(τ)^2ρ(0) ⊗ρ_ϕ. Repeating this process above N-2 times but this time not resting the ancillary registers, as we have reached the end of the circuit, we have the final state of the system register as S_1^(ran)(τ)^Nρ(0), which is the desired output of the circuit. Now if we define ρ̃(t)=S_1^(ran)(τ)^Nρ(0) and making use of Theorem <ref> we have, d_tr(ρ(t),ρ̃(t)) =1/2ρ(t)-ρ̃(t), =1/2T_tρ(0)-S_1^(ran)(τ)^Nρ(0), ≤1/2T_t-S_1^(ran)(τ)^N, ≤(MtΛ)^3/6N^2, which completes the proof. We can now determine the gate complexity for the quantum circuit that implementsS_1^(ran)(t/N)^N. Unlike the usual definition of gate complexity where one counts the number of elementary gates used from a universal set. We have chosen to count the number of simple quantum channels implemented in the circuit. For example in the circuit in Figure <ref>, the channelS_1^→(t/N)containsMexponentials and there areMexponentials inS_1^(←)(t/N). There are also two controlled-SWAP channels in the circuit which we will need to count as well. Therefore to implementS_1^(ran)(t/N), we will require2M+2simple channels. If we need to repeat this circuitN=⌈(MtΛ)^3/2/√(3ϵ) ⌉times then in total we will need(2M+2)⌈(MtΛ)^3/2/√(3ϵ) ⌉number of simple channels to implementS_1^(ran)(t/N)^N. If we define the gate complexity for the first order randomised TS product formula asg_1^(ran)then the gate complexity scales as follows, g_1^(ran)=O((M+1) (MtΛ)^3/2/√(3ϵ))=O(M^5/2(tΛ)^3/2/√(3ϵ)). We can observe that the scaling for gate complexity of the randomised first order TS product formula is the same as the scaling for the gate complexity of the second order deterministic formulag_2^(det)in (<ref>). §.§.§ Inefficiency Of A Quantum Circuit Implementation Of S_2^(ran) with QF It may seem possible then to implement the second order randomised formulaS_2^(ran)in a similar way, all that may be required is to generalise the forking circuit as done in <cit.>. However,S_2^(ran)is a convex sum over all possible permutations of a set ofMnumbers this means there areM!terms in the convex sum. Using quantum forking would require2(M!)controlled-SWAP gates leading to a gate complexity that scales with a factor orM!, which is extremely inefficient! This is why we do not construct a quantum circuit to implementS_2^(ran)and implement it only via classical sampling. 
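To make the counting arguments above concrete, the short sketch below evaluates the number of repetitions N and the resulting simple-channel counts for the classical-sampling implementations of S_1^(ran), S_2^(ran) and QDRIFT, and for the quantum-forking implementation of S_1^(ran), using only the bounds quoted so far. This is an illustrative calculator under our own variable names and interface; the constant factors follow the expressions in the text rather than any optimised implementation.

```python
import math

def step_and_channel_counts(M, t, Lam, Gam, Om, eps):
    """Illustrative repetition and simple-channel counts from the bounds quoted above.

    M   : number of terms in the generator
    Lam : Lambda, the largest norm among the generator terms (TS formulas)
    Gam : Gamma, the sum of the decay rates gamma_k (QDRIFT normalisation)
    Om  : Omega, the largest norm among the L_k in the QDRIFT splitting
    eps : target diamond-norm precision
    """
    # number of repetitions N for each method
    N_s1 = math.ceil((M * t * Lam) ** 1.5 / math.sqrt(3 * eps))   # first order randomised
    N_s2 = math.ceil((Lam * t) ** 1.5 * M / math.sqrt(eps))       # second order randomised
    N_qd = math.ceil((t * Gam * Om) ** 2 / eps)                   # QDRIFT channel

    return {
        "S1_CS_channels": M * N_s1,            # M exponentials per first order formula
        "S2_CS_channels": 2 * M * N_s2,        # 2M exponentials per second order formula
        "QDRIFT_CS_channels": N_qd,            # one exponential per QDRIFT step
        "S1_QF_channels": (2 * M + 2) * N_s1,  # 2M exponentials + 2 controlled-SWAPs per step
    }

# example: a generator with M = 10 terms
print(step_and_channel_counts(M=10, t=1.0, Lam=1.0, Gam=5.0, Om=1.0, eps=1e-3))
```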
§.§.§ Quantum Circuit Implementation of QDRIFT Channel with QF In this section we will outline how we can implement the QDRIFT channel(ℰ_ω^(QD))^N. Since the QDRIFT channel is also a convex sum of quantum channels we will once again make use of quantum forking <cit.> to construct the circuit. We will also make use of the controlled-SWAP channels just as we did in the circuit for implementingS_1^(ran). Figure <ref> shows the circuit that will implementℰ_ω^(QD), and if we repeat this circuitN=⌈(tΓΩ)^2/ϵ⌉times as shown in Figure <ref>. The following lemma below will show that the circuit implementsℰ_ω^(QD). Given ω≥ 0 and an initial state ρ(0). The circuit in Figure <ref> implements the QDRIFT channel ℰ_ω^(QD). We can show that the circuit in Figure <ref> directly implements ℰ_ω^(QD). The initial state of all the registers in the circuit is, ρ_prep⊗ρ(0) ⊗ (ρ_ϕ)^⊗ M-1=∑_k=1^Mp_k(|k⟩⟨⊗|ρ(0) ⊗ρ_ϕ⊗ ... ⊗ρ_ϕ), where the states ρ_ϕ are once again just easy to prepare states that have no effect on the outcome of the circuit. The controlled-SWAP channels in Figure <ref> are applied only when the ancillary register ρ_prep is in the state |k⟩⟨k|. Applying all the controlled-SWAP channels produces the state, p_1(|1⟩⟨⊗|ρ(0) ⊗ρ_ϕ⊗ ... ⊗ρ_ϕ)+p_2(|2⟩⟨⊗|ρ_ϕ⊗ρ(0) ⊗ ... ⊗ρ_ϕ)+... ...+p_M-1(|M-1⟩⟨⊗|ρ_ϕ⊗ ... ⊗ρ(0) ⊗ρ_ϕ)+p_M(|M⟩⟨⊗|ρ_ϕ⊗ρ_ϕ⊗ ... ⊗ρ(0)). Next, we apply the channels exp(ωℒ_k) to each register, p_1(|1⟩⟨⊗|e^ωℒ_1ρ(0) ⊗ e^ωℒ_2ρ_ϕ⊗ ... ⊗ e^ωℒ_Mρ_ϕ)+p_2(|2⟩⟨⊗|e^ωℒ_1ρ_ϕ⊗ e^ωℒ_2ρ(0) ⊗ ... ⊗ e^ωℒ_Mρ_ϕ)+... ...+p_M(|M⟩⟨⊗|e^ωℒ_1ρ_ϕ⊗ e^ωℒ_2ρ_ϕ⊗ ... ⊗ e^ωℒ_Mρ(0)). Now we apply the controlled-SWAP channels once more which produces the state, p_1(|1⟩⟨⊗|e^ωℒ_1ρ(0) ⊗ e^ωℒ_2ρ_ϕ⊗ ... ⊗ e^ωℒ_Mρ_ϕ)+p_2(|2⟩⟨⊗|e^ωℒ_2ρ(0) ⊗ e^ωℒ_1ρ_ϕ⊗ ... ⊗ e^ωℒ_Mρ_ϕ)+... ...+p_M(|M⟩⟨⊗|e^ωℒ_Mρ(0) ⊗ e^ωℒ_2ρ_ϕ⊗ ... ⊗ e^ωℒ_1ρ_ϕ). By tracing out the ancillary registers we see that tr(|k⟩⟨)|=⟨k|=⟩1 and the fact that tr(exp(ωℒ_k)ρ_ϕ)=1 for any k=1,...,M we have the output state, ∑_k=1^Mp_ke^ωℒ_kρ(0)=ℰ_ω^(QD)(ρ(0)), which shows that the circuit implements the QDRIFT channel, completing the proof. The following theorem outlines how we can implement the QDRIFT channel using the circuit in Figure <ref> and it shows that the output of the circuit is bounded with respect to the trace distance to the final stateρ(t). Given an initial state ρ(0), some time t≥ 0 and N∈ℕ. The circuit in Figure <ref> implements the QDRIFT channel (ℰ_ω^(QD))^N with an output state ρ̃(t) which satisfies d_tr(ρ(t),ρ̃(t))≤ (tΓΩ)^2/2N. We prove this theorem in a similar way to Theorem <ref>. Using Lemma <ref> we observe that after a single application of the circuit in Figure <ref> the output state after measuring, discarding and resetting all ancillary registers is, ρ_prep⊗ℰ_ω^(QD)(ρ(0))⊗ (ρ_ϕ)^⊗ (M-1). Applying the circuit block in Figure <ref> in the same manner N=⌈ (tΓΩ)^2/ϵ⌉ times yields the state, (ℰ_ω^(QD))....(ℰ_ω^(QD))_N-times(ρ(0))=(ℰ_ω^(QD))^N(ρ(0))=ρ̃(t). Where we have defined the output state as ρ̃(t), and making use of Theorem <ref> we have that the trace distance between the state ρ̃(t) and ρ(t) is, d_tr(ρ(t),ρ̃(t)) =1/2ρ(t)-ρ̃(t), = 1/2T_tρ(0)-(ℰ_ω^(QD))^N(ρ(0)), ≤1/2T_t-(ℰ_ω^(QD))^N, ≤(tΓΩ)^2/2N. We are now able to find the gate complexity of the circuit that implements the QDRIFT channel. If we consider the circuit in Figure <ref> we see that to implementℰ_ω^(QD)we need2(M-1)controlled-SWAP channels and we implementMsimpler channels i.e.exp(ωℒ_k)fork=1,...,M. This means that in total to implementℰ_ω^(QD)we need3M-2simple channels. 
Now, if we implement ℰ_ω^(QD) a total of N=⌈(tΓΩ)^2/ϵ⌉ times, then implementing (ℰ_ω^(QD))^N requires (3M-2)⌈(tΓΩ)^2/ϵ⌉ simple channels. Denoting the gate complexity for the QDRIFT channel by g^(QD), the gate complexity scales as,
g^(QD)=O((3M-2)(tΓΩ)^2/ϵ)=O(M(tΓΩ)^2/ϵ).
Here we see that the gate complexity of the circuit implementation of the QDRIFT channel depends only linearly on M, which is better than the first order randomised formula; however, it has a quadratic dependence on t, which is worse than the t^3/2 dependence of the first order randomised formula.
§ COMPARISON OF GATE COMPLEXITIES
In this section we discuss and compare the gate complexities of all the simulation methods outlined in this paper with those of the deterministic TS product formulas. Table <ref> summarises the gate complexities for each method. As mentioned earlier, the goal in using randomisation to simulate Markovian OQS is to achieve faster simulation by improving the dependence of the gate complexity on the number of terms in the generator ℒ, which is denoted by M. In Table <ref> we see that the randomised formulas (both CS and QF) and the QDRIFT channel all improve on the M-dependence of the gate complexity when compared to the deterministic TS product formulas. The first order randomised TS formula (CS) scales the same as the second order deterministic formula. This is an improvement over the first order deterministic TS formula, since the gate complexity now scales as M^5/2 instead of M^3. We observe the same scaling when implementing the first order randomised formula with quantum forking, meaning that there is no additional cost to performing the sampling directly on the quantum computer. The second order randomised TS formula (CS) gives a quadratic dependence on M, which is better than both the deterministic TS product formulas and the first order randomised TS formula. However, as discussed in Section <ref>, we cannot use QF to implement S_2^(ran), because it would require 2(M!) controlled-SWAP channels and the gate complexity would therefore depend on M!, which is very inefficient. For the QDRIFT channel (CS) there is no dependence on M in the gate complexity, which makes this method ideal for systems with a large number of terms M, for example two- and three-dimensional Jaynes-Cummings models with neighbour-neighbour interactions <cit.> and two-dimensional Heisenberg models with boundary driving and local dissipation <cit.>. One may argue that the M-dependence is hidden in the factor Γ; however, for most systems Γ is computed classically before the gate set G_QD is constructed, so this does not add a factor of M to the gate complexity. For the implementation of the QDRIFT channel with QF we require 2(M-1) controlled-SWAP channels, so the gate complexity scales linearly with M, which is still better than all other TS product formulas (deterministic and randomised). However, while this scaling with M is better than the other methods, the quadratic scaling in t makes this method viable only for short simulation times; for longer simulation times a randomised product formula will perform better.
§ CONCLUSION
In this work we have shown that we can achieve faster simulation of Markovian OQS using randomisation. We have constructed randomised TS product formulas for simulating the dynamics of OQS.
We have also proven directly that these formulas approximate the ideal evolutionT_t. We were able to prove all the results without using the Campbell and Hastings mixing Lemma <cit.>, which is vital to proving these results in the Hamiltonian simulation setting. With the first order randomised TS product formula, we achieve an improved precision, that is now quadratic int, and we also have an improved gate complexity that scales the same as the second order deterministic TS product formula, where it scales withM^5/2. For this formula we have provided two methods for implementing on a quantum computer. The first relies on CS and the second relies on QF, in both scenarios the gate complexities scale more efficiently inMthan the first order deterministic product formula. The second order randomised TS product formula has a much better scaling with respect toMwith the gate complexity depending quadratically onM. For the second order randomised TS product formula we can only efficiently implement this using CS. This is because the gate complexity of a circuit that will implementS_2^(ran)will depend onM!. This work also proves that one can use the QDRIFT protocol <cit.> to simulate Markovian OQS. We see that for the quantum circuit implementation using CS the gate complexity does not depend onMmaking this method ideal for systems with many terms in the generator. However the quadratic dependence ontmeans that this method is only viable for short simulation times. We have also shown that we can use QF to construct a quantum circuit that directly implements the QDRIFT channel on a quantum computer. For the QF implementation of QDRIFT we see that the gate complexity scales linearly withMwhich is still much better than all the product formula methods. An open problem that can be addressed in future is to find optimal convex mixtures of second order TS product formulas can produce improved precision and or better scaling of the gate complexity in terms ofM. A second open problem will be to see if one can apply these results to the simulation of quantum channels that describe non-Markovian dynamics of an OQS <cit.>. ieeetrequationsection § PROOFS OF THEOREMS AND LEMMAS IN SECTION 2 (Lemma 1) We prove this by induction. For N=0,1 the equality in equation (<ref>) holds and to show this would be trivial, so for the base case in our inductive proof we choose N=2, T^2-V^2 =T^2-TV+TV-V^2 =T(T-V)+(T-V)V ≤T(T-V)+(T-V)V =≤TT-V+T-VV By recalling that for any quantum channel T by definition T=1, this allows us to write, T^2-V^2 ≤T-V+T-V=2T-V. Hence we have verified that the inequality in equation (<ref>) holds for N=2, we now assume that it holds for N=m and show that it is true for N=m+1. T^m+1-V^m+1 =T^m+1-TV^m+TV^m-V^m+1 =T(T^m-V^m)+(T-V)V^m ≤TT^m-V^m+T-VV^m ≤ mT-V+T-VV^m ≤ (m+1)T-V. Therefore by induction the inequality in (<ref>) holds true for all integers N≥0. (of Theorem 1.) To begin the proof we define the parameter τ=t/N to be a small time step, and we recognise that, T_t-S^(det)_1(t/N)^N =T_t/N^N-S^(det)_1(t/N)^N, =T_τ^N-S^(det)_1(τ)^N, ≤ N T_τ-S^(det)_1(τ), where the last inequality was obtained using Lemma <ref>. The inequality above tells us that we need to find a bound on the distance between T_τ and S_1^(det)(τ). This is done by Taylor expanding both T_τ and S_1^(det)(τ) and looking at the remainder terms of their difference. 
T_τ-S_1^(det)(τ)=∑_l=2^∞R_l(τ)-W_l(τ), where R_l(τ) is the l-th order remainder term of the Taylor expansion of T_τ and W_l(τ) is the l-th order remainder term of the Taylor expansion of S_1^(det)(τ). Since S_1^(det)(τ) is first order it cancels with the first order terms in the Taylor expansion of T_τ, hence the remainder terms start from second order. Using equation (<ref>) we find, T_τ-S_1^(det)(τ) =∑_l=2^∞R_l(τ)-W_l(τ), ≤∑_l=2^∞R_l(τ)+W_l(τ). To bound the term R_l(τ) we observe that it has the following expression, R_l(τ)=τ^lℒ^l/l!, which allows us to write, R_l(τ) =τ^lℒ^l/l!, ≤τ^l/l!ℒ^l, where we have used the submultiplicativity of the diamond norm in the last line. To complete this bound we must bound the generator ℒ, ℒ=∑_k=1^Mℒ̂_k≤∑_k=1^Mℒ̂_k≤ MΛ. This allows us to complete the bound on R_l(τ) as, R_l(τ)≤τ^lM^lΛ^l/l!. To bound the term W_l(τ) we need to find the general expression for the Taylor expansion of S_1^(det))(τ). Taylor expanding each exponential in equation (<ref>) leads to, S_1^(det)(τ)=∑_j_1,...,j_M=0^∞τ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M. The M infinite sums can be written as one infinite sum and M finite sums, ∑_l=0^∞∑_j_1,...,j_M=0; ∑_μj_μ=l^lτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M. The remainder term W_l(τ) is given by, W_l(τ)=∑_j_1,...,j_M=0; ∑_μj_μ=l^lτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M. W_l(τ) can then be bounded as follows, W_l(τ) ≤∑_j_1,...,j_M=0; ∑_μj_μ=l^lτ^j_1+...+j_M/j_1!...j_M!1^j_12^j_2...M^j_M, ≤∑_j_1,...,j_M=0; ∑_μj_μ=l^lτ^j_1+...+j_M/j_1!...j_M!Λ^j_1+...+j_M, where the last inequality is obtained by the fact that k≤Λ for all k=1,...,M. To complete the bound we must compute the restricted sum in equation (<ref>). We make use of Lemma <ref> found in Appendix <ref> to do this which leads to, W_l(τ)≤τ^lM^lΛ^l/l!. Using equations (<ref>) and (<ref>), we can bound the distance, T_τ-S^(det)_1(τ) ≤∑_l=2^∞R_l(τ)+W_l(τ), ≤∑_l=2^∞τ^lM^lΛ^l/l! + τ^lM^lΛ^l/l!, =2 ∑_l=2^∞τ^lM^lΛ^l/l!. Making use of Lemma F.2 from the supplementary information of <cit.>, which states that, for some y ≥ 0 ∈ℝ and k ∈ℕ, ∑_n=ky^n/n!≤y^k/k!exp(y), we can bound the sum in equation (<ref>), T_τ-S^(det)_1(τ) ≤2τ^2Λ^2M^2/2exp(τΛ M), =t^2Λ^2M^2/N^2exp(tΛ M/N), where we have replaced τ by t/N. Noting that exp(tΛ M/N)≈ 1 for large enough N we can write, T_τ-S^(det)_1(τ)≤t^2Λ^2M^2/N^2. Substituting equation (<ref>) into equation (<ref>) we get the desired result, T_t-S^(det)_1(t/N)^N≤ N t^2Λ^2M^2/N^2 = t^2Λ^2M^2/N. (proof of Theorem 2.) Theorem 2 is proved in a similar manner to Theorem 1. Start by defining τ=t/N and applying Lemma <ref>. to equation (<ref>), T_t-S_2^(det)(τ)^N≤ NT_τ-S_2^(det)(τ). The distance T_τ-S_2^(det)(τ) is bounded by considering the remainder terms in the Taylor expansions of the difference T_τ-S_2^(det)(τ). Similarly to the proof of theorem 1 we let R_l(τ) and W_l(τ) be the l-th order remainder terms of the Taylor expansions of T_τ and S_2^(det)(τ) respectively. This leads to, T_τ-S_2^(det)(τ)=∑_l=3^∞R_l(τ)-W_l(τ). Here the index l starts from 3 since S_2^(det)(τ) is second order accurate, this is what it means for S_2^(det)(τ) to be a second order product formula. Using the subadditivity of the diamond norm we obtain, T_τ-S_2^(det)(τ)≤∑_l=3^∞R_l(τ)+W_l(τ). The term R_l(τ) has the some form in equation (<ref>) and so has the same bound in equation (<ref>). 
To find the bound on W_l(τ) we need to find a general expression for the Taylor expansion of S_2^(det)(τ), this is found to be, S_2^(det)(τ)=∑_j_1,...,j_M=0 k_1,...,k_M=0^∞(τ/2)^j_1+...+j_M+k_1+...+k_M/j_1!...j_M!k_1!...k_M!∏_q=1^Mq^j_q∏_p=M^1p^k_p. This can be rewritten to only contain one infinite sum, S_2^(det)(τ)=∑_l=0^∞∑_j_1,...,j_M=0 k_1,...,k_M=0 ∑_μj_μ+∑_νk_ν=l^l(τ/2)^∑_μj_μ+∑_νk_ν/j_1!...j_M!k_1!...k_M!∏_q=1^Mq^j_q∏_p=M^1p^k_p. Allowing us to write the remainder term as, W_l(τ)=∑_j_1,...,j_M=0 k_1,...,k_M=0 ∑_μj_μ+∑_νk_ν=l^l(τ/2)^∑_μj_μ+∑_νk_ν/j_1!...j_M!k_1!...k_M!∏_q=1^Mq^j_q∏_p=M^1p^k_p. Subadditivity of the diamond norm leads to the bound, W_l(τ)≤∑_j_1,...,j_M=0 k_1,...,k_M=0 ∑_μj_μ+∑_νk_ν=l^l(τ/2)^∑_μj_μ+∑_νk_ν/j_1!...j_M!k_1!...k_M!∏_q=1^Mq^j_q∏_p=M^1p^k_p, we need to bound the diamond norm of the product of the summands k in equation (<ref>). This is done using submultiplicativity of the diamond norm, ∏_q=1^Mq^j_q∏_p=M^1p^k_p ≤∏_q=1^Mq^j_q∏_p=M^1p^k_p, ≤∏_q=1^Mq^j_q∏_p=M^1p^k_p, ≤Λ^j_1+...j_M+k_1+...+k_M, where the last inequality is obtained using the fact that k≤Λ for all k. Using the inquality (<ref>) with equation (<ref>) yields, W_l(τ)≤∑_j_1,...,j_M=0 k_1,...,k_M=0 ∑_μj_μ+∑_νk_ν=l^l(τ/2)^∑_μj_μ+∑_νk_ν/j_1!...j_M!k_1!...k_M!Λ^∑_μj_μ+∑_νk_ν. We can compute this restricted sum by using Lemma B.1. with the following replacements x→τΛ/2, p→ l and M→ 2M. This leads to, W_l(τ)≤2^lM^l τ^l Λ^l/2^l l!=M^l τ^l Λ^l/l!. Now using equations (<ref>) and (<ref>) we can bound the norm, T_τ-S_2^(det)(τ) ≤∑_l=3^∞R_l(τ)+W_l(τ) ≤∑_l=3^∞M^lτ^lΛ^l/l!+M^l τ^l Λ^l/l! =2∑_l=3^∞M^l τ^l Λ^l/l!. Using equation (<ref>) we can write the bound above as, T_τ-S_2^(det)(τ) ≤ 2 M^3τ^3Λ^3/3!exp(MτΛ) = M^3t^3Λ^3/3N^3exp(MtΛ/N). For large enough N, exp(MtΛ/N)≈ 1 which yields, T_τ-S_2^(det)(τ) ≤M^3t^3Λ^3/3N^3. Now using equations (<ref>) and (<ref>) we can bound the norm, T_t-S_2^(det)(t/N)^N≤ N M^3t^3Λ^3/3N^3= M^3t^3Λ^3/3N^2. Let ϵ≥ 0 and, ϵ≥M^3t^3Λ^3/3N^2, then we have that, T_t-S_2^(det)(t/N)^N≤ϵ. Given the bound on the precision ϵ we can find a bound on N as, N≥M^3/2t^3/2Λ^3/2/(3ϵ)^1/2, completing the proof. § RESULTS FOR COMPUTING RESTRICTED SUMS The following result allows us to compute restricted sums. Given some p,M∈ℕ and some x∈ℝ, ∑_j_1,...,j_M=0; ∑_μj_μ=p^px^j_1+...+j_M/j_1!...j_M!=M^px^p/p!. Consider the exponential function exp(x) for some real number x, e^Mx=e^xe^x...e^x_M - times. By Taylor expanding each exponential on the left hand side of equation (<ref>) we get, e^xe^x...e^x_M - times =∑_j_1,...,j_M=0^∞x^j_1+...+j_M/j_1!...j_M!, =∑_p=0^∞∑_j_1,...,j_M=0; ∑_μj_μ=p^px^j_1+...+j_M/j_1!...j_M!. But, exp(x)^M=exp(Mx), which has the following Taylor expansion, exp(Mx)=∑_p=0^∞M^px^p/p!. Equating equations (<ref>) and (<ref>) and comparing terms, leads us to the desired result, ∑_j_1,...,j_M=0; ∑_μj_μ=p^px^j_1+...+j_M/j_1!...j_M!=M^px^p/p!.
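As a quick numerical sanity check of this identity, the following sketch enumerates all tuples (j_1,...,j_M) with j_1+...+j_M=p and compares the restricted sum with M^p x^p/p!. The helper is purely illustrative and the variable names are our own.

```python
import math
from itertools import product

def restricted_sum(x, M, p):
    """Left-hand side: sum over j_1+...+j_M = p of x^p / (j_1! ... j_M!)."""
    total = 0.0
    for js in product(range(p + 1), repeat=M):
        if sum(js) == p:
            denom = math.prod(math.factorial(j) for j in js)
            total += x ** p / denom
    return total

x, M, p = 0.3, 4, 5
lhs = restricted_sum(x, M, p)
rhs = M ** p * x ** p / math.factorial(p)   # right-hand side of the lemma
print(lhs, rhs)  # both ~ 0.0207, agreeing up to floating-point error
```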
http://arxiv.org/abs/2408.11003v1
20240820165514
DEEPEAST technique to enhance power in two-sample tests via the same-attraction function
[ "Yiting Chen", "Min Gao", "Wei Lin", "Andrew Jirasek", "Kirsty Milligan", "Xiaoping Shi" ]
stat.ME
[ "stat.ME" ]
An Overlooked Role of Context-Sensitive Dendrites Mohsin Raza^1,2, Ahsan Adeel^1,3* Received: date / Accepted: date ================================================= § ABSTRACT Data depth has emerged as an invaluable nonparametric measure for the ranking of multivariate samples. The main contribution of depth-based two-sample comparisons is the introduction of the Q statistic <cit.>, a quality index. Unlike traditional methods, data depth does not require the assumption of normal distributions and adheres to four fundamental properties: affine invariance, maximality at the center, monotonicity relative to the deepest point, and vanishing at infinity <cit.>. Many existing two-sample homogeneity tests, which assess mean and/or scale changes in distributions often suffer from low statistical power or indeterminate asymptotic distributions. To overcome these challenges, we introduced a DEEPEAST (depth-explored same-attraction sample-to-sample central-outward ranking) technique for improving statistical power in two-sample tests via the same-attraction function. We proposed two novel and powerful depth-based test statistics: the sum test statistic and the product test statistic, which are rooted in Q statistics, share a `common attractor' and are applicable across all depth functions. We further proved the asymptotic distribution of these statistics for various depth functions, in addition to the minimum statistics valid for all depth functions. To assess the performance of power gain, we apply three depth functions: Mahalanobis depth <cit.>, Spatial depth <cit.>, and Projection depth <cit.>. All of these functions are implemented in the R package ddalpha. Through two-sample simulations, we have demonstrated that our sum and product statistics exhibit superior power performance, utilizing a strategic block permutation algorithm and compare favourably with popular methods in literature. Our tests are further validated through analysis on Raman spectral data, acquired from cellular and tissue samples, highlighting the effectiveness of the proposed tests highlighting the effective discrimination between health and cancerous samples. Non-parametric tests, multivariate two-sample problem, data depth, Q Statistics, statistical power § INTRODUCTION Prostate cancer is one of the most common diseases in Canada and the second most common cancer in the world. Its potential to be fatal underscores the critical importance of early diagnosis <cit.>. Additionally, a significant challenge in optimizing treatment protocols is the lack of consideration for individual patient radiosensitivity when prescribing radiation doses. Consequently, there is a pressing need to develop methods for monitoring radiation response in individuals undergoing radiation therapy. Various techniques have been explored for this purpose <cit.>. In recent years, Raman spectroscopy (RS) has been investigated as a potential augmentative tool for biochemical analysis of tumour response <cit.>. RS provides detailed `fingerprint' biochemical information on various biomolecules (e.g., protein, lipid, DNA, etc.) through a vibrational inelastic light scattering process <cit.>. Recent studies have indicated that RS can offer predictive capabilities regarding tumour proliferation status <cit.>. 
Moreover, when RS is combined with group and basis-restricted non-negative matrix factorization along with random forest strategies, this enhanced technique can yield valuable ranked information about the biochemical dynamics within irradiated tumours <cit.>. Despite the potential of Raman Spectroscopy (RS) in cancer diagnosis, several systematic issues in data processing need to be addressed. These include data interference and subjective determination errors <cit.>. Challenges such as baseline variability between sample acquisitions are prevalent. Notably, approximately 10% of Raman spectra suffer interference from cosmic rays, leading to spikes and potential false peaks in the spectra. Furthermore, the analysis of most Raman spectra relies on manual evaluation, resulting in subjective determination errors due to the lack of a uniform and efficient automated method. Our research aims to statistically ascertain the significance of the peak at 1524 cm^-1 in Raman spectra acquired on prostate biopsy samples, which is an indicator of prostate cancer <cit.>. Identifying the presence of this peak accurately is crucial for initial screening and efficient analysis of spectral data acquired on patient samples. The comparison of different shapes of spectral data can be formulated as a homogeneity test for two multivariate samples with distributions F and G, i.e. testing H_0: F=G vs H_1: F G. Some existing two-sample tests have clear drawbacks, such as the need for strong assumptions or reduced statistical power due to increased variance from inefficient pairwise comparisons, which hinders their ability to differentiate between distributions effectively. For instance, parametric methods like Multivariate Analysis of Variance (MANOVA) <cit.> require an assumption of normality, which can be a significant limitation. In contrast, one-dimensional non-parametric tests, such as the Cramér test <cit.>, Wilcoxon Rank-Sum test <cit.>, and Energy Distance test <cit.>, bypass the need for normal data assumption. These tests are more adaptable to various data distributions. However, extending these approaches to the multivariate scenario presents challenges. For example, the multivariate extension of Cramér's test, as proposed by <cit.>, introduces point-to-point (PtP) distances to evaluate the distance between pairs of points. While innovative, this approach can inadvertently increase variance in pairwise comparisons, potentially reducing the test's power, especially in scenarios with finite dimensions. An alternative is the extension of Wilcoxon's Rank-Sum test <cit.>, which aligns with the idea of PtP distance. However, a promising development in this field is the concept of data depth, D(x; F), which measures the centrality of a point x in a distribution F(x) within a d-dimensional space. This approach maps x from R^d to the interval [0,1], providing a point-to-sample (PtS) central-outward ranking. Despite this advancement, the Depth-based Rank Statistic (DbR) <cit.>, which is based on the PtS central-outward ranking, still incorporates further PtP comparisons through univariate Kruskal-Wallis type tests. This addition can increase variance and subsequently decrease the test's power, as evidenced in simulations discussed in Section 3. Innovative and powerful two-sample tests are essential for detecting varied shapes of spectra. 
In this regard, we explore the depth-based quality index Q(F,G) <cit.>, measuring the relative “outlyingness” of distribution F in comparison to distribution G, which is defined as Q(F,G)=P{D (X;F)≤ D(Y;F)|X∼ F, Y∼ G}, where F is the reference distribution. When the two distributions, F and G, are unknown, the empirical distributions of F_m and G_n can be employed, assuming sample sizes of X and Y are m and n, respectively. The Q index can be estimated by the Q statistic Q(F_m, G_n)= 1/nm∑_i=1^m∑_j=1^n I(D(x_i,F_ m) ≤ D(y_j,F_m)), where I(·) is an indicator function that takes 1 if true and 0 otherwise. The detailed description of the properties of data depth and the Q index by <cit.> highlights the importance of using Q statistics in homogeneity tests. The Q(F_m, G_n) statistic serves as a sample-to-sample (StS) central-outward ranking, averaging the depth-based PtS central-outward ranking. The method provides a more accurate comparison than the PtP distance, as it encompasses broader information about the distribution rather than focusing on single points. In addition to its enhanced power, the depth-based StS central-outward ranking benefits from the free distribution assumption and adheres to four fundamental properties of data depth: affine invariance, centroid maximality, monotonicity about the deepest point, and vanishing at infinity <cit.>, which allows for the acquisition of valuable ranked information on the biochemical dynamics inside the RS. While the depth-based StS central-outward ranking effectively handles multivariate data comparisons, it may lose certain information, such as data direction, by the one-dimensional projection of the data depth. This loss can be critical in achieving higher statistical power. Addressing this, a recent advancement by <cit.> suggests preserving power by considering the maximum of two Q statistics (Q(F_m, G_n)-1/2)^2 and (Q(G_n, F_m)-1/2)^2. This insight has led us to explore a new approach to pairwise StS central-outward ranking, derived from Q statistics. Our method maintains the direction of the StS central-outward ranking by analyzing the sign of the partial derivatives of the Q statistics. Q statistics sharing the same sign indicate a unified direction of change under the alternative hypothesis. By considering making a combination of two Q statistics with the property of “same-attraction”, where the two Q statistics have the same limit under the null hypothesis and approach to the same value under the alternative hypothesis, we can enhance the power of a test based on the derived Q statistic. In physics, an attractor refers to a set of numerical values toward which a system tends to evolve, regardless of its starting conditions. This can present the long-term behavior of the system. For example, consider all objects near a black hole; they are attracted in the same direction. We apply this concept of attractors, with the “same attractor” referring to the convergence of distributions over time toward a specific limit. Specifically, the statistic Q(F_m, G_n) is attracted to 1/2 under H_0, while under H_α, it is attracted to the limit of 0 or 1. Here, the term “attraction” denotes the direction of movement. While considering the maximum of Q(F_m, G_n) and Q(G_n, F_m) is one such combination, it is not the most efficient one because their partial derivatives with respect to each Q statistic are not strictly positive or negative, and may be zero. 
Our two new combinations, Q(F_m, G_n)+Q(G_n, F_m) and Q(F_m, G_n)× Q(G_n,F_m), promise a greater improvement in power since their partial derivatives have the same sign and are almost never zero; for more details, see Section 2.1. We have named our proposed technique DEEPEAST, short for depth-explored same-attraction StS central-outward ranking. The structure of our paper is as follows: Section 2 introduces the concepts of same attractive Q statistics. We also present a strategic block permutation algorithm accompanied with theoretical justification. We showed the general form of asymptotic distribution of the same attractive Q statistics applicable across all depth functions, as well as a specific one-dimensional case in Euclidean depth, which is related to the Craig distribution <cit.>. Inspired by <cit.>, our proof utilizes a second-order approximation to the Q statistics, Hoeffding decomposition <cit.> and Cox-Reid expansion method <cit.>, contrasting with existing methods that yield asymptotically normal distributions through first-order approximations <cit.>. In addition, our proof differs significantly from that in <cit.>, which establishes an asymptotic chi-squared distribution for Q(F_m, G_n)+Q(G_n, F_m)-1 using the Halfspace depth <cit.> in one-dimensional Euclidean space, whereas we obtain the Craig distribution for one-dimensional Euclidean depth. Furthermore, we justified that the power can be obtained under the alternative hypothesis. In Section 3, we employ the strategic block permutation algorithm for various depth functions, conducting numerous simulations and offering a detailed comparison with other popular multivariate data methods in the literature. Section 4 applies the DEEPEAST technique to compare samples across differentiated spectra. Finally, Section 5 summarizes our findings and discusses potential future work. § DEEPEAST TECHNIQUE The use of Q statistics as StS central-outward ranking leads to a natural question: How can we ensure functional consistency across all Q statistics to enhance statistical power? This section is dedicated to address this question. Let us consider the scenario where we aim to combine L Q statistics, denoted as Q_1,…, Q_L. We present the combined function as 𝒢(Q_1, …, Q_L) . To optimally gauge similarity within the same distribution and dissimilarity across different distributions, the combined function 𝒢 should ideally satisfy two properties: selfsame and coordinate, which are crucial for ensuring both the efficacy and reliability of the function in different statistical context. We have formalized and detailed these properties in Definition <ref>, providing a framework for evaluating and applying the combined Q statistics function in practical scenarios. §.§ Definitions and properties Assume the following properties for Q_1,…, Q_L: (i) P1. Selfsame: Q_1, …, Q_L share the asymptotic “same” null distribution. (ii) P2. Coordinate: The partial derivative ∂𝒢(Q_1, …, Q_L)/∂ Q_ℓ is non-negative (≥ 0) or non-positive (≤ 0) almost surely for all ℓ=1, …, L under the alternative hypothesis. Note that a same-attraction function 𝒢(Q_1, …, Q_L) is strictly same-attraction if the inequalities in P2 are strict almost surely, meaning: ∂𝒢(Q_1, …, Q_L)/∂ Q_ℓ > 0 or <0. It can be shown that a collection of same-attraction functions 𝒢_s(Q_1, …, Q_L), 1≤ s≤ S is closed under countable additions. This means that the sum of a number of same-attraction functions remains to be a same-attraction function. 
However, it is important to note that this closure may not apply to subtraction. As indicated above, the definition of a strictly same-attraction function imposes a constraint on the sign of the partial derivatives. They must be consistently positive or negative and cannot equal zero. This characteristic potentially makes a strictly same-attraction function more effective than a non-strict same-attraction function in most cases, as zeros do not maintain the directionality of the StS central-outward ranking. Nevertheless, there are exceptions to this generalization, as illustrated in Example <ref>. Therefore, a more strong criterion is needed to determine the benefit in power of one same-attraction function to another. This criterion is elaborated in Proposition <ref>. Consider two same-attraction functions, 𝒢_1(Q_1, …, Q_L) and 𝒢_2(Q_1, …, Q_L). For a specified type I error probability α, we defined two decision rules: 𝒢_1(Q_1, …, Q_L)>c_α,1 and 𝒢_2(Q_1, …, Q_L)>c_α,2, such that P_H_0[𝒢_r(Q_1, …, Q_L)>c_α,r] = α for r=1,2. If the inequality 𝒢_1(Q_1, …, Q_L)/𝒢_2(Q_1, …, Q_L)≥c_α,1/c_α,2 holds under that alternative hypothesis H_1, then 𝒢_1(Q_1, …, Q_L) is more powerful than 𝒢_2(Q_1, …, Q_L). The proof of Proposition <ref> hinges on the fact that P_H_1[𝒢_1(Q_1, …, Q_L)>c_α,1]≥ P_H_1[𝒢_2(Q_1, …, Q_L)>c_α,2]. If the conditions hold asymptotically, then the more powerful test is also in terms of asymptotics. Among a family of same-attraction functions, the optimal same-attraction function can be found through the following property. Consider G^* a set of all possible combinations of function 𝒢(Q_1, …, Q_L), the most powerful test statistics G^0 can be selected according to taking the maximum of equations below: _G ∈ G^*G(Q_1, …, Q_L)/c_α, G =G^0, where the c_α, G are defined as P_H_0[G>c_α, G]=α with type I error probability α under null hypothesis. This criterion provides a framework for comparing the efficacy of various same-attraction functions. The following examples demonstrate its application in evaluating the power of different types of same-attraction functions. [Maximum statistic <cit.>] Consider the maximum statistic M_m,n=max(Q_1,Q_2), where Q_1=[1/12(1/m+1/n)]^-1 (Q( F_m, G_n)-1/2 )^2 and Q_2=[1/12(1/m+1/n)]^-1 (Q( G_n, F_m)-1/2 )^2. Both Q_1 and Q_2 are selfsame (P1), as they follow the same asymptotic null chi-squared distribution <cit.>. The coordinate (P2) is also met under H_1 since for r=1,2, ∂ M_m,n/∂ Q_r = 1 if M_m,n=Q_r 0 otherwise . It is worth noting that M_m,n is non-differentiable at Q_1=Q_2 with zero probability almost surely. Thus, by Definition <ref>, M_m,n qualifies as a same-attraction function. The maximum statistic M_m,n is more powerful than either Q_1 or Q_2, which can be further verified by the Proposition <ref>. Let 𝒢_1=M_m,n, 𝒢_2=Q_1 or Q_2. We observed that 𝒢_1≥𝒢_2. Moreover, both M_m,n and 𝒢_2 converge in distribution to χ^2_1 under H_0, leading c_α,1=c_α,2, where P(χ^2_1>c_α,1)=α. Therefore, 𝒢_1/𝒢_2≥c_α,1/c_α,2 =1, indicating that 𝒢_1 is asymptotically more powerful than 𝒢_2. [Weighted average statistic <cit.>] The weighted average statistic, W_m,n (w_1, w_2), is defined as W_m,n (w_1, w_2)= w_1Q_1+w_2Q_2, where w_1, w_2 > 0, w_1+w_2=1, and Q_1, Q_2 are defined in (<ref>). Contrasting with the maximum statistic in Example <ref>, for r=1,2 we observe: ∂ W_m,n (w_1, w_2) /∂ Q_r=w_r>0. Thus, W_m,n (w_1, w_2) qualifies as a strictly same-attraction function. Despite being strictly same-attraction, W_m,n (w_1, w_2) is asymptotically less powerful than M_m,n. 
This is due to the fact that M_m,n≥ W_m,n (w_1, w_2) and that W_m,n (w_1, w_2) converges in distribution to χ^2_1. [Minimum statistic <cit.>] The minimum statistic, M_m,n^*, is defined as M_m,n^*=-min(Q_1, Q_2), where Q_1=[1/12(1/m+1/n)]^-1/2 (Q( F_m, G_n)-1/2 ) and Q_2=[1/12(1/m+1/n)]^-1/2 (Q( G_n, F_m)-1/2 ). Both Q_1 and Q_2 fulfill the selfsame property (P1). The coordinate (P2) is also met under the alternative hypothesis H_1, as for r=1,2, ∂ M_m,n^* /∂ Q_r = -1 if min( Q_1, Q_2)=Q_1 0 otherwise . Thus, M_m,n^* is classified as a same-attraction. Moreover, M_m,n^* and M_m,n have the same asymptotic power. This equivalence is demonstrated by M_m,n^* converging in distribution to |𝒩(0,1)| and (M_m,n^*)^2 converging to χ^2_1. Detailed explanations and proofs can be found in Appendices <ref>, <ref>. [Sum statistic <cit.>] The sum statistic S_m,n was firstly proposed by <cit.> and later studied by <cit.> and is defined as S_m,n=-mn/m+n (Q(F_m, G_n)+Q(G_n, F_m)-1), where F_m and G_n represent the empirical distributions of F and G, respectively. Both Q( F_m, G_n) and Q( G_n, F_m) are selfsame. The partial derivatives of S_m,n with respective to both Q(F_m, G_n) and Q(G_n, F_m) are -mn/m+n which are less than zero. Hence, S_m,n qualifies as a strictly same-attraction function. We note that the rate of convergence of S_m,n is very different from that of Q_1 or Q_2. Determining the asymptotic distribution of S_m,n poses a challenge. <cit.> derived an asymptotic approximation of χ^2_1 for the Halfspace depth in one-dimensional Euclidean space. Consequently, based on a specific case of one-dimensional Euclidean depth <cit.>, we set c_α, 1 =1.6566, where P(S_m,n > c_α, 1) →0.05. This leads to the conclusion that the sum statistic S_m,n is asymptotically more powerful than the maximum statistic M_m,n in certain scenarios. For instance, consider the condition: Q(F_m, G_n)+Q(G_n, F_m) ≤ q^+, (0≤ q^+<1). This condition often applies in cases of mean shifts. Then, S_m,n/√(M_m,n) = -(1/m+1/n)^-1/2 (Q(F_m, G_n)+Q(G_n, F_m)-1)/√(12)max(|Q(F_m, G_n)-1/2|, |Q(G_n, F_m)-1/2| ) ≥2 (1-q^+) (1/m+1/n)^-1/2/√(12), ≥1.6566/√(3.84), as (1/m+1/n)^-1/2→∞, suggesting that S_m,n is asymptotically more powerful than M_m,n under the specified condition in (<ref>). Using an alternative method based on the Hoeffding decomposition <cit.> and the Cox-Reid expansion <cit.>, we derive a general form of asymptotic distribution capable for all depth functions. For more details, see Section 2.3. [Product statistic <cit.>] The product statistic P_m,n is defined as P_m,n=-mn/m+n(Q(F_m, G_n) Q(G_n, F_m)-1/4) Consider the partial derivatives of P_m,n, we have ∂ P_m,n/∂ Q(F_m, G_n)=-mn/m+n Q(G_n, F_m) <0 and ∂ P_m,n/∂ Q(G_n, F_m)=-mn/m+n Q(F_m, G_n) <0 almost surely as Q( F_m, G_n) and Q( G_n, F_m) are almost surely positive. Hence, P_m,n is strictly same-attraction. The asymptotic distribution of P_m,n for univariate Euclidean depth can be obtained in a manner similar to that of S_m,n ; detailed explanations are provided in Section 2.3. Setting α=0.05 and c_α, 1 =0.9384, we find P(P_m,n>c_α, 1) → 0.05. Additionally, under the condition specified in (<ref>), the inequality Q(F_m, G_n)× Q(G_n, F_m)≤ (q^+)^2/4<1/4 holds. Consequently, we derive P_m,n/√(M_m,n) =-(Q(F_m, G_n) Q(G_n, F_m)-1/4) (1/m+1/n)^-1/2/√(12)max(|Q(F_m, G_n)-1/2|, |Q(G_n, F_m)-1/2| ) > 2(1/4-(q^+)^2/4) (1/m+1/n)^-1/2/√(12) ≥0.9384/√(3.84), as (1/m+1/n)^-1/2→∞, indicating that P_m,n is asymptotically more powerful than M_m,n under condition in (<ref>). 
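To illustrate how the statistics in the examples above are computed in practice, the sketch below evaluates the empirical Q statistics and the derived sum and product statistics from two samples, using the Mahalanobis depth as the depth function. The helper functions and variable names here are our own illustrative choices; any other depth routine (for instance from the R package ddalpha) could be substituted for the depth computation.

```python
import numpy as np

def mahalanobis_depth(points, sample):
    """Mahalanobis depth of each row of `points` with respect to `sample`."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = points - mu
    md2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared Mahalanobis distances
    return 1.0 / (1.0 + md2)

def q_statistic(x, y, depth=mahalanobis_depth):
    """Empirical Q(F_m, G_n): proportion of pairs with D(x_i; F_m) <= D(y_j; F_m)."""
    dx = depth(x, x)   # depths of the reference sample within itself
    dy = depth(y, x)   # depths of the second sample with respect to the reference
    return np.mean(dx[:, None] <= dy[None, :])

def sum_and_product_statistics(x, y):
    m, n = len(x), len(y)
    q_fg, q_gf = q_statistic(x, y), q_statistic(y, x)
    s = -(m * n) / (m + n) * (q_fg + q_gf - 1.0)    # S_{m,n}
    p = -(m * n) / (m + n) * (q_fg * q_gf - 0.25)   # P_{m,n}
    return s, p

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 2))                 # sample from F
y = 0.3 + 1.2 * rng.normal(size=(100, 2))     # sample from G with mean and scale change
print(sum_and_product_statistics(x, y))
```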
It is also noteworthy that S_m,n and P_m,n are comparable, as they have similar asymptotic distributions. Moreover, we could visualize the rejection region through figures. The Figure <ref> illustrates the rejection region of Q(F_m,G_n), Q(G_n, F_m), M_m,n, S_m,n, and P_m,n, respectively under univariate Euclidean depth. We conducted 1000 simulations with m=n=100. Purple dots are null Q statistics under F=G=𝒩(0,1); blue triangles are alternative Q statistics under F=𝒩(0,1) and G=𝒩(0.8,1.2). The rejection regions are shaded with colored lines. For a larger detailed version of Figure <ref>, see Appendix <ref>. To determine if a test is good, we need to control for both types of errors. In other words, the shaded area should not include the null Q statistics (purple) for reducing the type I error, but should include almost all of the alternative Q statistics (blue) for decreasing the type II error or increasing the power. As shown from Figure <ref>, it turns out that S_m,n and P_m,n have almost the same performance and they all outperform others except M_m,n, but there is a type I error control problem for M_m,n. In the subsequent section, we introduce a permutation algorithm, which is non-parametric and can be applied to other depth measures such as Mahalanobis depth <cit.>, Halfspace depth <cit.>, Spatial depth <cit.>, and Projection depth <cit.>. §.§ Strategic Block Permutation Algorithm Acknowledging the permutation tests are inherently time-intensive, we employ the Strategic Block Permutation Algorithm <cit.>. Initially, the raw test statistic T such as S_m,n and P_m,n is calculated from the empirical distributions F_m and G_n. We divide all samples in F_m into b_1 blocks of size s, i.e., b_1=m/s, and all samples in G_n into b_2 blocks, i.e., b_2=n/s. Assume that the total number of blocks for all samples x_1, …, x_m, y_1, …, y_n is N=b_1+b_2. Combining all N sample blocks together denotes all sample blocks as Z=(Z_1, …, Z_b_1, Z_b_1+1, …, Z_b_1+b_2 ), where the first b_1 sample blocks come from F_m and the second b_2 sample blocks come from G_n. Then, by randomizing all the blocks, there is a total of N! permutations, denoting the set of all permutations as (π(1),…,π(N)). After randomizing all blocks, we have Z̃=(Z_π(1),…, Z_π(N)). Considering the first b_1 blocks as F̃_m and the next b_2 blocks as the G̃_n, we derive the new test statistic T^* from the F̃_m and G̃_n. By calculating all T^* values, in a one-sided test, the p-value is calculated based on the proportion of all T^* for which T^* >T. The pseudo-code for implementing the Strategic Block Permutation Algorithm to compute the p-value is as follows: At a predetermined significance level of α, for the Sum statistic, we reject the null hypothesis if p-value_S<α. As with the Product statistic, we reject the null hypothesis if p-value_P<α. In the subsequent section, we will show the asymptotic distributions of S_m,n and P_m,n for general multidimensional depth functions. §.§ Asymptotic distribution For convenience, we denote I(x,y,F) =I(D(x;F)≤ D(y;F)), I(x,y,F_m,G_n) = I(x,y,F_m) -I(x,y,G_n), ρ_s(y;F_m,F) =E_x[(D(y;F_m)-D(x;F_m))^s|D(x;F)=D(y;F)], s=1,2,3. Inspired by <cit.>, we rely on the Hoeffding decomposition and the Cox-Reid expansion to study the asymptotic distributions of S_m,n and P_m,n. First, we apply the following Hoffding decomposition as follows: -m+n/mnS_mn=Q(F_m,G_n)+Q(G_n,F_m)-1=M_mn1+M_mn2+R_mn, where M_mn1=∫∫ I(x,y,F_m,G_n)dF_m(x)dG(y), M_mn2=∫∫ I(x,y,F_m,G_n)dF(x)dG_n(y), R_mn=-m+n/mnS_mn-M_mn1-M_mn2. 
Here M_mn1 and M_mn2 are the two main terms, which are conditionally independent given some conditions related to F_m and G_n, and R_mn is the higher order term. Meanwhile, we will also apply the Cox-Reid expansion to the conditional probabilities to obtain exact expansions of M_mn1 and M_mn2, where ∫ I(x,y,F_m)dF(x)=F_D(X;F)(D(Y;F))+f_D(X;F)(D(Y;F))ρ_1(Y;F_m,F) +∂/∂(D(Y;F))[f_D(x;F)(D(Y;F))ρ_2(Y;F_m,F)]+O_p(m^-3/2). Note in particular that we consider the third-order expansion different from the second-order expansion in <cit.>. Similar to the P_mn, we observe that -m+n/mnP_mn=Q(F_m, G_n)× Q(G_n, F_m)-1/4 = (Q(F_m, G_n)-1/2)×(Q(G_n, F_m)-1/2)+1/2(Q(F_m, G_n)+Q(G_n, F_m)-1). For the expansion of -m+n/mnP_mn in (<ref>), we use Lemma <ref> in Appendix E to expand the first term, and for the second term, we use Appendix F to expand -m+n/mnS_m,n in (<ref>). To construct the complete proof, we list assumptions as follows: E[(sup_x∈ R^d|D(x;F_m)-D(x;F)|)^α]=O(m^-α/2). E(Σ_ip_ix(F_m)p_iy(F_m))=o(Δ_m) (where Δ_m→ 0) if there exists c_i such that p_ix(F_m)>0 and p_iy(F_m)>0 for p_iZ(F_m)=:P(D(Z;F_m)=c_i|F_m), i=1,2,…. For i≠ k, j≠ l, let ρ_1(x_i;F_m,F)⊥ρ_1(x_k;F_m,F)|Λ_m and ρ_1(y_j;F_m,F)⊥ρ_1(y_l;F_m,F)|Λ_m, where ⊥ denotes independence. Assume that E[f_D(y;F)(D(x;F))ρ_1(x;F_m,F)|Λ_m] =0, E[f_D(x;F)(D(y;F))ρ_1(y;F_m,F)| Λ_m] =0, 1/n∑_j=1^n∂/∂(D(y_j;F))[f_D(x;F)(D(y_j;F))ρ_2(y_j;F_m,F)] =O_P(m^-3/2), 1/m∑_i=1^m∂/∂(D(x_i;F))[f_D(y;F)(D(x_i;F))ρ_2(x_i;F_m,F)] =O_P(m^-3/2), sup_ζ{[∂^2/∂^2 D(y;F)(f_D(x;F)(D(y;F))ρ_3(y;F_m,F))]|_D(y;F)=ζ} =O_P(m^-3/2), sup_ζ{[∂^2/∂^2 D(x;F)(f_D(y;F)(D(x;F))ρ_3(x;F_m,F))]|_D(x;F)=ζ} =O_P(m^-3/2), where F_D(x;F)(·) be the distribution function of D(x;F), f_D(x;F)(·) be the derivative of F_D(x;F)(·), and Λ_m is the condition related to F_m, which may vary for different depth functions. For i≠ k and j≠ l, suppose that E[I(x_i,y_j,F_m,G_n)|Λ_mn]=0 and I(x_i,y_j,F_m,G_n)⊥ I(x_k,y_l,F_m,G_n)|Λ_mn, where Λ_mn is the condition related to F_m and G_n, and it may vary for different depth functions. Under alternative hypothesis, F≠ G, such that ||θ(F)-θ(G)||≠0, with θ(F) and θ(G) are the parameters of the distributions F and G, respectively, we further suppose -[Q(F_m,G_n)+Q(G_n,F_m)-1] and -[Q(F_m,G_n)Q(G_n,F_m)-1/4] can be approximated by q(||θ(F)-θ(G)||)+o_p(1), where q(x) is a monotonically increasing function of x. Assumptions <ref> and <ref> origin from <cit.>, but we require α=4 for the third-order expansion. Assumption <ref> relates to the Cox-Reid expansion, while Assumption <ref> relates to the Hoeffding decomposition; see more details in <cit.>. Assumptions <ref>-<ref> apply to a wide range of depths, such as Euclidean depth, Mahalanobis depth, Halfspace depth, Projection depth, and Spatial depth. See Appendix <ref> for these verifications. Assumption <ref> ensures that the power can be obtained under the alternative hypothesis. For the verification of Assumption <ref>, see Appendix <ref>. Under the null hypothesis F=G and m/n→ c>0 (c is a constant), then under Assumptions <ref>-<ref>, it has S_m, n = -m n/m+n{-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F)) +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))} +o(m n/m+nΔ_m)+O_P(m^-1/2), and P_m, n = m n/m+n{[1/n∑_j=1^n(F_D(x;F)(D(y_j;F))-1/2)+1/m∑_i=1^m(1/2-F_D(y;F)(D(x_i;F)))]^2} -mn/2(m+n){-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F)) +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))} +o(m n/m+nΔ_m)+O_P(m^-1/2). 
The proof of Theorem <ref> is presented in Appendix <ref>, as well as the relevant lemmas in Appendix <ref>. Under the null hypothesis F=G, the asymptotic distribution of S_m,n and P_m,n in one-dimensional Euclidean depth <cit.> follows the related Craig distribution <cit.>: S_m,n→ -Z_1 Z_2, P_m,n→ Z_3^2-1/2 Z_1 Z_2, where Z_1 ∼𝒩(0,1), Z_2 ∼𝒩(0,2/√(3)π), Z_3 ∼𝒩(0,1/12), Cov(Z_1, Z_2)=-1/π, and Z_3 is independent of Z_1 and Z_2. The proof of Remark <ref> can be found in Appendix <ref>. Moreover, in the Appendix <ref>, we compare the approximated density and the simulated density. It turns out that the approximation is very accurate. For the power of permutation test, we have the following Theorem <ref>. Assume Assumption <ref> holds, under the alternative hypothesis, the power of the permuted Sum or Product tests at the significance level α approaches 1 as both the block size s and the number of repetitions 𝒞 in the Strategic Block Permutation Algorithm go to infinity. The proof of Theorem 2 is presented in Appendix <ref>. § SIMULATION STUDIES: TWO-SAMPLE TEST We consider power comparisons between the Sum statistic S_m,n (<ref>) and the Product statistic P_m,n (<ref>) alongside of other existing methods: M_m,n (<ref>), M_m,n^* (<ref>), Depth-Based Rank Statistics (DbR) <cit.>, and Modified Depth-Based Rank Statistics (BDbR) <cit.> within the context of multivariate data, focusing on multivariate distributions and employing various depth functions. There are various possible depth functions, here we present simulations on three popular depth functions: Mahalanobis depth, spatial depth and projection depth. Under the null hypothesis, we assume F=G=N(0,I_2×2), where N(0,I_2×2) denotes the bivariate normal distribution with a mean vector 0 and a two-by-two identity covariance matrix I_2× 2. Our objective is to assess the power of different test statistics under three distinct scenarios: changes in scale, mean, and both scale and mean. We set the significance level at α=0.05. For M_m,n, M_m,n^*, DbR and BDbR, critical values were determined as the upper 95% quantiles from 1000 replications under the null hypothesis (F =G= 𝒩(0,1)). Power was then calculated as the proportion of instances across 1000 repetitions where the test statistic exceeded these critical values. For permutation tests based on P_m,n and S_m,n using the Strategic Block Permutation algorithm, we set the threshold for the p-value to be the lower 5% quantile of the simulated 1000 p-values under the null hypotheses, with the number of repetitions 𝒞=200 and block size s=25. The power of the permutation test is then calculated using the proportion of times in 1000 repetitions that the statistic is less than the lower 5% quantile. For the case of scale change, we consider the distributions F=N(0,I_2×2) and G=N(0,I_2×2+0.5Ĩ_2×2), where Ĩ_2×2 =((0,1)^⊤, (1,0)^⊤). Power comparisons, depicted in Figure <ref>, are based on Mahalanobis depth, spatial depth and projection depth, for each scenario. It is evident from the Figure <ref> that all three depth functions exhibit similar trends in the power of the various test statistics under examination. Notably, P_m,n and S_m,n are comparable and outperform all other tested statistics across all depth functions, attributed to their efficacy in detecting scale changes. Additionally, M_m,n, M_m,n^*, and BDbR test statistics show similar levels of performance. 
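To make the comparison above concrete, the following Python sketch shows one way the Sum and Product statistics and a block-permutation p-value could be computed for the scale-change setting with Mahalanobis depth. It is only an illustration: the helper names (mahalanobis_depth, sum_product_stats, block_permutation_pvalue) and the vectorisation are our own choices rather than the implementation used for the reported results, and the sketch assumes the sample sizes are multiples of the block size s.

```python
import numpy as np

def mahalanobis_depth(x, sample):
    """Empirical Mahalanobis depth of the rows of x with respect to a sample."""
    mu = sample.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(sample, rowvar=False))
    diff = x - mu
    d2 = np.einsum('ij,jk,ik->i', diff, cov_inv, diff)   # (x - mu)' Sigma^{-1} (x - mu)
    return 1.0 / (1.0 + d2)

def Q(F, G, depth=mahalanobis_depth):
    """Liu-Singh quality index Q(F_m, G_n): proportion of F-points whose depth
    (w.r.t. F_m) does not exceed that of a G-point, averaged over all pairs."""
    dF = depth(F, F)
    dG = depth(G, F)
    return np.mean(dF[:, None] <= dG[None, :])

def sum_product_stats(F, G, depth=mahalanobis_depth):
    m, n = len(F), len(G)
    q1, q2 = Q(F, G, depth), Q(G, F, depth)
    scale = m * n / (m + n)
    return -scale * (q1 + q2 - 1.0), -scale * (q1 * q2 - 0.25)   # S_mn, P_mn

def block_permutation_pvalue(F, G, stat, s=25, C=200, seed=0):
    """One-sided p-value: proportion of block-permuted T* exceeding the raw T."""
    rng = np.random.default_rng(seed)
    T = stat(F, G)
    Z = np.concatenate([F, G])
    b1, b2 = len(F) // s, len(G) // s
    blocks = Z.reshape(b1 + b2, s, -1)
    exceed = 0
    for _ in range(C):
        idx = rng.permutation(b1 + b2)
        F_perm = blocks[idx[:b1]].reshape(b1 * s, -1)
        G_perm = blocks[idx[b1:]].reshape(b2 * s, -1)
        exceed += stat(F_perm, G_perm) > T
    return exceed / C

# scale-change alternative considered above
rng = np.random.default_rng(1)
m = n = 100
F = rng.multivariate_normal([0.0, 0.0], np.eye(2), size=m)
G = rng.multivariate_normal([0.0, 0.0], np.eye(2) + 0.5 * np.array([[0.0, 1.0], [1.0, 0.0]]), size=n)
S, P = sum_product_stats(F, G)
p_S = block_permutation_pvalue(F, G, lambda a, b: sum_product_stats(a, b)[0])
p_P = block_permutation_pvalue(F, G, lambda a, b: sum_product_stats(a, b)[1])
```

Note that the power study above additionally calibrates the rejection threshold against p-values simulated under the null hypothesis; the sketch only returns the raw one-sided permutation p-value.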
In assessing the performance of a change in mean on test statistic power, we consider two specific distributions: F=N(0,I_2×2) and G=N((0.3,0.3)^⊤, I_2×2). Power comparisons for this setting are illustrated in Figure <ref>. The results reveal that P_m,n and S_m,n statistics not only demonstrate comparable performance to each other but also consistently outperform the other tested statistics. In the scenario where both mean and scale change, we consider the distributions F=N(0,I_2×2) and G=N((0.2,0.2)^⊤, I_2×2+0.4Ĩ_2×2). The power comparisons, as shown in Figure <ref>, further validate the superior performance of the P_m,n and S_m,n statistics in this multivariate setting. In conclusion, both P_m,n and S_m,n demonstrate comparable and notably high efficacy across various multivariate depth functions. These statistics have proven to be promising tools, particularly in scenarios involving changes in mean, scale, or both, within multivariate distributions. Their consistent performance across different scenarios highlights their potential as versatile and robust choices for statistical testing in multivariate analyses. § RAMAN SPECTRUM As introduced earlier in the Introduction section, the classification of different shapes in a range of spectra is crucial in the context of prostate cancer. Our dataset comprised 48 spectra, 46 of which were usable (two were discarded due to detector saturation), each containing 1019 wavenumbers. The dataset includes a column of wavenumber values, ranging from 147 cm^-1 to 1870 cm^-1, alongside a corresponding column of intensity values measured in counts. Our primary focus is on the spectral shapes that exhibit a peak at 1524 cm^-1, a wavenumber indicative of carotenoids presence, which is a crucial biomarker for the diagnosis and prognosis of prostate cancer <cit.>. For the analysis, we concentrated on the wavenumber range from 1503 cm^-1 to 1544 cm^-1, centered at 1524 cm^-1. This range encompasses 27 wavenumber values, referred to as the data dimension d. Our approach involved a two-stage classification process. In the first stage, we employed a linear regression model to fit the quadratic values to the spectral intensities, using the R-squared value as a measure of fit. A spectrum with a peak should resemble a quadratic curve, as illustrated in Figure <ref>. Spectra were then categorized into two groups based on an R-squared threshold of 0.5. Specifically, the quadratic function used was (x-x_0)^2/27^2, where x ranges from 1 to 27 and x_0 is the central point at 14. The R-squared value determines the fit of this quadratic model to the index values of x, with a threshold of 0.5 employed to distinguish between two possible peak types. Spectra with R-squared values below 0.5 were classified into Group 1, while those with higher values were categorized into Group 2. In total, 35 spectra were assigned to Group 1 and 11 spectra to Group 2. In the second stage of our analysis, aimed at more accurately distinguishing between the two spectral shapes, we conducted a two-sample test considering various dimensions. Initially, based on the classifications from the first stage, we focused on the central 27 points around 1524 cm^-1, denoted as 27M. To assess the impact of smaller dimensions on test power, we also considered 15 points in the middle (15M), 5 points to the left of the center (5L), and 5 points to the right of the center (5R). 
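Before turning to the choice of dimension, we note that the first-stage grouping rule described above amounts to only a few lines of code. The sketch below is illustrative: the helper name peak_group is ours, and it assumes each spectrum has already been restricted to the 27 wavenumbers centred at 1524 cm^-1.

```python
import numpy as np

def peak_group(intensities, threshold=0.5):
    """First-stage grouping of a 27-point spectral window by quadratic fit.

    intensities: 27 intensity counts centred at 1524 cm^-1.
    Returns 1 if R^2 < threshold (no clear quadratic shape), else 2."""
    x = np.arange(1, 28)
    quad = (x - 14) ** 2 / 27 ** 2                 # quadratic regressor (x - x0)^2 / 27^2
    X = np.column_stack([np.ones_like(quad), quad])
    beta, *_ = np.linalg.lstsq(X, intensities, rcond=None)
    resid = intensities - X @ beta
    r2 = 1.0 - resid @ resid / np.sum((intensities - intensities.mean()) ** 2)
    return 1 if r2 < threshold else 2

# hypothetical usage: `spectra` is a (46, 27) array of intensity counts
# groups = np.array([peak_group(s) for s in spectra])
```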
We consider different dimensions to examine their effect as collecting more counts in raw data asks for additional costs and requires a larger budget. For each dimension, we computed the p-values for S_m,n and P_m,n with a block size s=2 and repetition number 𝒞=1000 , and compared these with M_m,n, M_m,n^*, DbR, and BDbR as shown in the Table <ref>. It is important to note that in larger dimensions such 27M and 15M, Mahalanobis and Spatial depths were not applicable due to the non-invertibility of the sample covariance matrix, and thus are omitted from these comparisons. Additionally, we investigated the effects of logarithmic transformation on the depth measures. To distinguish between the original and log-transformed depths, we added an “O" suffix for the original depths and an “L" suffix for log-transformed depths in our notation. For example, “MO" signifies Mahalanobis depth applied to the original counts, while “ML" refers to Mahalanobis depth on log-transformed counts. Using significance level α=0.1 and comparing the resulting p-values, we observed that P_m,n, S_m,n, and BDbR consistently ranked among the top three, with projected depths showing greater dominance in higher dimensions. Conversely, Mahalanobis depths were more influential in lower dimensions. We also noted that logarithmic transformations tended to slightly improve consistency in inference. In addition to the methods previously discussed, we employed the concept of a scale curve, introduced by <cit.>, to compare the dispersion or scale of two samples. The scale curve quantifies the volume of the α-trimmed region of distribution F, denoted as D_α(F), which is defined as D_α (F)={x∈R^d: D(x; F) ≥α}. Consequently, we plotted the volume of this convex region V(α; F_m) against the 1-α scale. Figure <ref> displays the scale curves derived from the Mahalanobis depth, illustrating both the raw (left) and log-transformed (right) intensity values for the 5L spectral dimension. This analysis further validates the differences between the two samples. Additionally, adjusting the R-squared threshold in the initial spectral classification step results in varying compositions of spectra in each group, as detailed in Table <ref>. It is noteworthy that the spectra in Group 2 exhibit more complexity compared to those in Group 1, as depicted in Figure <ref>. For instance, setting the R-squared threshold at 0.4 yields small p-values for all test statistics (as seen in Table <ref>), suggesting a significant difference between the two groups. This significance is attributed to the reclassification of some spectra from Group 1 into Group 2, leading to a smaller yet still discernible difference. Conversely, when the threshold is increased to 0.6, most p-values become larger (refer to Table <ref>), indicating that the differences between the groups are not statistically significant. This shift results from some spectra originally in Group 2 being categorized into Group 1, further narrowing the differences. The comprehensive analysis conducted on various spectral shapes firmly establishes the efficacy of our proposed statistical method. This method not only effectively differentiates between distinct types of tumor samples but also demonstrates superior performance compared to other competing methodologies. The ability to discern subtle variations in spectral data is crucial in the context of tumor sample classification, and our approach has proven to be a robust tool in this regard. 
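As a complement to Figure <ref>, the empirical scale curve itself can be traced with a short routine. The sketch below is one possible implementation, not the exact code behind the figure: the helper name scale_curve is ours, it reuses the mahalanobis_depth function from the earlier simulation sketch, and it estimates V(α; F_m) by the volume of the convex hull of the α-trimmed sample with only a crude guard against trimmed sets that are too small.

```python
import numpy as np
from scipy.spatial import ConvexHull

def scale_curve(sample, alphas, depth):
    """Estimate V(alpha; F_m) by the convex-hull volume of the alpha-trimmed
    sample {x_i : D(x_i; F_m) >= alpha}."""
    depths = depth(sample, sample)
    d = sample.shape[1]
    vols = []
    for a in alphas:
        trimmed = sample[depths >= a]
        # qhull needs at least d + 1 points in general position
        vols.append(ConvexHull(trimmed).volume if len(trimmed) > d else 0.0)
    return np.array(vols)

# hypothetical usage for the 5-dimensional 5L window, raw intensity counts:
# alphas = np.linspace(0.05, 0.95, 19)
# v1 = scale_curve(group1_5L, alphas, mahalanobis_depth)
# v2 = scale_curve(group2_5L, alphas, mahalanobis_depth)
# plotting v1 and v2 against 1 - alphas gives curves analogous to Figure <ref>
```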
Through the use of scale curves, depth functions, and strategic permutation testing, our method offers a reasonable and precise means of detecting and categorizing spectral differences. This is particularly vital in the diagnosis and prognosis of conditions like prostate cancer, where accurate identification of biomarkers such as carotenoids is essential. The success of this method in outperforming other statistical techniques underscores its potential as a valuable asset in medical spectral analysis and related fields. § CONCLUSIONS AND LIMITATIONS This paper, motivated by the challenge of detecting shapes in spectral data, introduces two novel test statistics, S_m,n and P_m,n, and proposes the DEEPEAST technique to infer homogeneity of two samples. We have defined the concept of the same-attraction function, which encompasses our new statistics, and conducted comparative analyses with existing similar statistics. The strategic block permutation algorithm was developed to facilitate the comprehensive application of the DEEPEAST technique across various depth functions. Their asymptotic distributions were derived by applying Hoeffding decomposition and Cox-Reid expansion under reasonable assumptions. Our extensive simulated power comparisons reveal that P_m,n and S_m,n exhibit superior performance in multivariate distributions. Furthermore, the application of the DEEPEAST technique in spectral sample comparisons conclusively shows that P_m,n and S_m,n outperform other test statistics in distinguishing differences between two samples of varying dimensions. This paper presents significant theoretical findings on the asymptotic distributions of S_m,n and P_m,n. However, in practical applications, results may deviate from these theoretical distributions due to finite sample sizes. To address this, our proposed algorithm offers a method for obtaining approximate p-values. Additionally, to assess the performance across different depth functions, we selected three representative and widely used depth functions in our simulation study. The consistently strong performance of S_m,n and P_m,n across these depth functions highlights the robustness of these tests, making it highly unlikely that they will show inconsistency across all depth functions. It is also important to note that, although some information may be lost when projecting multivariate data into a one-dimensional space, our two novel test statistics are designed to retain a significant portion of that information. The simulation and real data analyses of both test statistics confirm the significant increase in information gain in practice, which is also supported by the theoretical basis. If the recovery of additional information is necessary for specific data analysis, appropriate dimensionality reduction techniques can be applied, see <cit.>. Moreover, our research significantly contributes to two-sample testing, opportunities exist to extend this approach to multi-sample testing and apply it to various other fields. This paper creates an intriguing and challenging avenue for the theoretical research, promising to further enhance the use and applicability of our findings in broader statistical contexts. § AUTHOR CONTRIBUTIONS Y.C., W.L., A.J., and X.S. designed research; Y.C., W.L., M.G., A.J., and X.S. performed research; A.J. and K.M. collected the prostate sample and RS; Y.C., W.L., A.J., and X.S. analyzed data; and Y.C., W.L., M.G., A.J., and X.S. wrote the paper. § ACKNOWLEDGEMENT Dr. 
Shi’s work was supported by the Natural Sciences and Engineering Research Council of Canada under Grant RGPIN-2022-03264, the Interior Universities Research Coalition and the BC Ministry of Health, the NSERC Alliance International Catalyst Grant ALLRP 590341-23, and the University of British Columbia Okanagan (UBC-O) Vice Principal Research in collaboration with UBC-O Irving K. Barber Faculty of Science. § APPENDIX § DETAILS OF FIGURE 1 § PROOF OF MINIMUM STATISTIC Based on the works of <cit.>, and <cit.>, we obtain the following expansion of the Q statistic: Q( G_n, F_m)-1/2 = 1/2-Q( F_m, G_n)+o_p(n^-1/2)+o_p(m^-1/2). <cit.> showed that [1/12(1/m+1/n)]^-1/2 (Q(F_m, G_n)-1/2 ) 𝒩(0,1). Therefor, we have min(Q(F_m, G_n), Q(G_n, F_m)) = min (Q(F_m, G_n)-1/2, Q(G_n, F_m)-1/2 )+1/2 = -| Q(F_m, G_n)-1/2 | +1/2, under H_0 Hence, M_m,n^* = [1/12(1/m+1/n)]^-1/2 (1/2- | Q(F_m, G_n)-1/2 | - 1/2 ) =[1/12(1/m+1/n)]^-1/2| Q(F_m, G_n)-1/2 | | 𝒩(0,1) | § EQUIVALENCE OF MINIMUM AND MAXIMUM STATISTICS Under null hypothesis, M_m,n^*=[1/12(1/m+1/n)]^-1/2| Q(F_m, G_n)-1/2 |. Then (M_m,n^*)^2=[1/12(1/m+1/n)]^-1( Q(F_m, G_n)-1/2 ) ^2. By <cit.>, [1/12(1/m+1/n)]^-1/2 (Q(F_m, G_n)-1/2 ) 𝒩(0,1), which means [1/12(1/m+1/n)]^-1( (Q(F_m, G_n)-1/2 ) )^2 χ^2_1. Therefore, (M_m,n^*)^2χ^2_1. § VERIFICATION OF ASSUMPTION 1-4 Euclidean depth: The univariate Euclidean depth of any point x∈ R in a one-dimensional distribution F is defined as ED(x;F)=1/1+(x-μ_F)^2, where μ_F is the mean of the distribution F. The empirical version of ED(X;F) is ED(X;F_m)=1/(1+(X-X)^2), where X=∑_i=1^mX_i/m. Thus, |ED(X;F_m)-ED(X;F)|^α=|(X-μ_F)(2X-X-μ_F)/(1+(X-X)^2)(1+(X-μ_F)^2)|^α Assume X is strongly consistent estimators of μ_F and E(X-μ_F)^α=O(m^-α/2). By Hölder's inequality, Assumption <ref> holds. Assumption <ref> holds if the depth is continuous or we require a rate. Similar to <cit.>, we convert the Euclidean depth to the Euclidean distance d(x;F)=|x-μ_F| as follows: F_ED(X;F)(ED(Y;F))=1-F_d(X;F)(d(Y;F)). Since ED(X;F_m) does not depend on the mean, without loss of generality, we further assume that μ_F=0. We assume that the density function f_d(X;F)(·) is bounded and symmetric about the Y-axis from above. Let the condition Λ_m be X̅. Since f_d(X;F)(d(Y;F)) is an even function of Y and ρ_1(Y;F_m,F)|Λ_m is equal to zero or an odd function of Y, so E_Y[f_d(X;F)(d(Y;F))ρ_1(Y;F_m,F)|Λ_m]=0. Thus, (<ref>) and (<ref>) hold. Since ρ_2(Y;F_m,F)|Λ_m is an even function of Y. Thus, ∂/∂(d(y_j;F))[f_d(x;F)(d(y_j;F))ρ_2(y_j;F_m,F)] is an odd function. Then, it has E{1/n∑_j=1^n∂/∂(d(y_j;F))[f_d(x;F)(d(y_j;F))ρ_2(y_j;F_m,F)]}=0. Furthermore, ∂/∂(d(y_j;F))[f_d(x;F)(d(y_j;F))ρ_2(y_j;F_m,F)] conditionally independent for j=1,⋯,n and ρ_2(Y;F_m,F)=O_p(m^-1), we use Cauchy's theorem to bound the partial derivatives in terms of integrals as well as Cauchy-Schwarz Inequality, leading to E{1/n∑_j=1^n∂/∂(d(y_j;F))[f_d(x;F)(d(y_j;F))ρ_2(y_j;F_m,F)]}^2=O(m^-3). Therefore, (<ref>) and (<ref>) hold. Since ρ_3(Y;F_m,F)=O_p(m^-3/2), we use Cauchy's theorem to bound the partial derivatives in terms of integrals, where the integrating function is O_p(m^-3/2), leading to sup_ξ{∂^2/∂^2d(Y;F)[f_d(X;F)(d(Y;F))ρ_3(Y;F_m,F)]|_d(Y;F)=ξ}= O_p(m^-3/2) Therefore, Assumption <ref> holds. To check Assumption <ref>, we consider the condition Λ_mn to be F_m and G_n. For i≠ k ∈{1,⋯,m} and j ≠ l∈{1,⋯,n}, there is E{E[I(X_i,Y_j,F_m, G_n)I(X_k,Y_l,F_m,G_n)|Λ_mn]} =E{E[I(X_i,Y_j,F_m,G_n)|Λ_mn]E[I(X_k,Y_l,F_m,G_n)|Λ_mn]}. 
Moreover, noting that ED(X_i;F_m) and ED(Y_j;F_m), ED(X_i;G_n) and ED(Y_j;G_n) are conditionally independent and have the same distribution given Λ_mn, we have E[I(X_i,Y_j,F_m,G_n)|Λ_mn]=P((ED(X_i;F_m)≤ ED(Y_j;F_m))|Λ_mn)-P(ED(X_i;G_n)≤ ED(Y_j;G_n)|Λ_mn)=0. Mahalanobis depth: For any point x∈ R^d, the Mahalanobis depth of point x is defined follows: MD(x;F)=1/1+(x-μ_F)'Σ^-1_F(x-μ_F), where F is a d-dimensional distribution and μ_F and Σ_F are the mean and covariance of F, respectively. The empirical version of MD(X;F) is MD(X;F_m)=1/1+(X-X)'Σ_F_m^-1(X-X), where X and Σ_F_m are the sample mean and sample covariance, respectively. <cit.> have already discussed Assumption <ref> and <ref> for the Mahalanobis depth. Next, we consdider Assumption <ref> and <ref>. Similarly, we also change the Mahalanobis depth to the Mahalanobis distance d(x;F)=Σ^-1/2_F(x-μ_F)_2, and obtain F_MD(X;F)(MD(Y;F))=1-F_d(X;F) (d(Y;F)). Meanwhile, we also assume that the density function f_d(X;F)(·) is bounded and symmetric about the Y-axis from above and that μ_F=0. And, we consider the condition Λ_m to be X̅ and Σ_F_m. Due to ρ_1(Y;F_m,F)|Λ_m is equal to zero or an odd function of Y, we have E_Y[f_d(X;F)(d(Y;F))ρ_1(Y;F_m,F)|Λ_m]=0 . Again, since ∂/∂(d(y_j;F))[f_d(x;F)(d(y_j;F))ρ_2(y_j;F_m,F)] is an odd function and conditionally independent for j=1,⋯,n, ρ_2(Y;F_m,G_n)=O_p(m^-1) and ρ_3(Y;F_m,G_n)=O_p(m^-3/2), we also apply Cauchy's theorem to bound the partial derivatives in terms of integrals and Cauchy-Schwarz Inequality, so does the rest of the argument. Halfspace depth: For any point x∈ R^d, the Halfspace depth, also known for the Tukey depth, is defined as HD(x;F)=inf{P(H_x): H_x is a closed half space containing x }, where P is the probability measure corresponding to F. If we replace the probability P by the empirical probability, we obtain the sample version of HD(x, F), denoted HD(x, F_m). Since f_HD(X;F)(HD(Y;F)) is a constant, ρ_2(Y;F_m,G_n)=O_p(m^-1) and ρ_3(Y;F_m,G_n)=O_p(m^-3/2). The remaining arguments are the same. Projection depth: For any point x∈ R^d, the Projection depth is defined as P D(x ; F)=1 /(1+O(x ; F)), where O(x; F)=sup _u ∈ S^d-1|u^' x-μ(F_u)| / σ(F_u), S^d-1={u:u=1}, μ(F) and σ(F) are location and scale of F, respectively, and u^' X ∼ F_u with X ∼ F. The empirical version of PD(X;F) is PD(X;F_m)=1/1+sup _u ∈ S^d-1|u^' X-μ(F_mu)| / σ(F_mu), where F_mu is the empirical distribution of u^' X_1,⋯,u^' X_m. Assume that μ(a X+b)=a μ(X)+b and σ(a X+b)=|a| σ(X) for any scalars a, b ∈R^1. The rest of the verification is the same. Spatial depth: For any point x∈ R^d, the Spatial depth is defined as SD(x;F)=1-E_X(x-X)/x-X, X∼ F. The empirical version of SD(X;F) is SD(X;F_m)=1-1/m∑_i=1^m(X-X_i)/X-X_i, which does not depend on the mean of F. We also assume that X is continuous and its density is symmetric around mean zero, Esup_x∈ R^d∑_i=1^m((x-X_i)/x-X_i-E(x-X_i)/x-X_i)^α=O(m^α/2)  (α>0), and the density of SD(X;F) denoted as f_SD(X;F)(·) is bounded from above. Again, (<ref>)-(<ref>) follows by ρ_2(Y;F_m,F)=O_p(m^-1) and ρ_3(Y;F_m,F)=O_p(m^-3/2). § VERIFICATION OF ASSUMPTION 5 To justify Assumption 5, without loss of generality we consider θ(F)=E(X)=μ_1≠θ(G)=E(Y)=μ_2, where X and Y are adhering to normal distributions F and G, respectively, each with a variance of 1. 
By Taylor expansions of X and Y around their expectation E(X)=μ_1 and E(Y)=μ_2 respectively, we have m+n/mn S_m,n = -[E{1-F_χ^2[(Y-μ_1)^2]}-E{[F_χ^2[(X-μ_2)^2]}] +o_p(1), =-1+2F_χ^2[(μ_2 -μ_1)^2]+o_p(1), m+n/mn P_m,n = -[E{1-F_χ^2[(Y-μ_1)^2]} E{1-F[(X-μ_2)^2]}-1/4] =-{1-F_χ^2[(μ_2-μ_1)^2]}^2+1/4 +o_p(1), where F_χ^2 denotes the distribution function of χ^2_1. It is evident that both -1+2F_χ^2[(μ_2 -μ_1)^2] and -{1-F_χ^2[(μ_2-μ_1)^2]}^2+1/4 are monotonically increasing functions of (μ_2 -μ_1)^2. For multivariate distributions, we draw upon Theorem 6.1 in <cit.> to extend our analysis. The statistics -[Q(F_m,G_n)+Q(G_n,F_m)-1] and -[Q(F_m,G_n)Q(G_n,F_m)-1/4] can be represented as 1-Q(F,G)-Q(G,F)+o_p(1) and 1/4-Q(F,G)Q(G,F)+o_p(1), respectively. The focus then shifts to demonstrating that Q[(1-β)F+β G,(1-β)G+β F]+Q[(1-β)G+β F,(1-β)F+β G]>Q(F,G)+Q(G,F) and Q[(1-β)F+β G,(1-β)G+β F]Q[(1-β)G+β F,(1-β)F+β G]>Q(F,G)Q(G,F) for 0<β<1. As argued by <cit.>, Q decreases if there is a location shift or a scale increase, or both in terms of contamination: Q(F,G) <Q[F,(1-β)G+β F] <Q[(1-)F+{(1-β)G+β F},(1-β)G+β F] =Q[(1-β)F+β G,(1-β)G+β F], where =β/(1-β). A similar argument holds for G(G,F), leading to Q(G,F)<Q[G,(1-β)F+β G]<Q[(1-β)G+β F,(1-β)F+β G]. § PROOF OF LEMMAS Before proving the theorem, we first introduce the relevant lemmas and its proof. Let Y=X_0+ϵ X_1 for some fixed ϵ>0. If the joint density of X_0 and X_1 satisfie (i) lim _x_0 →-∞∂^2/∂ x_0^2 f(x_0, x_1)=0, (ii) sup _x_0| ∂^2/∂ x_0^2f(x_0, x_1)|<K g(x_1), where ∫ |x_1|^3 g(x_1) d x_1<∞, (iii) E(X_1^2 | X_0)<∞ a.e., then F_Y(y)=F_X_0(y)-ϵ f_X_0(y) E(X_1 | X_0=y)+1/2ϵ^2 ∂/∂ y{f_X_0(y) E(X_1^2 | X_0=y)}+O(ϵ^3), where F_X(·) is the distribution function of X and f_X(·) is the derivative of F_X(·). Under Assumption <ref>, for j≠ l, we have E(I(X_i,Y_j,F_m,G_n)E_XI(X,Y_l,F_m,G_n))=E(E_XI(X,Y_j,F_m,G_n)E_XI(X,Y_l,F_m,G_n)) =0; for i≠ k, E(I(X_i,Y_j,F_m,G_n)E_YI(X_k,Y,F_m,G_n))=E(E_YI(X_i,Y,F_m,G_n)E_YI(X_k,Y,F_m,G_n))=0; for i≠ k,j≠ l, E(I(X_i,Y_j,F_m,G_n)I(X_k,Y_l,F_m,G_n))=0. for i=k and j≠ l, E(I(X_i,Y_j,F_m,G_n)I(X_k,Y_l,F_m,G_n)) = E(I(X_i,Y_j,F_m,G_n)E_Y_lI(X_k,Y_l,F_m,G_n)) = E(E_Y_jI(X_i,Y_j,F_m,G_n)E_Y_lI(X_k,Y_l,F_m,G_n)); and, for i≠ k and j=l, E(I(X_i,Y_j,F_m,G_n)I(X_k,Y_l,F_m,G_n)) = E(I(X_i,Y_j,F_m,G_n)E_X_kI(X_k,Y_l,F_m,G_n)) = E(E_X_iI(X_i,Y_j,F_m,G_n)E_X_kI(X_k,Y_l,F_m,G_n)). The proof of this Lemma is similar to Lemma A7 of <cit.>, except that G_n is added to the given conditions. When j≠ l, based on the independence of Y_j and Y_l and Assumption <ref>, we have E(E_XI(X,Y_j,F_m,G_n)E_XI(X,Y_l,F_m,G_n)) = E{E[(I(X_i,Y_j,F_m,G_n)E_XI(X,Y_l,F_m,G_n))|F_m, G_n, Y_j, Y_l]} = E(I(X_i,Y_j,F_m,G_n)E_XI(X,Y_l,F_m,G_n)) = E{E[(I(X_i,Y_j,F_m,G_n)E_XI(X,Y_l,F_m,G_n))|F_m,G_n]} = E{E_X_i, Y_j[(I(X_i,Y_j,F_m,G_n))|F_m,G_n]E_Y_l[E_X(I(X,Y_l,F_m,G_n))|F_m,G_n]} = 0. Therefore, (<ref>) holds. Similarly, when i≠ k and i≠ k,j≠ l, (<ref>) and (<ref>) also hold respectively. When i=k and j≠ l, it has E(I(X_i,Y_j,F_m,G_n)I(X_k,Y_l,F_m,G_n)) = E{E[(I(X_i,Y_j,F_m,G_n)I(X_i,Y_l,F_m,G_n))|X_i,Y_j,F_m,G_n]} = E{I(X_i,Y_j,F_m,G_n)E_Y_l[I(X_i,Y_l,F_m,G_n)|X_i,Y_j,F_m,G_n]} = E(I(X_i,Y_j,F_m,G_n)E_Y_lI(X_k,Y_l,F_m,G_n)), and E(I(X_i,Y_j,F_m,G_n)I(X_k,Y_l,F_m,G_n)) = E{E[(I(X_i,Y_j,F_m,G_n)I(X_i,Y_l,F_m,G_n))|X_i,F_m,G_n]} = E{E_Y_j[I(X_i,Y_j,F_m,G_n)|X_i,F_m,G_n]E_Y_l[I(X_i,Y_l,F_m,G_n)|X_i,F_m,G_n]} = E(E_Y_jI(X_i,Y_j,F_m,G_n)E_Y_lI(X_k,Y_l,F_m,G_n)) Thus, (<ref>) holds. Likewise, when i≠ k and j=l, (<ref>) also holds. 
Under the null hypothesis F = G, then under Assumption <ref>, it has E(I(x_1,y_1,F_m,G_n))^2=O(m^-1). Proof of Lemma <ref>. First, we have E(I(x_1,y_1,F_m,G_n))^2 = E[ I(D(x_1;F_m)≤ D(y_1;F_m))-I(D(x_1;G_n)≤ D(y_1;G_n)) ]^2 = E[P_x_1(D(x_1;F_m)≤ D(y_1;F_m), D(x_1;G_n)> D(y_1;G_n)) +P_x_1(D(x_1;F_m)> D(y_1;F_m), D(x_1;G_n)≤ D(y_1;G_n))] = E[P_x_1(D(x_1;F)≤ D(y_1;F)+D(y_1;F_m)-D(y_1;F)-D(x_1;F_m)+D(x_1;F), D(x_1;F)> D(y_1;F)+D(y_1;G_n)-D(y_1;F)-D(x_1;G_n)+D(x_1;F)) ] +E[P_x_1(D(x_1;F)>D(y_1;F)+D(y_1;F_m)-D(y_1;F)-D(x_1;F_m)+D(x_1;F), D(x_1;F)≤ D(y_1;F)+D(y_1;G_n)-D(y_1;F)-D(x_1;G_n)+D(x_1;F)) ] = T_1+T_2. By Lemma <ref> and Assumptions <ref> and <ref>, it has T_1 = E{F_D(x_1;F)(D(y_1;F))+f_D(x_1;F)(D(y_1;F))ρ_1(y_1;F_m,F) +1/2∂/∂(D(y_1;F))[f_D(x_1;F)(D(y_1;F))ρ_2(y_1;F_m,F)]+O_P(m^-3/2)} -E{F_D(x_1;F)(D(y_1;F))+f_D(x_1;F)(D(y_1;F))ρ_1(y_1;G_n,F) +1/2∂/∂(D(y_1;F))[f_D(x_1;F)(D(y_1;F))ρ_2(y_1;G_n,F)]+O_P(n^-3/2)} = E{f_D(x_1;F)(D(y_1;F))(ρ_1(y_1;F_m,F)-ρ_1(y_1;G_n,F)) +1/2∂/∂(D(y_1;F))[f_D(x_1;F)(D(y_1;F))(ρ_2(y_1;F_m,F)-ρ_2(y_1;G_n,F))]} +O(n^-3/2)+O(m^-3/2) = O(m^-1). The proof of T_2 is similar so it is omitted. Then, we obtain the result of Lemma <ref>. Under the null hypothesis F = G, then under Assumptions <ref>-<ref>, it has ∫∫ I(x,y,F_m,G_n)d(F_m-F)(x)d(G_n-G)(y)=O_P(m^-3/2). The proof of this Lemma is similar to Lemma A8 of <cit.>, but here G_n is used instead of F. The main difference lies in utilizing Cox and Reid's theorem up to the second-order expansion. The detailed proof is as follows: Proof of Lemma <ref>. By using integral discretization, it has ∫∫ I(x,y,F_m,G_n)d(F_m-F)(x)d(G_n-G)(y) = 1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n)-1/m∑_i=1^mE_yI(x_i,y,F_m,G_n) -1/n∑_j=1^nE_xI(x,y_j,F_m,G_n)+E_x,yI(x,y,F_m,G_n). Then, given conditions such as F_m and G_n, using the law of double expectation, we obtain E∫∫ I(x,y,F_m,G_n)d(F_m-F)(x)d(G_n-G)(y) = E[1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n)-1/m∑_i=1^mE_yI(x_i,y,F_m,G_n)] -E[1/n∑_j=1^nE_xI(x,y_j,F_m,G_n)-E_xyI(x,y,F_m,G_n)] = E{E[(1/nm∑_i=1^m∑_j=1^n(I(x_i,y_j,F_m,G_n)-E_yI(x_i,y,F_m,G_n)))|F_m,G_n]} -E{E[(1/n∑_j=1^n(E_xI(x,y_j,F_m,G_n)-E_x,yI(x,y,F_m,G_n)))|F_m,G_n]} = 0, and Var(∫∫ I(x,y,F_m,G_n)d(F_m-F)(x)d(G_n-G)(y)) = Cov(1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n), 1/mn∑_k=1^m∑_l=1^nI(x_k,y_l,F_m,G_n)) -2Cov(1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n), 1/m∑_k=1^mE_yI(x_k,y,F_m,G_n)) -2Cov(1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n),1/n∑_l=1^nE_xI(x,y_l,F_m,G_n) ) +Cov(1/m∑_i=1^mE_yI(x_i,y,F_m,G_n), 1/m∑_k=1^mE_yI(x_k,y,F_m,G_n)) +Cov(1/n∑_j=1^nE_xI(x,y_j,F_m,G_n), 1/n∑_l=1^nE_xI(x,y_l,F_m,G_n)) +Cov(E_x,yI(x,y,F_m,G_n), E_x,yI(x,y,F_m,G_n)) =: A_mn1-2A_mn2-2A_mn3+A_mn4+A_mn5+A_mn6. Firstly, we have A_mn1 = Cov(1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n), 1/mn∑_k=1^m∑_l=1^nI(x_k,y_l,F_m,G_n)) = 1/m^2n^2∑_i=k∑_j=lCov(I(x_i,y_j,F_m,G_n), I(x_k,y_l,F_m,G_n)) +1/m^2n^2∑_i=k∑_j≠ lCov(I(x_i,y_j,F_m,G_n),I(x_k,y_l,F_m,G_n)) +1/m^2n^2∑_i≠ k∑_j=lCov(I(x_i,y_j,F_m,G_n), I(x_k,y_l,F_m,G_n)) +1/m^2n^2∑_i≠ k∑_j≠ lCov(I(x_i,y_j,F_m,G_n), I(x_k,y_l,F_m,G_n)) = B_mn1+B_mn2+B_mn3+B_mn4, where B_mn1 = 1/m^2n^2∑_i=k∑_j=lCov(I(x_i,y_j,F_m,G_n), I(x_k,y_l,F_m,G_n)) = 1/mnCov(I(x_1,y_1,F_m,G_n), I(x_1,y_1,F_m,G_n)) = 1/mn[E(I(x_1,y_1,F_m,G_n))^2- (EI(x_1,y_1,F_m,G_n))^2]. 
By Assumption <ref> and Lemma <ref>, we have B_mn1=O_P(m^-2n^-1), and by (<ref>) in Lemma <ref> and Assumption <ref>, B_mn2 = 1/m^2n^2∑_i=k∑_j≠ lCov(I(x_i,y_j,F_m,G_n),I(x_k,y_l,F_m,G_n)) = 1/m^2n^2∑_i=k∑_j≠ l[E(I(x_i,y_j,F_m,G_n)I(x_k,y_l,F_m,G_n)) -EI(x_i,y_j,F_m,G_n)EI(x_k,y_l,F_m,G_n)] = 1/mn^2∑_j≠ l[E(E_y_jI(x_1,y_j,F_m,G_n)E_y_lI(x_1,y_l,F_m,G_n)) -EI(x_1,y_j,F_m,G_n)EI(x_1,y_l,F_m,G_n)] = n-1/mnE(E_y_1I(x_1,y_1,F_m,G_n)E_y_2I(x_1,y_2,F_m,G_n)). Similar to (<ref>), by Assumption <ref> and (<ref>) in lemma <ref>, we have B_mn3 = 1/m^2n^2∑_i≠ k∑_j=lCov(I(x_i,y_j,F_m,G_n), I(x_k,y_l,F_m,G_n)) = 1/m^2n^2∑_i≠ k∑_j=l[E(I(x_i,y_j,F_m,G_n)I(x_k,y_l,F_m,G_n)) -EI(x_i,y_j,F_m,G_n)EI(x_k,y_l,F_m,G_n)] = 1/m^2n∑_i≠ k[E(E_x_iI(x_i,y_1,F_m,G_n)E_x_kI(x_k,y_1,F_m,G_n)) -EI(x_i,y_1,F_m,G_n)EI(x_k,y_1,F_m,G_n)] = m-1/mnE(E_x_1I(x_1,y_1,F_m,G_n)E_x_2I(x_2,y_1,F_m,G_n)). By Assumption <ref> and (<ref>) of Lemma <ref>, B_mn4 = 1/m^2n^2∑_i≠ k∑_j≠ lCov(I(x_i,y_j,F_m,G_n), I(x_k,y_l,F_m,G_n)) = 1/m^2n^2∑_i≠ k∑_j≠ l[EI(x_i,y_j,F_m,G_n)I(x_k,y_l,F_m,G_n) -EI(x_i,y_j,F_m,G_n)EI(x_k,y_l,F_m,G_n)]=0. Secondly, there is A_mn2 = Cov(1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n), 1/m∑_k=1^mE_yI(x_k,y,F_m,G_n)) = 1/m^2∑_i=kCov(I(x_i,y_1,F_m,G_n), E_yI(x_k,y,F_m,G_n)) +1/m^2∑_i≠ kCov(I(x_i,y_1,F_m,G_n), E_yI(x_k,y,F_m,G_n)) = C_mn1+C_mn2, where C_mn1 = 1/m^2∑_i=kCov(I(x_i,y_1,F_m,G_n), E_yI(x_k,y,F_m,G_n)) = 1/m^2∑_i=k[E(I(x_i,y_1,F_m,G_n)E_yI(x_k,y,F_m,G_n)) -EI(x_i,y_1,F_m,G_n)E(E_yI(x_k,y,F_m,G_n))] = 1/m[E(E_y_1I(x_1,y_1,F_m,G_n)E_yI(x_1,y,F_m,G_n)) ( by (<ref>)  in Lemma <ref>) = n/n-1B_mn2. Under H_0:F = G, by Lemma <ref>, Assumption <ref> and <ref>, one has E(E_y_1I(x_1,y_1,F_m,G_n)E_yI(x_1,y,F_m,G_n)) = E{[ f_D(y_1;F)(D(x_1;F))(ρ_1(x_1;G_n,F)-ρ_1(x_1;F_m,F)) +1/2∂/∂(D(x_1;F))[f_D(y_1;F)(D(x_1;F))(ρ_2(x_1;G_n,F)-ρ_2(x_1;F_m,F))] +O_P(n^-3/2)+O_P(m^-3/2)]× [f_D(y;F)(D(x_1;F))(ρ_1(x_1;G_n,F)-ρ_1(x_1;F_m,F)) +1/2∂/∂(D(x_1;F))[f_D(y;F)(D(x_1;F))(ρ_2(x_1;G_n,F)-ρ_2(x_1;F_m,F))] +O_P(n^-3/2)+O_P(m^-3/2)]} = E{f_D(y_1;F)(D(x_1;F))(ρ_1(x_1;G_n,F)-ρ_1(x_1;F_m,F)) +∂/∂(D(x_1;F))[f_D(y_1;F)(D(x_1;F))(ρ_2(x_1;G_n,F)-ρ_2(x_1;F_m,F))] +O_P(n^-3/2)+O_P(m^-3/2)}^2 = O(m^-1), and combining with Assumption <ref>, we have E(E_y_1I(x_1,y_1,F_m,G_n)E_yI(x_1,y,F_m,G_n)) -EI(x_1,y_1,F_m,G_n)EI(x_1,y,F_m,G_n)=O(m^-1). Thus, by (<ref>), (<ref>) and (<ref>), it has B_mn2-C_mn1=O(m^-3). By Assumption <ref> and (<ref>) in Lemma <ref>, C_mn2 = 1/m^2∑_i≠ kCov(I(x_i,y_1,F_m,G_n), E_yI(x_k,y,F_m,G_n)) = 1/m^2∑_i≠ k[E(I(x_i,y_1,F_m,G_n)E_yI(x_k,y,F_m,G_n)) -EI(x_i,y_1,F_m,G_n)E(E_yI(x_k,y,F_m,G_n))] = 0. For A_mn3, A_mn3 = Cov(1/mn∑_i=1^m∑_j=1^nI(x_i,y_j,F_m,G_n),1/n∑_l=1^nE_xI(x,y_l,F_m,G_n) ) = 1/n^2∑_j=lCov(I(x_1,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) +1/n^2∑_j≠ lCov(I(x_1,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) = D_mn1+D_mn2, where D_mn1 = 1/n^2∑_j=lCov(I(x_1,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) = 1/n^2∑_j=l[E(I(x_1,y_j,F_m,G_n)E_xI(x,y_l,F_m,G_n)) -EI(x_1,y_j,F_m,G_n)E(E_xI(x,y_l,F_m,G_n))] = 1/n[E(E_x_1I(x_1,y_1,F_m,G_n)E_xI(x,y_1,F_m,G_n)) -EI(x_1,y_j,F_m,G_n)EI(x,y_j,F_m,G_n)] ( by (<ref>)  in Lemma <ref>) = m/m-1B_mn3. 
By Lemma <ref>, Assumption <ref> and <ref>, we have E(E_x_1I(x_1,y_1,F_m,G_n)E_x_2I(x_2,y_2,F_m,G_n)) = E{[f_D(x_1;F)(D(y_1;F))(ρ_1(y_1;F_m,F)-ρ_1(y_1;G_n,F)) +1/2∂/∂(D(y_1;F))[f_D(x_1;F)(D(y_1;F))(ρ_2(y_1;F_m,F)-ρ_2(y_1;G_n,F))] +O(n^-3/2)+O(m^-3/2)]× [f_D(x_2;F)(D(y_2;F))(ρ_1(y_2;F_m,F)-ρ_1(y_2;G_n,F)) +1/2∂/∂(D(y_2;F))[f_D(x_2;F)(D(y_2;F))(ρ_2(y_2;F_m,F)-ρ_2(y_2;G_n,F))] +O(n^-3/2)+O(m^-3/2)]} = E{f_D(x_1;F)(D(y_1;F))(ρ_1(y_1;F_m,F)-ρ_1(y_1;G_n,F)) +1/2∂/∂(D(y_1;F))[f_D(x_1;F)(D(y_1;F))(ρ_2(y_1;F_m,F)-ρ_2(y_1;G_n,F))] +O(n^-3/2)+O(m^-3/2)}^2 = O(m^-1). and combing this with Assumption 2, we have E(E_x_1I(x_1,y_1,F_m,G_n)E_x_2I(x_2,y_1,F_m,G_n)) -EI(x_1,y_1,F_m,G_n)EI(x_2,y_1,F_m,G_n)=O(m^-1). Thus, by (<ref>), (<ref>) and (<ref>) , it has B_mn3-D_mn1=O(m^-3). and by Assumption <ref> and (<ref>) in Lemma <ref>, D_mn2 = 1/n^2∑_j≠ lCov(I(x_1,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) = 1/n^2∑_j≠ l[E(I(x_1,y_j,F_m,G_n)E_xI(x,y_l,F_m,G_n)) -EI(x_1,y_j,F_m,G_n)E(E_xI(x,y_l,F_m,G_n))] = 0. (by Lemma <ref>) For A_mn4, A_mn4 = Cov(1/m∑_i=1^mE_yI(x_i,y,F_m,G_n), 1/m∑_k=1^mE_yI(x_k,y,F_m,G_n)) = 1/m^2∑_i=kCov(E_yI(x_i,y,F_m,G_n), E_yI(x_k,y,F_m,G_n)) +1/m^2∑_i≠ kCov(E_yI(x_i,y,F_m,G_n), E_yI(x_k,y,F_m,G_n)) = E_mn1+E_mn2, where E_mn1 = 1/m^2∑_i=kCov(E_yI(x_i,y,F_m,G_n), E_yI(x_k,y,F_m,G_n)) = 1/m^2∑_i=k[E(E_yI(x_i,y,F_m,G_n)E_yI(x_k,y,F_m,G_n)) -E(E_yI(x_i,y,F_m,G_n))E(E_yI(x_k,y,F_m,G_n))] = 1/mE(E_yI(x_1,y,F_m,G_n)E_yI(x_1,y,F_m,G_n)) ( by Assumption <ref>) = C_mn1, and by (<ref>) in Lemma <ref> and Assumption <ref>, E_mn2 = 1/m^2∑_i≠ kCov(E_yI(x_i,y,F_m,G_n), E_yI(x_k,y,F_m,G_n)) = 1/m^2∑_i≠ k[E(E_yI(x_i,y,F_m,G_n)E_yI(x_k,y,F_m,G_n)) -E(E_yI(x_i,y,F_m,G_n))E(E_yI(x_k,y,F_m,G_n))] = 0. For A_mn5, A_mn5 = Cov(1/n∑_j=1^nE_xI(x,y_j,F_m,G_n), 1/n∑_l=1^nE_xI(x,y_l,F_m,G_n)) = 1/n^2∑_j=lCov(E_xI(x,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) +1/n^2∑_j≠ lCov(E_xI(x,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) = F_mn1+F_mn2. Similar to (<ref>), by Assumption <ref>, it has F_mn1 = 1/n^2∑_j=lCov(E_xI(x,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) = 1/n^2∑_j=l[E(E_xI(x,y_j,F_m,G_n)E_xI(x,y_l,F_m,G_n)) -E(E_xI(x,y_j,F_m,G_n))E(E_xI(x,y_l,F_m,G_n))] = 1/nE(E_xI(x,y_1,F_m,G_n)E_xI(x,y_1,F_m,G_n)) = D_mn1, and by (<ref>) in Lemma <ref> and Assumption <ref>, F_mn2 = 1/n^2∑_j≠ lCov(E_xI(x,y_j,F_m,G_n), E_xI(x,y_l,F_m,G_n)) = 1/n^2∑_j≠ l[E(E_xI(x,y_j,F_m,G_n)E_xI(x,y_l,F_m,G_n)) -E(E_xI(x,y_j,F_m,G_n))E(E_xI(x,y_l,F_m,G_n))] = 0. For A_mn6, by Assumption 2, we have A_mn6 = Cov(E_x,yI(x,y,F_m,G_n), E_x,yI(x,y,F_m,G_n)) = E(E_x,yI(x,y,F_m,G_n))^2- (E(E_x,yI(x,y,F_m,G_n)))^2=0. Then, combining this with (<ref>)-(<ref>), one has Var(∫∫ I(x,y,F_m,G_n)d(F_m-F)(x)d(G_n-G)(y)) =: A_mn1-2A_mn2-2A_mn3+A_mn4+A_mn5+A_mn6 = B_mn1+B_mn2+B_mn3+B_mn4-2C_mn1-2C_mn2-2D_mn1-2D_mn2 +E_mn1+E_mn2+F_mn1+F_mn2+A_mn6 = B_mn1+(B_mn2-C_mn1)+(B_mn3-D_mn1)+A_mn6 = O(m^-3). So, according to (<ref>) and (<ref>), by applying Markov’s inequality, we obtain the result of (<ref>). <cit.> Under Assumption <ref>, one has ∫∫ I(x,y,F_m,F)dF(x)dG(y)=o(Δ_m). Lemma <ref> is derived based on Assumption <ref>. Specific details can be found in Lemma A5 of <cit.>. The proof is omitted. Under the null hypothesis F = G, if Assumptions <ref>-<ref> and m/n→ c hold, it has Q(F_m,G_n)-Q(F,G) = 1/n∑_j=1^n[F_D(x;F)(D(y_j;F))-1/2]+1/m∑_i=1^m[1/2-F_D(y;F)(D(x_i;F))] +o(Δ_m)+O_P(m^-1), and Q(G_n,F_m)-Q(F,G) = 1/n∑_j=1^n[1/2-F_D(x;F)(D(y_j;F))]+1/m∑_i=1^m[F_D(y;F)(D(x_i;F))-1/2] +o(Δ_m)+O_P(m^-1), where Q(F,G)=1/2. Proof of Lemma <ref>. The proof is analogous to the proof of Theorem 2 in <cit.>. 
So the proof is skipped here. § PROOF OF THEOREM 1 Step 1: By Hoeffding decomposition Q(F_m,G_n)+Q(G_n,F_m)-1 = ∫∫ I(D(x;F_m)≤ D(y;F_m))dF_m(x)dG_n(y) +∫∫ I(D(y;G_n)≤ D(x;G_n))dF_m(x)dG_n(y)-1 = ∫∫ I(D(x;F_m)≤ D(y;F_m))dF_m(x)dG_n(y) -∫∫ I(D(x;G_n)≤ D(y;G_n))dF_m(x)dG_n(y) = ∫∫ I(x,y,F_m,G_n)dF_m(x)dG_n(y) = ∫∫ I(x,y,F_m,G_n)dF(x)dG_n(y)+∫∫ I(x,y,F_m,G_n)dF_m(x)dG(y)+R_mn(by Hoeffding decomposition) = M_mn1+M_mn2+R_mn, where R_mn = ∫∫ I(x,y,F_m,G_n)dF_m(x)dG_n(y)-∫∫ I(x,y,F_m,G_n)dF(x)dG_n(y) -∫∫ I(x,y,F_m,G_n)dF_m(x)dG(y). Step 2: For the main terms M_mn1 and M_mn2 For i=1,2,…, m, by Lemma <ref>, E_y[I(D(x_i;F_m)≤ D(y;F_m))] = P_y(D(x_i;F)≤ D(y;F)+D(y;F_m)-D(y;F)-D(x_i;F_m)+D(x_i;F)) = 1-F_D(y;F)(D(x_i;F))-f_D(y;F)(D(x_i;F))ρ_1(x_i;F_m,F) -1/2∂/∂(D(x_i;F))[f_D(y;F)(D(x_i;F))ρ_2(x_i;F_m,F)]+O_P(m^-3/2). Meanwhile, it has E_y[I(D(x_i;G_n)≤ D(y;G_n))] = P_y(D(x_i;F)≤ D(y;F)+D(y;G_n)-D(y;F)-D(x_i;G_n)+D(x_i;F)) = 1-F_D(y;F)(D(x_i;F))-f_D(y;F)(D(x_i;F))ρ_1(x_i;G_n,F) -1/2∂/∂(D(x_i;F))[f_D(y;F)(D(x_i;F))ρ_2(x_i;G_n,F)]+O_P(n^-3/2). By (<ref>) and (<ref>), we obtain M_mn1 = ∫∫ I(x,y,F_m,G_n)dF_m(x)dG(y) = 1/m∑_i=1^mE_yI(x_i,y,F_m,G_n) = -1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F)) -1/2m∑_i=1^m∂/∂(D(x_i;F))[f_D(y;F)(D(x_i;F))(ρ_2(x_i;F_m,F)-ρ_2(x_i;G_n,F)) +O_P(n^-3/2)+O_P(m^-3/2). Similarly, for j=1,2…,n, we have E_x[I(D(x;F_m)≤ D(y_i;F_m))] = P_x(D(x;F)≤ D(y_i;F)+D(y_i;F_m)-D(y_i;F)-D(x;F_m)+D(x;F)) = F_D(x;F)(D(y_i;F))+f_D(x;F)(D(y_i;F))ρ_1(y_j;F_m,F) +1/2∂/∂(D(y_i;F))[f_D(x;F)(D(y_i;F))ρ_2(y_i;F_m,F)]+O_P(m^-3/2), and E_x[I(D(x;G_n)≤ D(y_j;G_n))] = P_x(D(x;F)≤ D(y_i;F)+D(y_j;G_n)-D(y_j;F)-D(x;G_n)+D(x;F)) = F_D(x;F)(D(y_j;F))+f_D(x;F)(D(y_j;F))ρ_1(y_j;F_m,F) +1/2∂/∂(D(y_i;F))[f_D(x;F)(D(y_i;F))ρ_2(y_i;G_n,F)]+O_P(n^-3/2), By (<ref>) and (<ref>), one has M_mn2 = ∫∫ I(x,y,F_m,G_n)dF(x)dG_n(y) = 1/n∑_j=1^nE_xI(x,y_j,F_m,G_n) = 1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F)) +1/2n∑_j=1^n∂/∂(D(y_j;F))[f_D(x;F)(D(y_j;F))(ρ_2(y_j;F_m,F)-ρ_2(y_j;G_n,F))] +O_P(n^-3/2)+O_P(m^-3/2). Step 3: For the residual term R_mn By (<ref>), we have R_mn = ∫∫ I(x,y,F_m,G_n)d(F_m-F)(x)d(G_n-G)(y) -∫∫ I(x,y,F_m,G_n)dF(x)dG(y). Therefore, combining Lemma <ref> with Lemma <ref>, we get R_mn=o(Δ_m)+O_P(m^-3/2). By (<ref>) and (<ref>) in the Assumption 1, (<ref>), (<ref>), and (<ref>), there is S_m, n = -m n/m+n[Q(F_m, G_n)+Q(G_n, F_m)-1] = -m n/m+n{1/m∑_i=1^mE_yI(x_i,y,F_m,G_n)+1/n∑_j=1^nE_xI(x,y_j,F_m,G_n)+R_mn} = -m n/m+n{-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F))] +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))] +O_P(m^-3/2)+O_P(n^-3/2)+R_mn} = -m n/m+n{-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F)) +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))} +o(mn/m+nΔ_m)+O_P(m^-1/2). Thus, Proof of (<ref>) is complete. 
By (<ref>) and Lemma <ref>, it has P_m, n = -m n/m+n[Q(F_m, G_n)× Q(G_n, F_m)-1/4] = -m n/m+n[(Q(F_m, G_n)-1/2+1/2)×(Q(G_n, F_m)-1/2+1/2)-1/4] = -m n/m+n[(Q(F_m, G_n)-1/2)×(Q(G_n, F_m)-1/2) +1/2(Q(F_m, G_n)+Q(G_n, F_m)-1)] = -m n/m+n{[1/n∑_j=1^n(F_D(x;F)(D(y_j;F))-1/2)+1/m∑_i=1^m(1/2-F_D(y;F)(D(x_i;F))) o(Δ_m)+O_P(m^-1)]×[1/n∑_j=1^n(1/2-F_D(x;F)(D(y_j;F))+1/m∑_i=1^m(F_D(y;F)(D(x_i;F))-1/2) +o(Δ_m)+O_P(m^-1)]}-mn/2(m+n){-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F))] +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))+o(Δ_m)+O_P(m^-3/2)} = -m n/m+n{[1/n∑_j=1^n(F_D(x;F)(D(y_j;F))-1/2)+1/m∑_i=1^m(1/2-F_D(y;F)(D(x_i;F)))]× [1/n∑_j=1^n(1/2-F_D(x;F)(D(y_j;F)))+1/m∑_i=1^m(F_D(y;F)(D(x_i;F))-1/2)]} -mn/2(m+n){-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F)) +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))} +o(m n/m+nΔ_m)+O_P(m^-1/2) = m n/m+n{[1/n∑_j=1^n(F_D(x;F)(D(y_j;F))-1/2)+1/m∑_i=1^m(1/2-F_D(y;F)(D(x_i;F)))]^2} -mn/2(m+n){-1/m∑_i=1^mf_D(y;F)(D(x_i;F))(ρ_1(x_i;F_m,F)-ρ_1(x_i;G_n,F)) +1/n∑_j=1^nf_D(x;F)(D(y_j;F))(ρ_1(y_j;F_m,F)-ρ_1(y_j;G_n,F))} +o(m n/m+nΔ_m)+O_P(m^-1/2). Then, one can immediately obtain (<ref>). The proof of Theorem <ref> is complete. § PROOF OF REMARK 3 According to Theorem 1, for one-dimensional Euclidean depth, we can use Euclidean distance to replace the depth functions. Therefore, S_m, n = -m n/m+n{1/m∑_i=1^mf_d(y;F)(d(x_i;F))( E_y[d(x_i;F_m)-d(y_j;F_m)-d(x_i;G_n)+d(y_j;G_n)|d(x_i;F)=d(y_j;F)] ) -1/n∑_j=1^nf_d(x;F)(d(y_j;F))( E_x[d(y_j;F_m) -d(x_i;F_m)-d(y_j;G_n)+d(x_i;G_n)|d(x_i;F)=d(y_j;F)] )} +o(m n/m+nΔ_m)+O_P(m^-1/2), where f_d(x;F)(·) is the density of Euclidean distance d(x; F). Then we have d(x_i;F_m)=(x_i-x̅)^2, d(y_j;F_m)=(y_j-x̅)^2, d(x_i;G_n)=(x_i-y̅)^2, d(y_j;G_n)=(y_j-y̅)^2, d(x_i;F)=x_i^2, d(y_j;F)=y_j^2. Then, f_d(y;F)(d(x_i;F))=f(x_i^2) and f_d(x;F)(d(y_j;F))=f(y_j^2), where f is density of χ^2_1. Therefore, S_m, n = -m n/m+n{1/m∑_i=1^m f(x_i^2) ( E_y[(x_i-x̅)^2-(y_j-x̅)^2-(x_i-y̅)^2+(y_j-y̅)^2|x_i^2=y_j^2] ) -1/n∑_j=1^n f(y_j^2) ( E_x[(y_j-x̅)^2 -(x_i-x̅)^2-(y_j-y̅)^2+(x_i-y̅)^2|x_i^2=y_j^2 ] )} +o(m n/m+nΔ_m)+O_P(m^-1/2) = -m n/m+n{1/m∑_i=1^m f(x_i^2) ( -2x_i(x̅-y̅) ) -1/n∑_j=1^n f(y_j^2) ( 2y_j(y̅-x̅) )} +o(m n/m+nΔ_m)+O_P(m^-1/2) = -√(mn/m+n)(y̅-x̅) √(mn/m+n){1/m∑_i=1^m f(x_i^2) 2x_i - 1/n∑_j=1^n f(y_j^2) 2y_j } +o(m n/m+nΔ_m)+O_P(m^-1/2) Since √(mn/m+n)(y̅-x̅) →𝒩(0,1) and √(mn/m+n) [ 1/m∑_i=1^m f(x_i^2) 2x_i + 1/n∑_j=1^n -f(y_j^2) 2y_j ] →𝒩(0,2/√(3)π), then S_m,n→ -Z_1 Z_2, where Z_1 ∼𝒩(0,1) and Z_2 ∼𝒩(0,2/√(3)π), with Cov(Z_1, Z_2)=-1/π. In a similar way, we can obtain the asymptotic distribution of P_m,n under one-dimensional Euclidean depth. Thus, P_m, n = -m n/m+n{[-1/n∑_j=1^n(F_d(x;F)(d(y_j;F))-1/2) -1/m∑_i=1^m(1/2-F_d(y;F)(d(x_i;F)))] ×[1/n∑_j=1^n(F_d(x;F)(d(y_j;F))-1/2) +1/m∑_i=1^m(1/2-F_d(y;F)(d(x_i;F)))]} -mn/2(m+n){1/m∑_i=1^mf_d(y;F)(d(x_i;F))( E_y[d(x_i;F_m)-d(y_j;F_m)-d(x_i;G_n)+d(y_j;G_n)|d(x_i;F)=d(y_j;F)] ) -1/n∑_j=1^nf_d(x;F)(d(y_j;F))(E_x[d(y_j;F_m) -d(x_i;F_m)-d(y_j;G_n)+d(x_i;G_n)|d(x_i;F)=d(y_j;F)] )} +o(m n/m+nΔ_m)+O_P(m^-1/2), where F_D(x;F)(·) and f_D(x;F)(·) are the distribution function and density of D(x; F), respectively. 
Under one-dimensional Euclidean depth, F_d(y;F) (d(x_i;F))=F(x_i^2) and F_d(x;F)(d(y_j;F))=F(y_j^2), where F is CDF of χ^2_1, we have P_m, n = -m n/m+n{[-1/n∑_j=1^n( F(y_j^2)-1/2) -1/m∑_i=1^m(1/2-F(x_i^2) )] ×[1/n∑_j=1^n(F(y_j^2)-1/2) +1/m∑_i=1^m(1/2-F(x_i^2) )]} +1/2{ -√(mn/m+n)(y̅-x̅) √(mn/m+n){1/m∑_i=1^m f(x_i^2) 2x_i - 1/n∑_j=1^n f(y_j^2) 2y_j }} +o(m n/m+nΔ_m)+O_P(m^-1/2) = m n/m+n[1/n∑_j=1^n( F(y_j^2)-1/2) +1/m∑_i=1^m(1/2-F(x_i^2) )]^2 +1/2{ -√(mn/m+n)(y̅-x̅) √(mn/m+n){1/m∑_i=1^m f(x_i^2) 2x_i - 1/n∑_j=1^n f(y_j^2) 2y_j }} +o(m n/m+nΔ_m)+O_P(m^-1/2) Since √(mn/m+n) [1/m∑_i=1^m 1/2-F(x_i^2) +1/n∑_j=1^n F(y_j^2)-1/2 ] →𝒩(0,1/12), √(mn/m+n)(y̅-x̅) →𝒩(0,1) and √(mn/m+n) [ 1/m∑_i=1^m f(x_i^2) 2x_i + 1/n∑_j=1^n -f(y_j^2) 2y_j ] →𝒩(0,2/√(3)π), then P_m,n→ Z_3^2 -1/2 Z_1 Z_2, where Z_1 ∼𝒩(0,1), Z_2 ∼𝒩(0,2/√(3)π), and Z_3 ∼𝒩(0,1/12), with Cov(Z_1, Z_2)=-1/π. Z_3 is independent of Z_1 and Z_2. § EXTENSIONS OF REMARK 3 According to the proof of Remark <ref>, the approximated probability density functions of S_m,n and P_m,n can be denoted as f_S(x) and f_P(x), respectively: f_S(x) =1/π√(2π-√(3)/√(3)π^2 ) e^√(3) x/2-√(3)/π∫_0^∞1/z_1 e^-1/2-√(3)/π (z_1^2+√(3)π x^2/2 z^2_1 ) dz_1, f_P(x) =∫_-∞^∞∫_-∞^∞ 12 f_χ^2(12x+6z_1z_2) 1/2 π√(2π-√(3)/√(3)π^2 ) e^-1/2-√(3)/π (z_1^2 +√(3) z_1 z_2 +√(3)π/2 z_2^2) dz_1 dz_2, where f_χ^2(12x+6z_1z_2) is the probability density of a χ_1^2 at 12x+6z_1z_2. Figure <ref> compared f_S(x) and f_P(x) with simulated density for m=n=1000 and 10,000 repetitions, illustrating that both densities are right-skewed with sharp peaks. To further demonstrate the rate of convergence for distributions of S_m,n and P_m,n, we conduct a simulation study. We generate samples where m=n=100,⋯,1000 from F=G=𝒩(0,1), each with a mean of 0 and a variance of 1. This simulation is repeated 10,000 times to compute the empirical α quantiles at levels α=0.2, 0.1, 0.05 and 0.01. The empirical quantiles are then compared with the theoretical quantiles by assuming m,n→∞, based on the asymptotic distributions of S_m,n and P_m,n specified in (<ref>) and (<ref>). This comparison is intended to quantitatively assess how well the finite sample distributions of S_m,n and P_m,n match their respective asymptotic behaviors. As shown in Tables <ref> and <ref>, there is significant agreement between empirical and theoretical quantiles at different α levels. For all evaluated values, the empirical quantiles are very close to the theoretical quantiles, except for α=0.01 which requires a larger sample size. This observation holds true even with relatively small sample sizes, thus demonstrating the fast convergence rate of the asymptotic distributions of S_m,n and P_m,n in approximating the behavior of their finite sample counterparts. § PROOF OF THEOREM 2 As each block from the same distribution are the same, we can express ||θ(F)-θ(G)|| as ||∑_i=1^b_1θ(F_i)/b_1-∑_j=1^b_2θ(G_b_1+j)/b_2||, where F_i and G_b_1+j are distributions of i-th block and b_1+j-th block in combined samples x_1, …, x_m, y_1, …, y_n, respectively. For the permutation (π(1), …, π(N)), we have ||∑_i=1^b_1θ(F_π(i))/b_1-∑_j=1^b_2θ(G_π(b_1+j))/b_2||, which can be expressed as ||(b_1-c)θ(F)+cθ(G)/b_1-(b_2-c)θ(G)+cθ(F)/b_2||=||θ(F)-θ(G)|||1-c/b_1-c/b_2|, where 0≤ c≤min(b_1,b_2). We note that ||∑_i=1^b_1θ(F_i)/b_1-∑_j=1^b_2θ(G_b_1+j)/b_2||=∑_i=1^b_1θ(F_π(i))/b_1-∑_j=1^b_2θ(G_π(b_1+j))/b_2|| with probability less than 2 m! n!/(m+n)!, i.e., the permutations occur only within each sample with probability m! n!/(m+n)! 
for c=0 or two samples are exchanged when m=n with probability m! m!/(2m)! for c=b_1=b_2. Except these case, for 0< c<min(b_1,b_2). we have ||∑_i=1^b_1θ(F_i)/b_1-∑_j=1^b_2θ(G_b_1+j)/b_2||>||∑_i=1^b_1θ(F_π(i))/b_1-∑_j=1^b_2θ(G_π(b_1+j))/b_2||. That's because -1<1-min(b_1,b_2)-1/b_1-min(b_1,b_2)-1/b_2≤ 1-c/b_1-c/b_2≤ 1-1/b_1-1/b_2<1. As s→∞, under Assumption A, the permuted new S_m,n and P_m,n are less than their original ones in probability. In addition, the p-value_S and p-value_P converge to the probabilities of permuted new S_m,n and P_m,n greater than their original ones as 𝒞→∞, which is zero and less than α. So, Theorem <ref> follows. 99 natexlab#1#1 [Anderson(1962)]Anderson1962 Anderson, T. On the Distribution of the Two-Sample Cramér-von Mises Criterion. The Annals of Mathematical Statistics 1962, 33(3), 1148–1159. [Auner et al.(2018)]Auner2018 Auner, G. W., Koya, S.K., Huang, H., Broadbent, B., Trexler, M., Auner, Z., Elias, A., Mehne, K.C. and Brusatori, M.A. Applications of Raman spectroscopy in cancer diagnosis. Cancer and Metastasis Reviews 2018, 37(4), 691–717 [Barale and Shirke(2021)]Barale Shirke Barale, M. and Shirke, D. A test based on data depth for testing location-scale of the two multivariate populations. Journal of statistical Computation and Simulation 2021, 91(4), 768–785. [Berenguer et al.(2023)]Berenguer et al. (2023) Berenguer, D., Pereira. F., Câmara, J., and Pereira, J. Underlying Features of Prostate Cancer—Statistics, Risk Factors, and Emerging Methods for Its Diagnosis. Current Oncology 2023 2300–2321.doi:10.3390/curroncol30020178 [Brewer(2023)]Brewer 2023 Brewer, J. Automated processing of prostate-based Raman spectra [MSc thesis]. University of British Columbia 2023 [Brown(1958)]Brown58 Brown,M., B. Statistical use of spatial median. Journal of the Royal Statistical Society 1958, 53, 448–456. [Chen et al.(2023)]Chen23 Chen, Y., Lin, W. and Shi, X. Multivariate two-sample test statistics based on data depth. 2023. arXiv:2306.04818. [Chenouri and Small(2012)]Small2011 Chenouri, S. and Small, C. G. A nonparametric multivariate multisample test based on data depth. Electronic Journal of Statistics 2012, 6, 760–782. [Chung and Romano(2016)]Chung 2016 Chung, E., Romano, J. Asymptotically valid and exact permutation tests based on two-sample U-statistics.Journal of Statistical Planning and Inference 2016 168, 97-105 [Corsetti et al.(2018)]Corsetti2018 Corsetti, Stella., Rabl, Thomas., McGloin, David., and Nabi, Ghulam. Raman spectroscopy for accurately characterizing biomolecular changes in androgen-independent prostate cancer cells. Journal of Biophotonics 2018, 11(3) [Cox and Reid(1987)]Cox Cox, D., and Reid, N. Approximations to Noncentral Distributions. The Canadian Journal of Statistics / La Revue Canadienne de Statistique 1987 15, 2: 105–14. https://doi.org/10.2307/3315199. [Craig(1936)]Craig Craig, C. On the Frequency of the function xy. The Annals of Mathematical Statistics 1936 7(1), 1–15 [Devpura et al.(2014)]Devpura2014 Devpura, S., Barton, K.N., Brown, S. L., Palyvoda, O., Kalkanis, S., Naik, V. N., Siddiqui, F., Naik, R. and Chetty, I. J. Vision 20/20: The role of Raman spectroscopy in early stage cancer detection and feasibility for application in radiation therapy response assessment: Raman spectroscopy for cancer detection/radiation therapy response assessment. Medical Physics 2014, 41(5), 050901 [Fisher(1936)]Fisher1936 Fisher, R. A. Design of experiments. British Medical Journal 1936, 1(3923), 554 [Fuentes et al.(2023)]Fuentes2023 Fuentes, A. 
M., Narayan, A., Milligan, K., Lum, J. J., Brolo, A. G., Andrews, J. L. and Jirasek, A. Raman spectroscopy and convolutional neural networks for monitoring biochemical radiation response in breast tumour xenografts. Scientific Reports 2023, 13(1), 1530 [Gao et al. (2024)]Gao24 Gao, M., Chen, Y., Shi, X., and Yang, W. Q statistics in data depth: fundamental theory revisited and applications. 2024 arXiv:2407.09678 [Gnettner et al.(2024)]Gnettner2024 Gnettner, F., Kirch, C., and Nieto-Reyes, A. Symmetrisation of a class of two-sample tests by mutually considering depth ranks including functional spaces. Electronic Journal of Statistics. 2024, 18(2), 3021-3106 [Gower(1974)]Gower74 Gower, C., J. Algorithm as 78: The mediancentre. App.Statist. 1974, 23, 466–470. [Jiang et al.(2008)]Jiang2008 Jiang, S. B., Wolfgang, J. and Mageras, G. S. Quality Assurance Challenges for Motion-Adaptive Radiation Therapy: Gating, Breath Holding, and Four-Dimensional Computed Tomography. International Journal of Radiation Oncology Biology Physics 2008, 71(1), 103–107 [Kim et al.(2020)]kim2020 Kim, I., Balakrishnan, S., and Wasserman, L. Robust multivariate nonparametric tests via projection averaging. The Annals of Mathematical Statistics 2020, 48(6), 3417 - 3441. [Kosiorowski and Zawadzki(2017)]Kosiorowski and Zawadzki 2017 Kosiorowski, D. and Zawadzki, Z. DepthProc: An R Package for Robust Exploration of Multidimensional Economic Phenomena. 2017 arXiv:1408.4542 [Lee(1990)]Ustat Lee, A. J. U-Statistics: Theory and Practice. Statistics: Textbooks and Monographs. 1990 Dekker, Inc., New York. [Liu(1992)]Liu92 Liu, R. Y. Data depth and multivariate rank tests. In L_1-Statistics and Related Methods (Y. Dodge, ed.) 1992, 279-294. [Liu and Singh(1993)]liu1993quality Liu, R. Y. and Singh, K. A quality index based on data depth and multivariate rank tests. Journal of the American Statistical Association 1993, 88(421), 252-260. [Liu et al.(1999)]Liu99 Liu, R. Y., Jesse, M. P. and Kesar, S. Multivariate analysis by data depth: Descriptive statistics, graphics and inference. The Annals of Statistics 1999, 783-858. [Liu et al.(2022)]Liu2022 Liu, J., Ma, S., Xu, W., and Zhu, L. A generalized Wilcoxon–Mann–Whitney type test for multivariate data through pairwise distance. Journal of Multivariate Analysis 2022 190 104946 [Milligan et al.(2021)]Milligan2021 Milligan, K., Deng, X., Shreeves, P., Ali-Adeeb, R., Matthews, Q., Brolo, A., Lum, J. J., Andrews, J. L. and Jirasek, A. Raman spectroscopy and group and basis-restricted non negative matrix factorisation identifies radiation induced metabolic changes in human cancer cells. Scientific Reports 2021, 11(1), 3853 [Milligan et al.(2022)]Milligan2022 Milligan, K., Van Nest, S. J., Deng, X., Ali-Adeeb, R., Shreeves, P., Punch, S., Costie, N., Pavey, N., Crook, J. M., Berman, D. M., Brolo, A. G., Lum, J. J., Andrews, J. L. and Jirasek, A. Raman spectroscopy and supervised learning as a potential tool to identify high-dose-rate-brachytherapy induced biochemical profiles of prostate cancer. Journal of Biophotonics 2022, 15(11) [Movasaghi et al.(2007)]Movasaghi 2007 Movasaghi, Z., Rehman, S., and Rehman, I. U. Raman Spectroscopy of Biological Tissues. Applied Spectroscopy Reviews 2007 42(5) 493-541. [Picot et al.(2022)]Picot2022 Picot, F., Shams, R., Dallaire, F., Sheehy, G., Trang, T., Grajales D., Birlea, M., Trudel, D., Menard, C., Kadoury, S. and Leblond, F. Image-Guided Raman Spectroscopy Navigation System to Improve Transperineal Prostate Cancer Detection. 
Part 1: Raman Spectroscopy Fiber-Optics System and in Situ Tissue Characterization. Journal of Biomedical Optics, 2022, 27(9), 095003 [Shi et al.(2023)]Shi2023 Shi, X., Zhang, Y., and Fu, Y. Two-sample tests based on data depth. Entropy 2023, 25(2), 238. [Sigle et al.(2023)]Sigle2023 Sigle, M., Rohlfing, A., Kenny, M., Scheuermann, S., Sun, N., Graeßner, U., Haug, V., Sudmann, J., Seitz, C. M., Heinzmann, D., Schenke-Layland, K., Maguire, P. B., Walch, A., Marzi, J. and Gawaz, M. P. Translating Genomic Tools to Raman Spectroscopy Analysis Enables High-Dimensional Tissue Characterization on Molecular Resolution. Nature Communications, 2023, 14(1), 5799 [Székely and Rizzo(2013)]szekely2013 Székely , G. J. and Rizzo, M. L. Energy statistics: A class of statistics based on distances. Journal of Statistical Planning and Inference 2013, 143(8), 1249–1272. [Tukey(1974)]tukey Tukey, J. W. Mathematics and the picturing of data. Canadian Mathematical Congress 1974 2, 523–531. [Welch(1990)]Welch 1990 Welch, W. Construction of permutation tests. Journal of the American Statistical Association 1990 85 (411), 693-698 [Wilcoxon(1992)]Wilcoxon1992 Wilcoxon, F. Individual comparisons by ranking methods. Springer New York 1992, 196-202 [Zuo and He(2006)]zuo2006limiting Zuo, Y. and He, X. On the limiting distributions of multivariate depth-based rank sum statistics and related tests. The Annals of Statistics 2006, 24(6), 2879–2896. [Zuo and Serfling(2000)]Serfling2000 Zuo, Y. and Serfling, R. General notions of statistical depth function. The Annals of Statistics 2000, 28, 461-482.
http://arxiv.org/abs/2408.12177v1
20240822074941
Revisiting the Phenomenon of Syntactic Complexity Convergence on German Dialogue Data
[ "Yu Wang", "Hendrik Buschmeier" ]
cs.CL
[ "cs.CL" ]
§ ABSTRACT We revisit the phenomenon of syntactic complexity convergence in conversational interaction, originally found for English dialogue, which has theoretical implication for dialogical concepts such as mutual understanding. We use a modified metric to quantify syntactic complexity based on dependency parsing. The results show that syntactic complexity convergence can be statistically confirmed in one of three selected German datasets that were analysed. Given that the dataset which shows such convergence is much larger than the other two selected datasets, the empirical results indicate a certain degree of linguistic generality of syntactic complexity convergence in conversational interaction. We also found a different type of syntactic complexity convergence in one of the datasets while further investigation is still necessary. § INTRODUCTION The interactive alignment theory <cit.> states that, in interaction, mutual understanding is reached through the support of adaptive processes, which result in a reduction of the communicative efforts of the dialogue participants. <cit.> have mentioned the co-adaptivity of interlocutors' verbal behaviour on the following six levels: phonetic, phonological, lexical, syntactic, semantic and situational. Several studies have comprehensively explored the co-adaptivity in interlocutors on the linguistic structure of the above-mentioned levels. For example, the empirical results from perception tasks in <cit.> verify the increasing similarity of the phonetic repertoire, which indicates phonetic convergence during conversational interaction. <cit.>, in their lab-based study, show that interlocutors in conversational interaction coordinate their utterances to form a mutually acceptable form of description, which indicates the convergence of lexical choice in interaction. In this paper, we focus on linguistic alignment on the syntactic level. Our argument is that with the development of mutual understanding during conversational interaction, certain types of syntactic convergence can be observed. Previous studies found alignment of syntactic complexity, but only for English data, which lacks linguistic generality. Therefore, we try to find more empirical evidence to show that syntactic alignment happens in other languages, such as German, too. The goal of this paper is to revisit the syntactic complexity convergence phenomenon discussed by <cit.> and test whether it holds for German dialogue data, too. To this end, we selected the following three conversation datasets for German: MUNDEX <cit.>, TexPrax <cit.>, and VERBMOBIL (VM2) <cit.>.
§ BACKGROUND §.§ Dependency Structure In this paper, we quantify syntactic complexity with the help of dependency parsing <cit.>. We follow the definition of dependency structure by <cit.>. A linguistic structure, such as a dependency structure, consists of relations of pairs of natural language tokens. Let Σ denote a finite set of natural language tokens (the vocabulary). Let V = {w_1, w_2, … ,w_N} denote a spanning node set with its element w_i ∈Σ^∗ <cit.>. The element w_i is a ‘head’ or a dependent in a dependency structure. The spanning node set V represents a sentence ω = w_1w_2… w_N. The dependency structure of the sentence ω is then a typed structure ζ = (V,E,R), where R is the set of dependency relation types, E ⊆ V × V × R the set of arcs, if (x,y,r) ∈ E, it holds that ∀ r ≠ r', (x,y,r') ∉ζ. Under the definition above, a dependency structure is typically a directed acyclic graph (DAG) and the dependency relations within the structure are binary and asymmetric. We use a statistic and neural sequential model based parsing method, namely the StanfordNLP parser https://stanfordnlp.github.io/stanza/depparse.htmlStanza <cit.> for our goal in this paper. Stanza is trained upon the Universal Dependencies (UD) Treebanks <cit.>. UD Treebanks store the information about the dependency relations among the lexicon, i.e., given a word, what are the most likely words that can serve as its heads or dependents in a dependency structure. The core idea can be mathematically expressed as follows based on <cit.>: P_head(w_j | w_i, ϑ) = exp(g(w_i,w_j))/∑_k=0^|ϑ|exp(g(w_i,w_k)) where ϑ is the lexicon, g(·) is a function which outputs the association score of one word choosing the other word as its head. P_head(w_j | w_i, ϑ) thus tells us what is the most likely head word w_j given the dependent word w_i and the lexicon. With the generated probability information, the maximum spanning tree algorithm, e.g., Chu-Liu/Edmonds algorithm <cit.> is then used to decide what is the most likely dependency structure for a given sentence. §.§ Syntactic Complexity The topic of syntactic complexity has been of significant interest for researchers working within either functional (cognitive) or computational frameworks of linguistics. According to <cit.>, syntactic complexity refers to syntactic structures which entail increasing cognitive load to parse and process. Sentences that are ranked as more syntactically complex are considered more difficult for humans to process <cit.>. <cit.> further summarizes three measures for evaluating the syntactic complexity, namely word counts, node counts, and a so-called “Index of Syntactic Complexity”. Word counts use length of a given sentence – number of words, syllables, intonation units – to approximate the syntactic complexity, which is based on the straightforward intuition, that a lengthy sentence tends to be more structurally complex than a short one. Node count uses the idea that the more phrasal nodes a linguistic unit dominates, the more complex a sentence is <cit.>. “Index of Syntactic Complexity” focuses on percentage of subordinate clauses <cit.> as well as embeddedness of word forms <cit.>, which is reflected by the following indicators (i) the number subordinating conjunctions, e.g., because, since, etc.; (ii) the number of WH-pronouns, e.g., what, which, etc.; (iii) embeddedness of the verb forms, e.g., finite or infinite; (iv) the number of noun phrases. 
According to <cit.>, the convergence of syntactic complexity between two speakers in dialogue correlates to two theories: one is the Interactive Alignment theory <cit.>, which combines the development of mutual understanding with linguistic alignment. The other is the Uniform Information Density hypothesis <cit.>, which states that speakers will strive to keep information density roughly constant. Based on this hypothesis, if a speaker decreases its information amount, the other will increase the amount instead. According to <cit.> and <cit.>, information density is expected to be proportional to the complexity of syntactic structure. This give us an implication that in a dialogue, if a speaker's syntactic complexity is decreasing, the interlocutor's syntactic complexity should be increasing. This implication is consistent with dependency locality theory <cit.>, which claims that comprehension difficulty is associated with some complex dependency structures. The interplay of syntactic complexity and language comprehension has been further investigated in, e.g., <cit.>, which shows that, average dependency distance positively correlates with the comprehension difficulty (processing effort). <cit.> then showed three measures to quantify the syntactic complexity: sentence length, branching factors, and tree depth. Tree depth is used to described how deep a syntactic tree can grow. The deeper a tree is, the more complex a sentence is considered. Branching factor reports the average number of children of all non-leaf nodes in the parse tree of a sentence. Thus, a syntactic tree that contains, e.g., more constituents or noun phrases within a sentence of a given length, is more complex. § DATA In order to check the dynamics of syntactic complexity in conversational interaction, we select the following three German datasets for our study: MUNDEX consists of task-oriented dialogues and focuses on explanation in interaction <cit.>. Each dialogue is a explanation scenario involving a speaker (the explainer) explaining how to play a board game to a recipient (the explainee). The dataset is still under construction but in total it consists of 87 dialogues between dyads of German native speakers. At its current stage, speech diarization was mainly performed automatically using Whisper ASR <cit.>. TexPrax consists of task-oriented dialogues from factory workers on how to solve specific technical issues <cit.>. The data are collected anonymously using an open source messaging application in a simulated factory environment. The dataset has in total 202 task-oriented German dialogues containing 1,027 sentences with sentence-level expert annotations, such as turn taking labels. The VERBMOBIL (VM2) dataset <cit.> is based on recordings of various appointment scheduling scenarios, and consists of 30,800 utterances collected in face-to-face interactions. All utterances are annotated with dialogue acts. The main difference among the three datasets is that in MUNDEX, compared to TexPrax and VM2, one speaker (the explainer) speaks much more than the other (explainee) in every dialogue. This property of the data has been well reflected in our later analysis (e.g., see Figure <ref> in Section <ref>). While for the other two datasets, utterance length among the participants is similar. Moreover, VM2 is much larger than the other two selected datasets. There are two common points among the three selected datasets. First of all, in each dialogue there are only two dialogue participants. 
For the speaker role assignment, we define the interlocutor who initiates the dialogue as dialogue initiator, the other interlocutor who follows the dialogue as dialogue follower. In this study specifically, we choose to give the role of dialogue initiator to the dialogue participant who starts the conversation. This is based on our observation in the three datasets that there are no topic shifts in the dialogues. For example, MUNDEX is based on a pre-defined scenario, where an explainer explains a board game to an explainee. Therefore, we do not consider that we need to shift participant roles, as in <cit.>, which uses the Switchboard dataset, where each dialogue may have multiple topic shifts. Secondly, at the end of the interactions, a certain level of mutual understanding can be estimated: in MUNDEX, the explainees are likely to understand the game rules and to be able to play the game; in TexPrax, the workers know the technical issues from their co-worker; in VM2 appointments have been successfully made in most of the cases. Under this preposition, in this study, by looking at the change of syntactic complexity, namely the phenomenon of syntactic complexity convergence, we assume that we can infer the level of mutual understanding with the development of the dialogue. § METHODS To quantify the syntactic complexity, we follow the measures developed in <cit.>, mainly looking at branching factor, tree depth, and sentence length. Given that all of the three factors can influence the syntactic complexity, it makes sense to quantify the three factors into a single value to represent the syntactic complexity. We use the number of heads (word count) as a normalisation factor. In dependency structure, the heads are the nodes which have both incoming and outgoing edges, the tree depths are the maximum number of arcs a tree can have from its root to a terminal node. Given two dependency structures with the same number of heads, if one structure has bigger length, it indicates that the heads in general controls more sub-nodes, and thus the structure is more complex. Given a speaker's utterance, we calculate utterance length L and use dependency parsing to get the number of heads α as well as the maximum tree depth β. The syntactic complexity SC of the utterance is thus computed as following: SC = λ·L/α + (1 - λ) ·β if α > 0 (1 - λ) ·β otherwise where λ is a tuning factor set to 0.5 by default. Here we use two German example sentences with corresponding dependency trees to show what is considered as syntactically complex. The example in Figure <ref> is a sentence which is considered syntactically simple based on our definition, its maximum tree depth is three and it only has three heads, its sentence length is four. The example in Figure <ref> in contrast is considered syntactically complex, its maximum tree depth is four and it has four heads, its length is 8. The quantified syntactic complexity for the first sentence, according to our method, is 2.167 (three heads as the root node is also considered as a head by Stanza parser, tree depth is three, length is four) while for the second one it is 3 (four heads as the root node is also considered as a head by Stanza parser, tree depth is four, length is eight). Moreover, utterances in the three selected datasets have varied length. According to our observation, a speaker may produce multiple utterances before the turn is shifted to a listener, which occurs frequently in the MUNDEX dataset. 
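To make the measure concrete, the following sketch is a direct transcription of the SC formula above; the function name and argument names are ours, and the two example calls reproduce the values reported for the two German example sentences.

def syntactic_complexity(length, n_heads, tree_depth, lam=0.5):
    """Syntactic complexity of one utterance (SC formula above).

    length     -- utterance length L in tokens
    n_heads    -- number of heads alpha in the dependency parse
    tree_depth -- maximum tree depth beta
    lam        -- tuning factor lambda (0.5 by default, as in this study)
    """
    if n_heads > 0:
        return lam * length / n_heads + (1.0 - lam) * tree_depth
    return (1.0 - lam) * tree_depth

# The two German example sentences discussed above:
print(round(syntactic_complexity(length=4, n_heads=3, tree_depth=3), 3))  # 2.167 (simple sentence)
print(round(syntactic_complexity(length=8, n_heads=4, tree_depth=4), 3))  # 3.0 (complex sentence)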
Given this variation in utterance length, it is not appropriate to calculate syntactic complexity values on a turn-by-turn basis. As a simple solution, for both dialogue initiator and follower, we calculate the syntactic complexity value on an utterance-by-utterance basis. We perform data separation based on the role definition mentioned in Section <ref>. § RESULTS AND DISCUSSION To verify the convergence of syntactic complexity between two speakers in dialogue, we use a linear mixed-effects regression model to model the dynamics of syntactic complexity (statistics in Table <ref>; all reported beta coefficient values are statistically significant). It turns out that among the three selected datasets, only VM2 shows syntactic complexity convergence: the beta coefficient is negative for the dialogue initiators, which indicates that their syntactic complexity generally decreases as the utterance position advances, whereas the opposite tendency can be observed for the dialogue followers, whose beta coefficient is positive. As for the other two selected datasets, in MUNDEX the beta coefficient value is positive for both dialogue initiators and followers, while in TexPrax the beta coefficient value is negative for both dialogue initiators and followers, which indicates that syntactic complexity convergence is not supported by the statistics. Looking at the plots in Figure <ref>, the increasing/decreasing tendencies are small but still visible in VM2. This can be explained, at least in part, by the relatively small values of the beta coefficients. Nevertheless, given that the range of syntactic complexity values is not so large (see Table <ref>), we assume that the reported effect sizes are valid. For the MUNDEX dataset, it turns out that the dialogue followers' syntactic complexity is gradually increasing, while the dialogue initiators' syntactic complexity remains quite stable, although it is slightly increasing as well. We consider this a different type of syntactic complexity convergence. One possible explanation could be that, in MUNDEX's scenario, the explainers have to continuously introduce different rules and constraints of the game, and thus the syntactic complexity value for dialogue initiators slightly increased (as evidenced by the statistics in Table <ref>). The dialogue followers, in turn, became more engaged as the explanation developed and thus started to use more complex structures or produce longer utterances. In the TexPrax dataset, a general decreasing trend can be observed for both dialogue initiators and followers, which is in general not consistent with the phenomenon of syntactic complexity convergence. From an information-theoretic perspective, the convergence of syntactic complexity between dialogue participants reflects the convergence of shared information <cit.>, which is seen as evidence that dialogue participants are working co-constructively to build common ground <cit.>. The results reported in this study show that the convergence of syntactic complexity as a linguistic phenomenon can be observed in dialogues (1) in different languages (e.g., in English and at least partially in German) and (2) under different scenarios (e.g., explaining a game in MUNDEX or making an appointment in VM2). 
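For reproducibility, the kind of role-wise analysis reported above can be run with an off-the-shelf mixed-effects implementation. The sketch below only indicates the shape of such a model, syntactic complexity regressed on utterance position with dialogues as random groups; the column names, the exact fixed- and random-effects structure, and the software actually used in this study are assumptions on our part.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical utterance-level table with columns:
#   sc        -- syntactic complexity value of the utterance
#   position  -- position of the utterance within the dialogue
#   dialogue  -- dialogue identifier (random grouping factor)
#   role      -- "initiator" or "follower"
df = pd.read_csv("utterance_level_sc.csv")

for role in ("initiator", "follower"):
    sub = df[df["role"] == role]
    model = smf.mixedlm("sc ~ position", sub, groups=sub["dialogue"])
    result = model.fit()
    # The coefficient of 'position' corresponds to the beta values discussed above.
    print(role, result.params["position"], result.pvalues["position"])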
§ CONCLUSIONS In this paper, we revisit the phenomenon of syntactic complexity convergence by examining it specifically for German dialogue data. The convergence of syntactic complexity is assumed to be strongly related to the uniform information density hypothesis as well as to the interactive alignment theory, which correlates the development of mutual understanding with linguistic alignment. Our empirical results show that the convergence also exists in one of the three German dialogue datasets we analysed, which provides further evidence for the generality of syntactic complexity convergence. Given that the VM2 dataset is much larger than the other two datasets, we are inclined to claim that syntactic complexity convergence has a certain degree of linguistic generality. We also found a different type of syntactic complexity convergence in the MUNDEX dataset, although further investigation is still necessary. § ACKNOWLEDGEMENTS This research was funded by the Deutsche Forschungsgemeinschaft (DFG, https://www.dfg.de): TRR 318/1 2021 – 438445824 (https://gepris.dfg.de/gepris/projekt/438445824). § LIMITATIONS When processing German utterances, we did not consider possible solutions to deal with disfluencies. One possible solution would have been to replace disfluent sentences with fluent (i.e., grammatical) ones. This, however, could change the syntactic complexity values. In order to take into account the effect of disfluencies on syntactic complexity, an empirical study on whether disfluencies increase syntactic complexity needs to be carried out beforehand. Another issue we have not explored further is whether linear models are optimal for our data analysis. A potential direction for future work is to fit a model with a quadratic term for hypothesis testing. § ETHICS STATEMENT Given the scope of this study, there do not appear to be any ethical issues.
http://arxiv.org/abs/2408.12319v1
20240822115543
Neural-ANOVA: Model Decomposition for Interpretable Machine Learning
[ "Steffen Limmer", "Steffen Udluft", "Clemens Otte" ]
stat.ML
[ "stat.ML", "cs.LG" ]
§ ABSTRACT The analysis of variance (ANOVA) decomposition offers a systematic method to understand the interaction effects that contribute to a specific decision output. In this paper we introduce Neural-ANOVA, an approach to decompose neural networks into glassbox models using the ANOVA decomposition. Our approach formulates a learning problem, which enables rapid and closed-form evaluation of integrals over subspaces that appear in the calculation of the ANOVA decomposition. Finally, we conduct numerical experiments to illustrate the advantages of enhanced interpretability and model validation by a decomposition of the learned interaction effects. § INTRODUCTION Deploying machine learning models for regression or control tasks in industrial settings often entails meeting specific certification requirements. These requirements can vary depending on the application domain and the criticality of the task, and may ultimately determine whether a particular machine learning model can be used. Ensuring compliance may involve testing the model against a series of cases curated by domain experts or conducting comprehensive evaluations under adverse operating conditions to confirm that the model accurately captures expected interaction effects. In addition to certification, machine learning models intended for industrial use must often satisfy robustness and explainability criteria. A challenge in this context may be handling missing data, which can arise from various issues such as sensor failures, preprocessing errors, connectivity problems, calibration faults, or data corruption during storage. Addressing missing or corrupted data is particularly problematic for industrial machine learning models operating at short cycle times (e.g., less than 1 ms). In such cases, advanced imputation techniques can be too slow, and simpler methods like mean or median imputation may not provide the necessary performance. Another critical challenge involves ensuring transparency and providing explanations for the decision-making processes of AI systems. Techniques collectively referred to as Explainable AI (XAI) aim to mitigate the "black box" nature of models like neural networks by elucidating the dependencies that lead to specific decisions. Achieving XAI is especially crucial for control systems or neural process models, where comprehending the decisions is essential. The functional analysis of variance (ANOVA) decomposition addresses these challenges by separating interaction effects in order to gain deeper insights into the effects and dependencies between input variables and the output variable, owing to its ability to decompose complex relationships into lower-order effects. The ANOVA decomposition has proven valuable in various industrial domains such as modeling of batteries <cit.> and fluid flows <cit.>. A primary challenge in computing the ANOVA decomposition arises from the need to evaluate higher-dimensional integrals over subspaces of the input domain. Often, this problem is addressed by numerical approximation techniques or by restricting the approximation space to random forests <cit.> or spline functions <cit.>, for which efficient integration techniques are available. However, each of these approaches introduces an approximation error or model bias, or admits limited expressivity for a given task. 
In this study, we introduce a novel method for applying the ANOVA decomposition based on standard neural networks, resulting in models that are more interpretable and suitable for industrial machine learning applications. We refer to these models as Neural-ANOVA models. Our key contributions are as follows: * We introduce a novel learning formulation that enables rapid and closed-form evaluation of integrals over subspaces appearing in the ANOVA decomposition of neural networks. * We demonstrate that Neural-ANOVA models include Generalized Additive Models (GAMs) as a special case, showing comparable performance across various datasets. Our proposed framework supports diverse activation functions and layer sizes, utilizing only nested automatic differentiation and the sum of evaluations. * Through extensive evaluations on various regression tasks, encompassing both synthetic test functions and real-world industrial datasets, we show that Neural-ANOVA models can outperform GAMs by incorporating appropriate higher-order interactions. § RELATED WORK §.§ Generalized and Neural Additive Models Generalized Additive Models (GAMs) <cit.> are a powerful and versatile approach for machine learning problems. They extend Generalized Linear Models (GLMs) by incorporating non-linear relationships between features and the target variable through flexible shape functions. GAMs are applicable to both regression and classification tasks and have been successfully used in various domains such as healthcare or finance <cit.>. A key advantage of GAMs is their interpretability, which stems from their structure of univariate interactions f(x) = f_0 + ∑_k=1^K f_k(x_k), or including also bivariate interactions f(x) = f_0 + ∑_k=1^K f_k(x_k) + ∑_k=1^K∑_l<k f_kl(x_k, x_l). The influence of each feature on the prediction can be comprehensively understood by visualizing its corresponding shape functions. Various methods are available for fitting GAMs. One traditional method is backfitting <cit.>, which iteratively updates the components of the model by sequentially refitting them. Another common approach involves spline-based regression <cit.>. More recently, several machine learning approaches have been proposed that leverage conventional gradient descent algorithms. Notably, Neural Additive Models (NAMs) <cit.> use neural networks to represent the shape functions and are trained using standard stochastic gradient descent techniques. However, the authors note some considerations when using NAMs. Computationally, the optimization process can be challenging and demands careful selection of hyperparameters and application of regularization techniques. Furthermore, choosing the appropriate representation for shape functions is crucial to avoid overfitting or underfitting the data, necessitating careful consideration and experimentation. §.§ ANOVA Decomposition The functional ANOVA decomposition <cit.> is a statistical technique for the dimension-wise decomposition of a square-integrable function f: X^K →R into a sum of lower-dimensional functions f_S according to f(x) = ∑_S⊆K f_S(x_S). Here, each function f_S only depends on a subset of variables indexed by the set S⊆K and the sum ranges over all 2^K subsets of K := {1, …, K}. 
A specific construction and algorithm was proposed in <cit.>, necessitating the computation of several multidimensional integrals of the form f_S(x_S) = ∫_X^K - |S| f(x) d x_K\S - ∑_U⊊S f_U (x_U), where first term represents an integral over a subset of variables, while the second term subtracts all proper subsets in a manner similar to backfitting. The resulting computational algorithm is detailed in Alg. <ref>. Using this approach, one can demonstrate that all terms f_S are orthogonal with respect to the inner product ⟨ f,g⟩ = ∫ f(x) · g(x) dx. Additionally, this construction exhibits the favorable property that the functional variance σ^2 = ∫ f^2(x) dx - ( ∫ f dx)^2 can be decomposed into the sum of individual component variances σ^2 = ∑_Sσ_S^2 = ∑_S∫ f^2_S(x_S) dx_S. Furthermore, it can be shown that the decomposition is minimal in the sense that no unnecessary terms are being introduced in the decomposition. To illustrate this minimality, consider a function f(x_1,x_2) = 2x_1 where the ANOVA decomposition ensures that no unfavorable non-minimal terms such as f(x_1,x_2) = x_1 - x_2 + (x_1 + x_2) are introduced <cit.>. The minimality property also allows to define meaningful dimensionalities for a function. For instance, one such dimension can be described as the superposition dimension of a function, defined as f(x) = ∑_|S|≤ d_s f_S(x_S), where the variance decomposes according to ∑_|S|≤ d_sσ_S^2 = σ^2. In other words, if a function f has an effective superposition dimension d_s, it implies that interactions involve no more than d_s variables. Furthermore, if a function has an effective superposition dimension of 1, it indicates the existence of an ideal regressor in the form of a Generalized Additive Model (GAM). The truncation dimension is another meaningful quantity that is said to hold with dimension d_t if there exists a set of truncation variables 𝒯 with |K\𝒯| = d_t such that f(x) = ∑_S⊆K\T f_S(x_S), with ∑_S⊆K\Tσ_S^2 = σ^2. Using the truncation dimension, we can identify sets of relevant and irrelevant variables. Additionally, we can use the truncated sum (<ref>) to approximate the function if the variables in the set T are unavailable, e.g., due to sensor corruption or processing errors. However, in such scenarios, we should not expect a perfect approximation, meaning the equalities in (<ref>, <ref>) will not hold. Various methods for numerically approximating the ANOVA decomposition have been introduced in the literature. These methods include approaches based on random forests <cit.> and orthonormal systems utilizing polynomial or Fourier basis functions <cit.>. Each approach incorporates different model-specific approximation techniques for evaluating the integral (<ref>). The effectiveness of these approximation schemes can be constrained by the expressivity of the chosen model or the maximum order of interactions that can be efficiently included in the numerical approximation process. Moreover, integrating a numerical approximation scheme into the training loop of a machine learning model is challenging. This difficulty arises from the need to balance the number of required evaluations with the acceptable level of approximation error. For example, <cit.> report needing approximately ten thousand function evaluations to achieve an acceptable approximation error in a five-dimensional setting using quasi-Monte Carlo integration. §.§ Automatic Integration Analytical integration is generally considered more challenging than differentiation. 
Various strategies for exact integration include variable substitution, integration by parts, and partial fractions. Closed-form solutions for general antiderivatives, i.e., indefinite integrals, are limited to a small class of functions and often involve complex algorithms such as the Risch algorithm <cit.>. Numerical integration methods, including Riemann sums, quadratures, and Monte Carlo methods <cit.>, are commonly used in practice. These methods typically require a tradeoff between the number of samples and accuracy. Neural networks, being universal function approximators, can also be utilized for analytical integration within the framework of automatic integration <cit.>. This technique involves training a neural network to approximate the antiderivative so that integrals can be obtained by evaluating the trained network at the boundary points of the integration domain. The approach relies on taking derivatives of the neural network, applied repeatedly to all input coordinates and subsequently used to fit the training data. Using this method enables the computation of any definite D-dimensional integral using 2^D evaluations of a neural network. It has inspired a range of applications, such as neural radiance fields <cit.>, tomography <cit.>, pathloss prediction <cit.> and neural point processes <cit.>. § NEURAL ANOVA DECOMPOSITION In this section, we present our main contribution, which provides a rapid and closed-form evaluation of integrals over subspaces of the type given by (<ref>) in the ANOVA decomposition, utilizing neural networks. §.§ Bivariate Example We begin by demonstrating the fundamental process of automatic integration using a sample bivariate function, f(x_1, x_2), to emphasize the differences in the training approach. Conventional neural network training typically involves minimizing a loss function of the form r(θ) = ∑_i ϕ( f(x_1^(i),x_2^(i)) - NN(θ,x_1^(i),x_2^(i)) ), where ϕ denotes an appropriate loss function, such as the absolute error or squared error. In the proposed method, we aim to fit samples of a given function f(x_1, x_2) while simultaneously calculating integrals over the input domain. The work <cit.> suggests training a neural network NN(θ, x_1, x_2) by differentiating the network with respect to all its input coordinates, specifically evaluating its mixed partial derivative. The training process involves minimizing a loss function defined as r(θ) = ∑_i ϕ( f(x_1^(i),x_2^(i)) - d/dx_1d/dx_2NN(θ,x_1^(i),x_2^(i)) ). To ensure computational efficiency, the term d/dx_1d/dx_2NN(θ,x_1,x_2) can be compiled just-in-time and evaluated during the training process using standard techniques in automatic differentiation. After successful optimization, the optimized neural network parameters, denoted as θ^⋆, are obtained. Integrals can then be computed by evaluating the neural network at the corner points of the integration domain, [l_1, u_1] × [l_2, u_2] according to ∫_l_1^u_1∫_l_2^u_2 f(x_1,x_2) dx_1 dx_2 = NN(θ,l_1,l_2) - NN(θ,u_1,l_2) - NN(θ,l_1,u_2) + NN(θ,u_1,u_2) := NN(θ,x_1,x_2) |_x_1,x_2 ∈ (l_1,u_1)× (l_2,u_2) . §.§ High-dimensional Generalization Next, we present the generalization of automatic integration to calculate higher-dimensional integrals that appear in Alg. <ref> for a function comprising K input features and one target output, i.e., f: R^K →R. To this end, the neural network NN_θ(x) : R^K →R is trained using the loss r(θ) = ∑_i ϕ( f(x^(i)) - d/dx_1⋯d/dx_KNN(θ,x^(i)) ). 
Then, we can establish the following relation between (i) the trained neural network, (ii) the general anti-derivative (integral) and (iii) the definite anti-derivative (integral) by using the fundamental theorem of calculus <cit.> according to f(x) = d/dx_1⋯d/dx_KNN(x) ⇕ ∫ f(x) dx = NN(x) ⇕ ∫_l^u f(x) dx = ∑_x∈ (l_1,u_1)×⋯× (l_K,u_K) (-1)^s NN(x), where s denotes the multiplicity of lower bounds in the evaluated expression. Using this relation, we can verify that integration over a single variables (e.g. x_1) can be obtained for instance in the 3-dimensional case by ∫_l_1^u_1 f(x_1, x_2, x_3) dx_1 = d/dx_2d/dx_3NN(x) |_x_1 ∈ (l_1,u_1) and integrals over a subset of two variables (e.g. x_2,x_3) can be obtained by ∫ f(x_1, x_2, x_3) dx_2 dx_3 = d/dx_1NN(x) |_x_2,x_3 ∈ (l_2,u_2)× (l_3,u_3) . §.§ Summary of Algorithm We now present the main result of this paper: a computational algorithm designed to train a neural network, denoted as NN, which allows for a closed-form decomposition into lower-dimensional subnetworks NN_𝒮. This method is termed Neural-ANOVA and summarized in Alg. <ref>. In Alg. <ref>, the following steps are performed in order to calculate the required integrals of the ANOVA decomposition as a signed sum of multiple evaluations of the trained model and the functional transformation of differentiation. First, the model is trained using the loss function specified in (<ref>) where the model is differentiated w.r.t. all input variables. Second, we compute the integral over the subspace spanned by the variables x_S^c (cf. (<ref>)) according to I_S(x_S) = ∫_S^cNN(x) dx_S^c = d^|𝒮|/d x_𝒮NN(x) |_x_S^c∈ (0,1)^|S^c | = ∑_x_S^c∈ (0,1)^|S^c |( (-1)^s d^|𝒮|/d x_𝒮NN(x) ) . Here, the sign exponent s denotes the multiplicity of lower bounds in the evaluated expression and S^c := K\S the complement of S over the full index set K. In other words, in (<ref>) the trained model is first differentiated w.r.t. the variables in the active-set S and then evaluated at the 2^|S^c | corner points of the inactive-set S^c that are to be integrated over so that the result is a function of only the variables x_S. Lastly, the Neural-ANOVA component NN_S is obtained by using the integral calculated in (<ref>) and subtracting the components of all proper subsets NN_S(x_S) = ∫_S^cNN(x) dx_S^c - ∑_U⊊SNN_U (x_U). The complete resulting algorithm to obtain Neural-ANOVA is provided in Alg. <ref> where all the neural network terms can be calculated fast and in closed-form at runtime and the variances σ^2, σ_S^2 can be obtained offline by standard numerical methods such as Monte Carlo approximation. One approach to calculate the mixed partial derivative in (<ref>) supported by standard automatic differentiation frameworks is to apply nested differentiation. While the implementation of this approach is straight forward e.g. in the automatic differentiation framework JAX <cit.>, it requires traversing the original computation graph multiple times which may result in redundant computations as was noted in <cit.>. We highlight that also more sophisticated methods for calculating the mixed partial derivative exist compared to the nested approach such as Taylor series approximation, which was shown to admit favorable runtime properties for calculating higher-order derivatives <cit.>. In this paper, we chose to retain the nested approach approach as we observe satisfactory runtime and numerical stability up to moderate dimension K ≤ 10. 
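To make the training objective and the corner-point evaluation described above concrete, the following self-contained JAX sketch fits a small network whose mixed partial derivative approximates a bivariate target and then recovers integrals from signed evaluations of the network and of its first derivative. It is a minimal illustration of the nested-differentiation approach, not the implementation used in our experiments; the network size, optimizer, number of iterations and toy target function are arbitrary choices.

import jax
import jax.numpy as jnp

def mlp(params, x1, x2):
    """Small smooth network NN(theta, x1, x2) -> scalar with sigmoid activations."""
    h = jnp.array([x1, x2])
    for W, b in params[:-1]:
        h = jax.nn.sigmoid(W @ h + b)
    W, b = params[-1]
    return (W @ h + b)[0]

# Mixed partial derivative d^2 NN / (dx1 dx2) via nested automatic differentiation.
d_dx1 = jax.grad(mlp, argnums=1)
d2_dx1dx2 = jax.grad(d_dx1, argnums=2)

def loss(params, x, y):
    preds = jax.vmap(lambda row: d2_dx1dx2(params, row[0], row[1]))(x)
    return jnp.mean((preds - y) ** 2)

def init_params(key, sizes=(2, 32, 32, 1)):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, k1, k2 = jax.random.split(key, 3)
        params.append((jax.random.normal(k1, (n_out, n_in)) / jnp.sqrt(n_in),
                       0.01 * jax.random.normal(k2, (n_out,))))
    return params

key = jax.random.PRNGKey(0)
params = init_params(key)

# Toy training data: f(x1, x2) = sin(pi x1) * x2 on the unit square.
key, kx = jax.random.split(key)
x = jax.random.uniform(kx, (512, 2))
y = jnp.sin(jnp.pi * x[:, 0]) * x[:, 1]

@jax.jit
def step(params, x, y, lr=1e-2):
    grads = jax.grad(loss)(params, x, y)
    return jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)

for _ in range(2000):
    params = step(params, x, y)

# Integral over the whole unit square from four corner evaluations of NN:
full = (mlp(params, 1.0, 1.0) - mlp(params, 0.0, 1.0)
        - mlp(params, 1.0, 0.0) + mlp(params, 0.0, 0.0))
print(full)  # exact value: 1/pi, about 0.318 (accuracy depends on the fit)

# Integral over x2 only, leaving a function of x1 (the one-variable case above):
marginal_x1 = lambda t: d_dx1(params, t, 1.0) - d_dx1(params, t, 0.0)
print(marginal_x1(0.5))  # exact value: 0.5

Passing from this bivariate toy case to Alg. <ref> only changes the bookkeeping: the derivative is taken with respect to the active variables x_S and the signed sum runs over the corner points of the inactive variables x_S^c.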
§.§ Numerical Example This section presents a concise numerical example for a common test-function from sensitivity analysis, namely the 3-dimensional Ishigami-function f(x)=sin(x_1)+a sin^2(x_2) + b x_3^4sin(x_1), with a=7, b=0.1. We normalize the input and output domain and present the generated data for x_3=0 in Fig. <ref>. The loss function for training the neural network is defined as r(θ) = ∑_i ( f(x_1,x_2,x_3) - d^3/d x_1 d x_2 dx_3NN(x_1,x_2,x_3) )^2. Next, we find the terms of the Neural-ANOVA decomposition using the trained network according to NN_∅ = NN(u_1,u_2,u_3) - NN(u_1,l_2,u_3) -NN(l_1,u_2,u_3) + NN(l_1,l_2,u_3) -NN(u_1,u_2,l_3) + NN(u_1,l_2,l_3) + NN(l_1,u_2,l_3) - NN(l_1,l_2,l_3) NN_1(x_1) = d/dx_1NN(x_1,x_2,x_3)|_x_2,x_3 ∈ (l_2,u_2)× (l_3,u_3) - NN_∅ NN_1,2(x_1,x_2) = d/dx_1d/dx_2NN(x_1,x_2,x_3)|_x_3 ∈ (l_3,u_3) - NN_∅ - NN_1(x_1) - NN_2(x_2) Finally, we can evaluate and illustrate the decomposed function (cf. Fig. <ref>) and obtain sensitivities using a Monte Carlo estimate according to Alg. <ref>. We see in Tab. <ref> that the sensitivities match well with their closed form expressions <cit.>. § EXPERIMENTS This section presents the results of numerical experiments performed on simulation functions from the field of sensitivity analysis and real-world industrial applications. For sensitivity analysis, we use sampled data of the simulation functions Ishigami, OTL Circuit and Piston from the UQTestFuns-library <cit.>. The training, validation and testing data is generated by evaluating the function using the default sampling distributions and min-max scaling of input and output domain to [0,1]. The primary objective of these experiments is to evaluate the expressive power and generalization capabilities of mixed partial derivative networks with different activation functions. The study also examines function properties such as superposition and truncation dimensions. As industrial datasets we consider Airfoil Self-Noise (ASN), Combined Cycle Power Plant (CCP) and Concrete Compressive Strength (CCS) datasets <cit.>. We provide an overview of the considered datasets in Tab. <ref>. In all cases the data is split into 60 / 20 / 20 ratio for training, validation, and, testing, respectively. We use JAX <cit.> to implement both Neural-ANOVA and an MLP baseline. The MLP serves as benchmark for evaluating the expressive power and noise robustness of different architectures. We experiment with the following standard architectures: (i) 3 layers with 32 neurons and sigmoid activation, and ablations with (ii) {8,16,32,48} hidden neurons and {sigmoid, relu, swish, rep} activation where rep denotes the rectified polynomial function. The default architecture serves as the model for mixed partial derivative training in the N-ANOVA approach (<ref>) for the simulation functions ISH, CIR and PST and the MLP approach on all datasets. For the ASN, CCP, and CCS datasets, we observe, similar to <cit.>, the necessity of regularization due to the limited number of data points. For N-ANOVA, our empirical findings indicate that a two-layer architecture with rep activation, 16 neurons and ℓ_2-weight regularization provides satisfactory results for ASN and CCP, but led to a small number of divergent runs, which were excluded from the analysis. This issue could potentially be mitigated through more advanced hyperparameter tuning or by employing methods based on cloning the trained baseline MLP. 
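As a side note to the numerical example above, sensitivity values of the kind reported in Tab. <ref> can be estimated from the decomposed components by plain Monte Carlo estimation of the component variances. The sketch below assumes the component functions produced by Alg. <ref> are available as callables keyed by their active index sets; it merely illustrates the variance decomposition into the terms sigma_S^2 and is not the exact evaluation code used here.

import jax
import jax.numpy as jnp

def sensitivity_indices(components, key, n_samples=100_000, dim=3):
    """Monte Carlo estimate of sigma_S^2 / sigma^2 for each ANOVA component.

    components -- dict mapping a tuple of active variable indices S to a
                  callable NN_S acting on arrays of shape (n_samples, |S|)
    """
    x = jax.random.uniform(key, (n_samples, dim))      # uniform samples on [0, 1]^dim
    variances = {S: jnp.var(f(x[:, list(S)]))
                 for S, f in components.items() if S}  # skip the constant term
    total = sum(variances.values())
    return {S: float(v / total) for S, v in variances.items()}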
We also report results for Neural Additive Models (NAMs), which consist of three layers with 32 neurons and relu activation, following the JAX implementation[<https://github.com/Habush/nam_jax>] to maintain consistency with the experimental setup. In Tab. <ref>, we present a comparison of training time between N-ANOVA and a standard MLP and compare the model sizes of all three approaches in terms of trainable parameters. This confirms that the N-ANOVA model has an identical parameter count to the MLP and the NAM architecture typically contains more parameters due to the K independent feature networks. Tab. <ref> shows the impact of truncating variables on the Ishigami dataset to analyze the effect of reducing input variables, such as in scenarios with missing values. In Tab. <ref>, we report the root MSE (RMSE) and standard error on the test set, based on 10 runs with different random seeds. The trainings utilize validation early stopping and are obtained using the adam and bfgs optimizers. We find that MLP and N-ANOVA_∞ (all interactions) as well as NAM and N-ANOVA_1 (univariate interactions) perform similarly on the simulation functions. For datasets with a small sample count, NAMs demonstrate slightly superior generalization in the univariate setting. This performance can be matched by N-ANOVA_2 where bivariate interactions are included. However, N-ANOVA shows performance deterioration for small sample sizes, specifically for 1030 samples in the largest dimension K=8 of the CCS dataset. These results suggest the potential for developing mixed partial derivative architectures that generalize better in future research. For the Airfoil dataset, we also depict the shape functions of the N-ANOVA approach and the estimated sensitivities in Fig. <ref> where we see that the model assigns a strong impact to a small number of interactions. Finally, Fig. <ref> illustrates ablation studies on the stability of different models and under varying levels of additive noise. The results indicate that the mixed partial derivative networks within the N-ANOVA framework exhibit similar scaling and robustness behavior to a standard MLP architecture where the error level is slightly higher for the N-ANOVA networks. Notably, N-ANOVA models utilizing the relu activation function demonstrate a significant loss in expressive power when subjected to differentiation and the rep activation shows promising robustness to higher noise levels. § CONCLUSION In this paper, we present an efficient method for computing the functional ANOVA decomposition using neural networks to quantify learned interaction effects across multiple datasets. We derive a novel learning problem focused on computing integrals over subspaces essential to the ANOVA decomposition and demonstrate how this algorithm can decompose a network by fitting the mixed partial derivative to the training data. Our approach is empirically validated on various test functions from uncertainty quantification and real-world industrial datasets, confirming the accuracy of the functional decomposition. We also show that the Neural-ANOVA approach can specialize to obtain a generalized additive model. The method provides a principled way to analyze interaction effects, offering deeper insights into training results and the implications of using a specific trained model, allowing domain experts to certify particular use cases. Further research may address more taylored architectures that maintain higher expressive power or generalization under differentiation. 
Our implementation will be made available with the paper.
http://arxiv.org/abs/2408.12534v1
20240822163845
Automatic Organ and Pan-cancer Segmentation in Abdomen CT: the FLARE 2023 Challenge
[ "Jun Ma", "Yao Zhang", "Song Gu", "Cheng Ge", "Ershuai Wang", "Qin Zhou", "Ziyan Huang", "Pengju Lyu", "Jian He", "Bo Wang" ]
eess.IV
[ "eess.IV", "cs.AI", "cs.CV" ]
§ ABSTRACT Organ and cancer segmentation in abdomen Computed Tomography (CT) scans is the prerequisite for precise cancer diagnosis and treatment. Most existing benchmarks and algorithms are tailored to specific cancer types, limiting their ability to provide comprehensive cancer analysis. This work presents the first international competition on abdominal organ and pan-cancer segmentation by providing a large-scale and diverse dataset, including 4650 CT scans with various cancer types from over 40 medical centers. The winning team established a new state-of-the-art with a deep learning-based cascaded framework, achieving average Dice Similarity Coefficient (DSC) scores of 92.3% for organs and 64.9% for lesions on the hidden multi-national testing set. The dataset and code of top teams are publicly available, offering a benchmark platform to drive further innovations <https://codalab.lisn.upsaclay.fr/competitions/12239>. Automatic Organ and Pan-cancer Segmentation in Abdomen CT: the FLARE 2023 Challenge Jun Ma, Yao Zhang, Song Gu, Cheng Ge, Ershuai Wang, Qin Zhou, Ziyan Huang, Pengju Lyu, Jian He, and Bo Wang Jun Ma is with the Department of Laboratory Medicine and Pathobiology, University of Toronto; Peter Munk Cardiac Center, UHN AI Hub, University Health Network; Vector Institute, Toronto, Canada Yao Zhang is with AI Lab, Lenovo Research, Beijing, China Song Gu is with the Department of Image Reconstruction, Nanjing Anke Medical Technology Co., Ltd., Nanjing, China Cheng Ge is with Ocean University of China, Qingdao, China Ershuai Wang is with Department of Research and Development, ShenZhen Yorktal DMIT Co. LTD., Shenzhen, China Qin Zhou is with Institute of Medical Robotics, School of Biomedical Engineering, Shanghai Jiao Tong University; Department of Computer Science and Engineering, East China University of Science and Technology, Shanghai, China Ziyan Huang is with Shanghai Jiao Tong University; Shanghai AI Laboratory, Shanghai, China Pengju Lyu is with City University of Macau, Macau, China; Hanglok-Tech Co., Ltd., Hengqin, China Jian He is with the Department of Nuclear Medicine, Nanjing Drum Tower Hospital, Nanjing, China Bo Wang is with the Peter Munk Cardiac Center, University Health Network; Department of Laboratory Medicine and Pathobiology and Department of Computer Science, University of Toronto; Vector Institute; UHN AI Hub, University Health Network, Toronto, Canada Corresponding author: Bo Wang. 
E-mail: bowang@vectorinstitute.ai August 26, 2024 ========================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= § INTRODUCTION Abdomen organs are quite common cancer sites, such as colorectal cancer and pancreatic cancer, which are the second and third most common cause of cancer death <cit.>. Computed Tomography (CT) scanning yields important prognostic information for cancer patients and is a widely used imaging technology for cancer diagnosis and treatment monitoring <cit.>. In both clinical trials and daily clinical practice, radiologists and clinicians measure the tumor and organ on CT scans based on manual measurements (e.g., Response Evaluation Criteria In Solid Tumors (RECIST) criteria) <cit.>. However, this manual assessment is inherently subjective with considerable inter- and intra-expert variability and cannot measure the 3D tumor morphology. Deep learning-based methods have shown great potential for automatic tumor segmentation and quantification. Many challenges have been established to benchmark algorithm performance by providing standard datasets and fair evaluation platforms, such as the brain tumor segmentation (BraTS) <cit.>, liver and liver tumor segmentation <cit.>, kidney and kidney tumor segmentation <cit.>, and pancreas and colon lesion segmentation <cit.>. These challenges have greatly advanced algorithm development <cit.>, but they only focus on one type of lesion (e.g., liver cancer, kidney cancer, or pancreas cancer), which cannot provide holistic lesion analysis. Pan-cancer segmentation in abdomen CT plays an important role in clinical practice because lesions can spread from one organ to another organ. For example, pancreas cancer and colorectal cancer could transfer to the liver, leading to liver metastases. There is a great need for general algorithms that can segment all kinds of lesions from CT scans. Recently, universal lesion segmentation algorithms for abdomen CT have received increasingly attention <cit.>. However, these algorithms are developed and evaluated under various datasets, leading to difficulties in fairly comparing them. 
The main barrier is the lack of a general benchmark platform and publicly available dataset. In this work, we addressed the limitation by providing the largest abdominal pan-cancer dataset and organized the first international competition to prompt the development of universal abdominal organ and lesion segmentation algorithms. In particular, we curated a diverse abdominal pan-cancer dataset with 4650 CT scans, covering various abdomen cancers from 50 medical centers, which is the most comprehensive abdomen pan-cancer dataset to date. The competition attracted 292 participants from all over the world. The winning algorithm surpassed existing state-of-the-art models and achieved average DSC scores of 92.31%± 3.3 and 64.9%± 27.4 for organ and lesion, respectively. The inference pipeline consumed less than 4 GB GPU memory with an average runtime of 8.58 ± 1.92 seconds, which can be deployed on consumer desktops. § RESULTS §.§ Challenge design This challenge aimed to prompt the methodology development of fully automatic abdominal organ and lesion segmentation algorithms in CT scans. Different from the previous FLARE challenges designed for pure organ segmentation <cit.>, the FLARE 2023 challenge introduced three improvements (<ref>a). First, the challenge task was expanded to joint organ and lesion segmentation, including 13 abdominal organs and one general lesion class. Notably, the lesion class focused on abdominal pan-cancer segmentation rather than a single type of cancer. Second, we increased the dataset size from 2050 to 4000 CT scans, the largest abdomen CT benchmark, for model development. Third, we formulated the challenge as a partially supervised learning task instead of fully supervised learning because only parts of organs or lesions are labeled in clinical practice, and annotating all regions of interest is expensive. During the past decade, abdominal organ and lesion segmentation and partial-label learning methods have received increasing attention, but these approaches are developed with different datasets and evaluation metrics, which has led to difficulties in fair comparisons in various studies <cit.>. This challenge is set up in a timely way to bridge the gap. The challenge design followed the Biomedical Image Analysis ChallengeS (BIAS) <cit.>, which has been pre-registered at the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023) <cit.>[https://conferences.miccai.org/2023/en/challenges.asp]. We also provided the BIAS <cit.> and CLAIM (Checklist for Artificial Intelligence in Medical Imaging) <cit.> checklists in the supplementary. The challenge consisted of two phases (Fig. <ref>b). During the development phase, participants received 2200 partially labeled cases and 1800 unlabeled cases CT scans for model training. We also provided 100 fully labeled cases for model tuning and set up a public leaderboard where participants can submit segmentation masks and compare the performance with others. Each team can submit up to three results on the leaderboard per day. During the testing phase, we held a hidden testing set with 400 fully labeled cases and one of their reference standard was publicly available. Each team was required to submit their algorithm via the docker container and we manually ran the docker container on the same workstation to generate the testing set segmentation results. To avoid overfitting the testing set, each team can only submit one algorithm docker container. 
§.§ Dataset characteristic CT scans are very diverse in different medical centers because of various imaging protocols, manufacturers, and diseases. We curated the challenge dataset by aggregating CT scans from multiple medical centers (Fig. <ref>c, Supplementary Table 1-5) and all the datasets were allowed to be used and shared for research purposes. The training and tuning sets were mainly from North America and Europe while the testing set contained unseen centers from Asia, aiming to evaluate the generalization ability of the submitted algorithms. Moreover, the dataset covered all CT phases, such as the plain phase, artery phase, portal phase, and delay phase, as well as common CT manufacturers, including GE Healthcare, Philips, Siemens, and Toshiba. Fig. <ref>d shows the statistics of the number of CT scans in existing abdominal tumor segmentation challenges. Our dataset is 23 times larger than the widely used LiTS challenge dataset and substantially exceeds the existing largest abdominal tumor segmentation dataset (KiTS23). §.§ Overview of evaluated algorithms 47 teams from 292 participants joined the challenge and we received 37 successful algorithm docker container submissions during the testing phase, where four submissions failed and the remaining six teams did not submit. We analyzed three key components of the employed deep learning model among the 37 teams, including network architectures, loss functions, and optimizers. All teams used 3D networks and 60% of the teams used U-Net <cit.> as the main architecture. The combination of Dice loss and cross-entropy loss was the most popular loss function, used by 87% of the teams. Stochastic gradient descent was usually used for optimizing the U-Net while Transformer-based networks usually used Adam <cit.> and its variant (AdamW <cit.>). Best-performing algorithm. Team aladdin5 (T1 <cit.>) designed an efficient cascaded framework that localized the region of interest first followed by fine-grained segmentation for organs and lesions. The organ pseudo labels of the unlabeled organs, generated by the best-accuracy-algorithm <cit.> in FLARE 2022 <cit.> were used to enlarge the annotations in the training set. In particular, a lightweight nnU-Net <cit.> was first trained on the combination of labeled cases and unlabeled cases with pseudo labels for binary segmentation of the region of abdominal organs. After that, two individual models were trained with the cropped images with different image spacing for organ and tumor segmentation, respectively. The segmentation models were further fine-tuned with selected patches where the organ or tumor was centered. To improve inference efficiency, the prediction interpolation was implemented on GPU and multithreading was employed to pre-process images and load model checkpoints simultaneously. Second-best-performing algorithm. Team citi (T2 <cit.>) presented a partially supervised framework with nnU-Net <cit.> to leverage the partially labeled data. During partially supervised training, the images were first grouped by their annotated classes and then a batch of training images were selected from one group with a probability of its proportion. In this way, the images in a training batch should have identically labeled classes. When calculating the loss between the prediction and the ground truth, the output channels of unlabeled classes were merged to one channel by max pooling and corresponded to the background channel in the reference standard. 
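The channel-merging step described here can be written down compactly. The following NumPy sketch is only a schematic illustration of merging the predicted probabilities of classes that are unlabeled in the current batch into the background channel by a channel-wise maximum; the actual implementation operates on nnU-Net outputs inside the training loop, and the names used below are ours.

import numpy as np

def merge_unlabeled_channels(probs, labeled_classes):
    """Merge probability channels of classes that are unlabeled in this batch.

    probs           -- array of shape (C, ...) of per-class probabilities,
                       with channel 0 being the background
    labeled_classes -- indices of the classes annotated in the current batch
    Returns an array of shape (1 + len(labeled_classes), ...) whose background
    channel is the maximum over the original background and all unlabeled classes.
    """
    n_classes = probs.shape[0]
    unlabeled = [c for c in range(1, n_classes) if c not in labeled_classes]
    background = np.max(probs[[0] + unlabeled], axis=0)   # max-pool across channels
    return np.concatenate([background[None], probs[list(labeled_classes)]], axis=0)

# Toy example: 5 classes (background + 4 organs), only classes 1 and 3 labeled.
probs = np.random.dirichlet(np.ones(5), size=(8, 8)).transpose(2, 0, 1)
merged = merge_unlabeled_channels(probs, labeled_classes=[1, 3])
print(merged.shape)  # (3, 8, 8): merged background, class 1, class 3

The merged prediction can then be scored against the partial reference standard with the usual Dice-plus-cross-entropy loss.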
The pseudo labels were selected by uncertainty estimation and further cleaned by filtering out small isolated regions. Due to the lack of tumor annotations, a CutMix augmentation strategy was exploited to copy tumor regions from the cases with labeled tumors to those without labeled tumors. To preserve the context of each tumor region, the neighboring regions were cropped together with the tumor object. Third-best-performing algorithm. Team blackbean (T3 <cit.>) developed a self-training framework that exploits a large model for pseudo label generation and a small model for efficient segmentation. Specifically, a large STU-Net-L <cit.> was trained on 250 cases with complete organ annotations and generated pseudo labels for 1497 cases with tumor annotations. A fully annotated dataset with 1497 cases can be obtained by merging the tumor and pseudo organ labels. Then, another STU-Net-L was trained on the 1497 cases to generate both organ and tumor pseudo labels for the remaining cases. Finally, a small STU-Net-B was trained on the whole dataset with complete manual or pseudo labels. Inference pipeline was optimized by applying large target spacing, avoiding the cropping process, and using GPU-based interpolation for fast image resampling. Fourth-best-performing algorithm. Team hmi306 (T4 <cit.>) proposed a two-stage pipeline that first located the abdominal organs and then segmented organs and tumors respectively. In the first stage, PHTrans <cit.>, a hybrid network consisting of convolutional network and Swin Transformer <cit.>, was trained for binary organ segmentation in low-resolution images. The second stage integrated self-training and mean teacher to leverage the unlabeled and partially labeled data for organ and tumor segmentation respectively. For organ segmentation, a PHTrans model was trained on fully annotated abdominal organ data followed by generating organ pseudo labels for unlabeled data. Then, another PHTrans model was trained on both labeled and pseudo labels. For tumor segmentation, following the mean teacher approach <cit.>, a student model is supervised by the prediction of a teacher model and the reference standard. The teacher model is updated by exponential moving average of trained student models. Both teacher and student models employ a lightweight ResU-Net and a whole-volume-based input strategy for highly efficient segmentation. Fifth-best-performing algorithm. Team hanglok (T5 <cit.>) introduced a cascaded framework with a Transformer-based architecture and self-training. A lightweight binary segmentation network with partially convolutional encoder <cit.> and SegFormer <cit.> decoder was first trained to localize the ROI in low-resolution images. Then, a Transformer-based segmentation network with MetaFormer <cit.> encoder and UNETR <cit.> decoder was used to segment the 13 organs and tumor from the ROI in high-resolution images. To enhance spatial context and reduce computation cost, the conventional self-attention <cit.> was replaced by depth-wise convolution with different kernel sizes and a group-wise convolution for multi-scale aggregation. To leverage the unlabeled data, the winning algorithm in FLARE22 <cit.> was used to initialize the pseudo labels of unlabeled data. Then, self-training was adopted to refine the pseudo labels and optimize the segmentation model. §.§ Segmentation accuracy and efficiency analysis on the testing set We show the testing set segmentation accuracy and efficiency performance of the 37 teams in Fig. 
<ref>a (Supplementary Table 6). The majority of teams are concentrated on the top left of the plot, indicating that most participants aimed to develop accurate and efficient algorithms. However, we also noticed that some teams only pursue the unilateral metric. For example, T25 and T30 consumed around 1500MB GPU memory, but the segmentation accuracy is inferior to the others. T24 achieved very competitive DSC score but the inference speed is slow, costing 99 seconds for each case. In contrast, the winning team (T1) stands out with an average DSC of 92.3% and 64.9% for organ and lesion segmentation, respectively, and an inference speed of 8.6s by consuming 3561.6 MB of GPU RAM. Next, we compared the performance of top five algorithms across seven metrics (organ DSC and NSD, lesion DSC and NSD, runtime, GPU memory consumption, and final rank) in terms of the number of algorithms it outperformed, aiming to provide a comprehensive comparison of the algorithms' strengths and weaknesses across the multiple criteria (Fig. <ref>b). The radar plot reveals that the top five algorithms outperformed most others across these metrics, as evidenced by the significant overlap in the plotted areas. However, closer inspection highlights subtle variations in performance. In particular, T1-aladdin5 demonstrated the highest accuracy in lesion segmentation, while T2-citi excelled in organ segmentation. They all obtained the perfect GPU consumption metric, but T4-hmi406 achieved the fastest inference speed. Overall, T1-aladdin5 secured the best final rank with a better balance between segmentation accuracy and computational efficiency. We also analyzed the ranking stability of all the employed metrics regarding testing case sampling variability. Specifically, the bootstrap approach was used by generating 1000 bootstrap samples where each sample contained 400 randomly selected testing cases with replacement from the testing set. Then, we compute Kendall's τ values for all the metrics and Fig. <ref>c shows the corresponding distributions with violin and box plots. The Kendall's τ values for all metrics are clustered around 1.0, implying that the rankings of the algorithms are highly consistent and stable. §.§ Organ-wise performance analysis We further present a detailed analysis of the performance of top five teams in segmenting a diverse set of 13 organs (Fig. <ref>, Supplementary Table 7). Specifically, we categorized these organs into three groups based on their size, morphology, and segmentation challenges: large solid organs, small organs, and tubular organs. This classification allows us to highlight the specific difficulties and successes associated with each category, providing insights into how the algorithms perform differently across varying anatomical structures. The first group consists of large solid organs, including the liver, kidneys, spleen, pancreas, and stomach. These organs generally have well-defined structures, making them more straightforward targets for segmentation algorithms. The liver, kidneys, and spleens demonstrated high and consistent performance in all five teams, with average DSC and NSD scores above 95% in all five teams (Fig. <ref>a-c). Their box plots also show tight clusters, suggesting that these organs are easier for segmentation due to the relatively large size and distinct boundaries. 
The pancreas, despite being part of this group, exhibited greater variability with average DSC scores ranging from 88.9% to 91.3%, reflecting the anatomical complexity and variability in shape and size across patients, which presents additional challenges for accurate segmentation (Fig. <ref>d). The stomach also performs well, with mean DSC scores between 93.4% (T4) and 94.9% (T1), indicating overall good segmentation accuracy across the teams (Fig. <ref>e). The second group focuses on small organs, specifically the gallbladder and adrenal glands (Fig. <ref>f-g). These organs are characterized by their smaller size and less distinct boundaries, making them more difficult to segment. The DSC and NSD scores exhibit greater variability than large organs, ranging from 83.5% (T1-aladdin5) to 86.9% (T4-hmi306). The adrenal glands show a similar trend, with mean DSC scores from 79.0% (T3-blackbean) to 89.5% (T2-citi). These results suggest that the small size and less distinct boundaries of these organs pose substantial challenges for accurate and robust segmentation, leading to increased variability in performance across the different algorithms. The final group includes tubular organs, such as the esophagus, aorta, inferior vena cava (IVC), and duodenum, which are defined by their elongated, tube-like shapes, which pose unique challenges for segmentation. The performance across this group was more diverse, particularly for the duodenum and esophagus. Specifically, the aorta and inferior vena cava have high and consistent accuracy with mean DSC scores ranging from 90.6% (T5-hanglok) to 98.0% (T2-citi) (Fig. <ref>i-j). In contrast, the esophagus shows more variability, with mean DSC scores ranging from 87.8% (T5-hanglok) to 91.6% (T2-citi) (Fig. <ref>h). The duodenum, another challenging tubular organ, has mean DSC scores ranging from 86.1% (T5-hanglok) to 90.8% (T2-citi), indicating the difficulty in accurately segmenting this organ due to its elongated and irregular shape (Fig. <ref>k). §.§ Lesion performance analysis The challenge task approached the lesion as a semantic segmentation problem, categorizing each voxel as either part of a lesion or not. This approach aligns with organ segmentation tasks, enabling uniform evaluation across different cases. Alternatively, the task can be framed as an instance segmentation problem, which not only captures category information but also distinguishes between different lesions. This allows for the identification and analysis of multiple distinct lesions within the same image. We evaluated the lesion segmentation results of the top five teams using both semantic segmentation metrics (DSC and NSD) and instance segmentation metrics (Sensitivity, Specificity, and F1 score), where we separated the disconnected lesions as individual entities with connected components analysis. The dot-box plots show the testing set performance distribution for each team across the five metrics (Fig. <ref>a, Supplementary Table 8). All the top three teams achieved a median DSC score of over 70% where T1-aladdin5 had the highest median DSC score of 75.9 (interquartile range (IQR):49.8-86.2%). However, the median NSD scores dropped below 60% for all the teams, indicating that lesion boundaries may not be accurately delineated and small lesions could be missed. In terms of instance segmentation metrics, all the teams achieved high precision but low recall. This imbalance indicates that the algorithms are conservative in their segmentations. 
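To make the instance-level lesion evaluation concrete, the following is a minimal sketch of how detection-style metrics can be derived from binary semantic masks by separating disconnected lesions with connected-component analysis. The overlap threshold, the function names, and the use of scipy.ndimage are illustrative assumptions of this sketch and may differ from the challenge's exact matching rule.

    import numpy as np
    from scipy import ndimage

    def lesion_instance_metrics(pred_mask, gt_mask, min_iou=0.1):
        # Disconnected lesions are separated into individual instances with
        # connected-component analysis; a predicted instance counts as a true
        # positive if it overlaps some ground-truth lesion with IoU at least
        # min_iou (the 0.1 threshold is an assumption of this sketch).
        pred_lab, n_pred = ndimage.label(pred_mask > 0)
        gt_lab, n_gt = ndimage.label(gt_mask > 0)

        matched_gt, tp = set(), 0
        for p in range(1, n_pred + 1):
            pred_inst = pred_lab == p
            hit = False
            for g in range(1, n_gt + 1):
                gt_inst = gt_lab == g
                inter = np.logical_and(pred_inst, gt_inst).sum()
                union = np.logical_or(pred_inst, gt_inst).sum()
                if union > 0 and inter / union >= min_iou:
                    hit = True
                    matched_gt.add(g)
            if hit:
                tp += 1

        precision = tp / n_pred if n_pred else 0.0
        recall = len(matched_gt) / n_gt if n_gt else 0.0  # a.k.a. sensitivity
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return precision, recall, f1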
In particular, T1-aladdin5 and T4-hmi306 obtained the best specificity with median scores of 50.0% (IQR: 25.0-100.0%) and 50.0% (IQR: 0.0-100.0%), respectively. However, T4-hmi306 had the lowest sensitivity with a median score of 20.0% (IQR: 0.0-50.0%), indicating that many lesions were missed in the segmentation results. The F1 scores follow a similar trend to the DSC scores with T1-aladdin5 achieving the highest median score of 40.0% (IQR: 22.2-66.7%). Overall, the F1 scores across teams show a broad range, reflecting inconsistencies in balancing lesion detection precision and recall. We also employed the majority vote approach to generate the ensemble results of the top three and top five algorithms, respectively, and computed the panoptic quality metric to understand both segmentation and detection quality. As shown in Fig. <ref>b (Supplementary Table 9), the ensemble models generally show comparable or slightly improved panoptic quality compared to the individual teams, with Ensemble-3 slightly outperforming the others in terms of median score. However, all teams and ensembles exhibit a wide range of panoptic quality scores, indicating variability in performance across different cases. Next, we analyzed the relationship between lesion volume and segmentation accuracy (DSC) of the winning algorithm T1-aladdin5 (Fig <ref>c). There is a clear trend where larger lesion volumes tend to correspond with higher DSC scores, suggesting that the algorithm performs better as the lesion volume increases. For smaller lesions, particularly those with volumes below approximately 100, the DSC values are widely dispersed, ranging from near 0 to around 80% or higher. This indicates that the segmentation algorithm struggles more with smaller lesions, leading to less consistent and generally lower DSC scores. Tumor volume is an important image biomarker and we compared the predicted lesion volume to the true lesion volume (Fig. <ref>d). The result reveals that while the model generally performs well in predicting lesion volumes, with a strong overall correlation between true and predicted volumes, it exhibits variability in accuracy, particularly at the extremes of the volume range. Smaller lesions tend to be underestimated, while larger lesions are occasionally overestimated, as indicated by the spread of data points around the diagonal line in the scatter plot. Finally, we present a visualization of typical examples (Fig <ref>e) that contains two successful cases (the 1st and 2nd columns) and three failure examples (the 3rd to 5th columns). These examples demonstrate that the algorithm is capable of accurately identifying and segmenting both large and small lesions when their appearances and boundaries are well-defined. However, the algorithm often struggles with small, heterogeneous lesions, as evidenced by its complete failure to detect the pancreas and colorectal lesions leading to poor performance. § DISCUSSION AI has revolutionized medical image segmentation tasks, but most algorithms rely on a large number of human expert annotations, which are extremely hard and expensive to collect. Moreover, the performance of existing algorithms is mainly evaluated based on accuracy-related metrics on limited cohorts while the generalization ability, running efficiency, and resource consumption are overlooked. These barriers hinder the wider adoption of AI algorithms in clinical practice. The main goal of this study was to address these critical issues. 
In particular, we created the largest abdomen organ and pan-cancer CT dataset with a well-defined segmentation task to benchmark algorithms. Furthermore, we organized an international competition to gather community efforts to facilitate algorithm development with partially unlabeled data. All the algorithms were validated in real-world settings by blinded evaluation of their generalization ability, efficiency, and resource consumption on intercontinental and multinational cohorts. The final results of the FLARE 2023 challenge reveal four main findings. First, despite the increased difficulty posed by the inclusion of lesion segmentation, U-Net-based algorithms <cit.>, as demonstrated by the top-performing teams, continued to achieve the highest lesion segmentation accuracy without compromising organ segmentation performance. This outcome underscores the robustness of the U-Net model, confirming its capability to effectively handle both general abdominal organ and lesion segmentation tasks. Second, we have identified some useful strategies to enhance segmentation accuracy. For example, the cascaded segmentation framework, beginning with region of interest (ROI) extraction followed by fine-grained segmentation, allows the model to focus on the segmentation targets and extract detailed information for better accuracy. Additionally, the use of a weighted compound loss function, which combines Dice loss and focal loss, proved beneficial in addressing class imbalance, particularly improving the segmentation of smaller organs and tumors. Further accuracy improvements were achieved through strategic training methods, including selective patch sampling, fine-tuning with optimized learning rates and batch sizes, and model ensembling to leverage the strengths of different models. A post-processing step, which retains the largest connected component, was also recommended to boost organ segmentation accuracy. Third, top-performing teams also implemented several strategies to boost segmentation efficiency. These included utilizing GPU-based operations for faster probability interpolation and label generation, replacing slower CPU-based preprocessing with GPU processing, and adopting multi-threading to minimize model initialization times. Together, these strategies contributed to the development of robust and efficient segmentation models, well-suited to the low-resource hardwares of clinical applications. Fourth, the utility of unlabeled data in improving organ segmentation accuracy was demonstrated across all top teams. However, its impact on lesion segmentation was less pronounced, likely due to the inaccuracy of lesion pseudo labels, which could have introduced negative effects during iterative training. While the top algorithm surpassed existing state-of-the-art models <cit.>, none of the teams or ensemble approaches achieved high segmentation quality for lesion segmentation, highlighting areas for further improvement in both segmentation accuracy and detection robustness. This work has two primary limitations. First, the dataset contained a relatively small number of lesion annotations. Second, the challenge focused exclusively on abdominal lesions, whereas other cancer types, such as lung cancer, are also critical in clinical practice. Future work could address these limitations by expanding the dataset to add synthetic lesion images <cit.> and other publicly available abdomen CT datasets <cit.>, as well as cancer types <cit.>. 
Extending the challenge to include multimodal data is another promising direction, given that text data has shown potential in enhancing lesion detection and segmentation accuracy <cit.>. In summary, the FLARE 2023 challenge presents the first and largest benchmark for organ and pan-cancer segmentation in abdomen CT. The winning algorithm set a new state of the art with its cascaded framework and efficient network and inference pipeline designs. The top algorithms have achieved high accuracy for most of the organs, but small organs and lesions remain unsolved problems. All the data and code have been made publicly available for further algorithm development.

§ METHODS

§.§ Challenge schedule
The FLARE 2023 challenge was preregistered <cit.> and the proposal passed peer review at the 25th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI 2023). We launched the challenge on April 1st 2023 on the CodaLab platform <cit.>. During the development phase, each team could submit up to three tuning set segmentation results every day to the online platform and get the segmentation accuracy scores. Moreover, each team also had five chances to submit docker containers to the challenge organizers and obtain segmentation efficiency scores. During the testing phase, participants were required to submit the final algorithm docker by August 25th 2023. We manually evaluated all the submitted dockers on the hidden testing set and announced the results on October 8th 2023 at MICCAI.

§.§ Data standardization and annotation protocol
The data standardization followed the common practice in other 3D medical image segmentation challenges <cit.> and the past FLARE challenges <cit.>. We curated the CT scans from public datasets based on the license permission. Detailed information on these datasets is presented in Supplementary Tables 1-4. All CT scans were converted to the standard NIfTI format (<https://nifti.nimh.nih.gov/>) and preserved the original CT HU values. The orientation was standardized as canonical 'RAS', which means that the first, second, and third voxel axes go from left to Right, posterior to Anterior, and inferior to Superior, respectively. The organ annotation protocol remained the same as for the FLARE 2022 challenge <cit.>, which adhered to the Radiation Therapy Oncology Group (RTOG) consensus panel guideline <cit.> and Netter's anatomical atlas <cit.>. In the training set, the lesion annotations in the source datasets were directly used. In the tuning and testing sets, all visible lesions were annotated by a senior radiologist with the assistance of ITK-SNAP <cit.> and MedSAM <cit.>.

§.§ Evaluation protocol
All algorithms were sequentially run on the same GPU desktop workstation for a fair evaluation. The workstation was an Ubuntu 20.04 desktop with one central processing unit (CPU, Intel Xeon(R) W-2133, 3.60GHz × 12 cores), one graphics processing unit (GPU, NVIDIA QUADRO RTX5000, 16G), 32G of memory, and 500G of hard disk drive storage. We used two groups of metrics to evaluate segmentation accuracy and efficiency. Following the recommendations in Metrics Reloaded <cit.>, the segmentation accuracy metrics contained the Dice Similarity Coefficient (DSC) and Normalized Surface Distance (NSD), measuring the region and boundary overlap between the segmentation masks and the reference standards. The segmentation efficiency metrics included runtime and the Area Under the Curve of GPU memory-time (AUC GPU), where the GPU memory consumption was recorded every 0.1s.
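As an illustration of the efficiency evaluation described above, the sketch below records the runtime and the GPU memory samples while a single case is processed, and reduces the samples to an area-under-the-curve value. Polling nvidia-smi from a background thread and all function names are assumptions made for this sketch; this is not the challenge's actual monitoring code.

    import subprocess, threading, time
    import numpy as np

    def gpu_memory_used_mb(gpu_id=0):
        # One possible way to sample the current GPU memory usage.
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used",
             "--format=csv,noheader,nounits", f"--id={gpu_id}"])
        return float(out.decode().strip().splitlines()[0])

    def monitor_inference(run_case, interval=0.1):
        # Records runtime and GPU memory samples (every `interval` seconds)
        # while run_case() performs one inference call.
        samples, stop = [], threading.Event()

        def sampler():
            while not stop.is_set():
                samples.append(gpu_memory_used_mb())
                time.sleep(interval)

        thread = threading.Thread(target=sampler)
        start = time.time()
        thread.start()
        run_case()
        runtime = time.time() - start
        stop.set()
        thread.join()

        auc_gpu = float(np.sum(samples) * interval)  # MB x seconds
        return runtime, auc_gpu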
In addition, we also analyzed commonly used instance segmentation metrics for lesion segmentation, including precision, recall, F1 score, and panoptic quality.

§.§ Ranking scheme
The final rank was computed with both segmentation accuracy and efficiency metrics. We gave runtime and GPU memory consumption a tolerance of 15 s and 4 GB, respectively, because such budgets are acceptable in clinical practice. The employed metrics cannot be directly merged because of the dimension difference. Thus, we used rank-then-aggregation to obtain the final rank. Specifically, the ranking scheme had three steps:

* Step 1. Compute the six metrics for each case in the testing set (N=400), including two organ-wise metrics: average DSC and NSD scores for 13 abdominal organs; two lesion-wise metrics: DSC and NSD scores; two efficiency metrics: runtime and area under the GPU memory-time curve.
* Step 2. Rank algorithms for each of the 400 testing cases and each metric. Each algorithm has 2400 (400 × 6) rankings.
* Step 3. Compute the final rank for each algorithm by averaging all the rankings.

§.§ Ranking stability and statistical analysis
We applied the bootstrapping approach and computed Kendall's τ <cit.> to quantify the variability of the ranking scheme. Specifically, we first extracted 1000 bootstrap samples from the international validation set and computed the ranks again for each bootstrap sample. Then, the ranking agreement was quantified by Kendall's τ, which computes the number of pairwise concordances and discordances between ranking lists. Its value ranges over [-1, 1], where -1 and 1 denote inverted and identical order, respectively. A stable ranking scheme should have a high Kendall's τ value that is close to 1. The Wilcoxon signed rank test was used to compare the performance of different algorithms. Results were considered statistically significant if the p-value was less than 0.05. The following packages were used in the analysis: ChallengeR <cit.>, Python 3 <cit.>, Numpy <cit.>, Pandas <cit.>, Scipy <cit.>, PyTorch <cit.>, and matplotlib <cit.>.

§.§ Data availability
All the datasets are publicly available on the challenge website <https://codalab.lisn.upsaclay.fr/competitions/12239>.

§.§ Code availability
The code, method descriptions, and docker containers of the top ten teams are available at <https://codalab.lisn.upsaclay.fr/competitions/12239#learn_the_details-awards>.
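A compact sketch of the rank-then-aggregate scheme and the bootstrap stability check described in the Methods is given below. The array layout, the naive tie handling, and the orientation convention (every metric transformed so that larger is better) are simplifying assumptions of this sketch.

    import numpy as np
    from scipy.stats import kendalltau

    def rank_then_aggregate(scores):
        # scores: (n_algorithms, n_cases, n_metrics), larger is better for
        # every metric (runtime and GPU AUC would be negated beforehand).
        n_alg, n_cases, n_metrics = scores.shape
        per_case_ranks = np.zeros(scores.shape)
        for c in range(n_cases):
            for m in range(n_metrics):
                order = np.argsort(-scores[:, c, m])        # best algorithm first
                per_case_ranks[order, c, m] = np.arange(1, n_alg + 1)
        mean_rank = per_case_ranks.reshape(n_alg, -1).mean(axis=1)
        return np.argsort(np.argsort(mean_rank)) + 1         # final rank, 1 = best

    def ranking_stability(scores, n_boot=1000, rng=None):
        # Kendall's tau between the full-testing-set ranking and rankings
        # recomputed on bootstrap samples of the cases (drawn with replacement).
        if rng is None:
            rng = np.random.default_rng(0)
        base = rank_then_aggregate(scores)
        taus = []
        for _ in range(n_boot):
            idx = rng.integers(0, scores.shape[1], size=scores.shape[1])
            taus.append(kendalltau(base, rank_then_aggregate(scores[:, idx, :]))[0])
        return np.array(taus)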
http://arxiv.org/abs/2408.11637v1
20240821140622
Private Counting of Distinct Elements in the Turnstile Model and Extensions
[ "Monika Henzinger", "A. R. Sricharan", "Teresa Anna Steiner" ]
cs.DS
[ "cs.DS", "cs.CR" ]
Private Counting of Distinct Elements in the Turnstile Model and Extensions

Monika Henzinger (Institute of Science and Technology, Klosterneuburg, Austria), A. R. Sricharan (Faculty of Computer Science, Doctoral School Computer Science, University of Vienna, Austria), and Teresa Anna Steiner (Technical University Denmark)

§ ABSTRACT
Privately counting distinct elements in a stream is a fundamental data analysis problem with many applications in machine learning. In the turnstile model, Jain et al. [NeurIPS2023] initiated the study of this problem parameterized by the maximum flippancy of any element, i.e., the number of times that the count of an element changes from 0 to above 0 or vice versa. They give an item-level (ϵ,δ)-differentially private algorithm whose additive error is tight with respect to that parameterization. In this work, we show that a very simple algorithm based on the sparse vector technique achieves a tight additive error for item-level (ϵ,δ)-differential privacy and item-level ϵ-differential privacy with regard to a different parameterization, namely the sum of all flippancies. Our second result is a bound which shows that for a large class of algorithms, including all existing differentially private algorithms for this problem, the lower bound from item-level differential privacy extends to event-level differential privacy. This partially answers an open question by Jain et al. [NeurIPS2023].

§ INTRODUCTION
Counting distinct elements in a stream is a fundamental data analysis problem that is widely studied <cit.> and has many applications <cit.>, including network analysis <cit.> and detection of denial of service attacks <cit.>. If the data includes sensitive information, the essential challenge is to give accurate answers while providing privacy guarantees to the data owners. Differential privacy is the de facto standard in private data analysis and is widely employed both in research and in industry. In the insertions-only model, the problem of counting distinct elements while preserving differential privacy is well studied <cit.>. Recent work by Jain, Kalemaj, Raskhodnikova, Sivakumar, and Smith <cit.> (which was concurrent with an earlier version of the results presented in this paper, see <cit.>) initiated the study of this problem in the more general turnstile model. They give an algorithm which is item-level, (ϵ,δ)-differentially private and analyze the additive error parameterized in the maximum flippancy of any element, i.e., the number of times that the count of an element changes from 0 to above 0 or vice versa. They also give lower bounds which show that the additive error of the algorithm is tight for item-level differential privacy (up to log factors) with respect to their parameterization. There is still a gap for event-level differential privacy, which is posed as an open question. Their algorithm is based on several instantiations of the binary tree mechanism. In this paper we show that a simple algorithm based on the sparse vector technique achieves a tight additive error (up to log factors) for item-level (ϵ,δ)-differential privacy and item-level ϵ-differential privacy, with regard to a different parameterization, namely the total flippancy, i.e., the sum of the flippancies of all elements. The additive error depends polynomially on the total flippancy, with a smaller exponent than the exponent of the maximum flippancy in the additive error in <cit.>.
Thus, if there are few elements in total, or few elements which change their count from 0 to above 0 or vice versa, then our algorithm achieves a better additive error. Additionally, we give a reduction which shows that for a large class of algorithms, including all existing differentially private algorithms for this problem, the lower bound from item-level differential privacy extends to event-level differential privacy. This is a step towards answering the open question posed in <cit.>.

§.§ Problem Definition
More formally, we assume there are d different items, and our goal is to maintain a multiset of them and to determine at each time step how many of them currently appear at least once in the multiset, i.e., the number of distinct elements in the multiset. The update operations are modeled as follows: The input at every time step is a d-dimensional vector x^t ∈{-1, 0, 1}^d, such that x^t_i=1 if element i gets inserted at time t, x^t_i=-1 if element i gets deleted at time t, and x^t_i=0 otherwise. Note that this means that we allow multiple non-zero entries in x^t, corresponding to multiple updates at every time step. However, the lower bound also extends to the case where we assume that at most one element may be inserted or deleted at any time step, i.e., ||x^t||_1≤ 1, which we call singleton update streams. At every time step t, we want to output the number of distinct elements in the multiset. By our definition of the input stream, an element i is present at time t if and only if ∑_t'≤ tx_i^t'>0. Let x^1,x^2,…, x^T be an input stream with x^t∈{-1,0,1}^d for all t∈[1,T]. We define (x)^t=∑_i=1^d (∑_t'≤ t x_i^t'>0), where (E) is the indicator function that is 1 if E is true and 0 otherwise. Then, the problem is to output (x)^t at all time steps t. The error of an algorithm is defined to be its maximum additive error over all time steps. In this paper, we consider two privacy notions: event-level differential privacy and item-level differential privacy. They differ in their definition of neighboring input streams. Two input streams x and y are event-level neighboring if there exists a time step t^* and an item i^*∈ [1,d] such that x_i^t=y_i^t for all (i,t)≠ (i^*,t^*). That is, two event-level neighboring streams may differ in at most one item in at most one update operation. Two input streams x and y are item-level neighboring if there exists an item i^*∈ [1,d] such that x_i^t=y_i^t for all t and for all i∈[1,d] ∖{i^*}. That is, two item-level neighboring streams may differ in all update operations related to one item. Finally, we consider two models regarding the input stream. In the general model, the count of any item i at any time step t is given by ∑_t'≤ t x_i^t', which can be any integer in [-t,t], and we only care about whether ∑_t'≤ t x_i^t' is larger than zero or not. In the “likes”-model[The name was chosen as it models the count of “likes” on a social media website, as motivated by <cit.>.], for every item i at any time step t, it must hold that ∑_t'≤ t x_i^t'∈{0,1}, i.e., the multiset is a set. Said differently, an item can only be inserted if it is absent from the set and it can only be deleted when it is present.

§.§ Summary of Results
In this paper, we give new upper and lower bounds for item-level differential privacy, parameterized in the total flippancy K, which is defined as the total number of times any item switches from a non-zero count to a zero count, or vice versa. In detail, let f^t(x_i)=(∑_t'≤ t x_i^t'>0). The total flippancy is formally defined as K=∑_i=1^d ∑_t=2^T(f^t(x_i)≠ f^t-1(x_i)).
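To make these definitions concrete, the following non-private reference computation returns the sequence of distinct-element counts and the total flippancy K of a stream; the function name and the use of numpy are purely illustrative.

    import numpy as np

    def distinct_counts_and_flippancy(stream):
        # stream: list of update vectors x^1, ..., x^T with entries in {-1, 0, 1}.
        # Returns (list of distinct-element counts for t = 1..T, K), where K
        # counts the pairs (i, t) with t >= 2 and f^t(x_i) != f^{t-1}(x_i).
        prefix = np.array(stream[0], dtype=np.int64)
        prev_f = prefix > 0                      # f^1(x_i) for every item i
        counts, K = [int(prev_f.sum())], 0
        for x_t in stream[1:]:
            prefix += np.asarray(x_t, dtype=np.int64)
            f = prefix > 0
            K += int(np.sum(f != prev_f))        # items whose count crossed zero
            counts.append(int(f.sum()))
            prev_f = f
        return counts, K

For example, on the stream x^1=(1,1), x^2=(0,-1), x^3=(0,1) this returns the counts [2, 1, 2] and K=2.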
Note that in the “likes”-model, the total flippancy is equal to the total number of updates. As (x)^t=∑_i=1^d f^t(x_i), it follows that K is an upper bound on the number of changes in (x) over time.

*Upper Bounds As our first main result, we give algorithms solving the problem while providing item-level differential privacy, which work in the general model (thus also in the “likes”-model). In the following, we state the exact bounds for given K. If K is not given to the algorithm, the error bounds worsen by at most a ln^2 K factor. Let d be a non-zero integer, β>0, K be a known upper bound on the total flippancy, and let T be a known upper bound on the number of time steps. Then there exists
* an item-level ϵ-differentially private algorithm for the problem in the general model with additive error O(min(d,K,√(ϵ^-1Kln (T/β)),ϵ^-1Tlog(T/β))) with probability at least 1-β at all time steps simultaneously, for any ϵ>0 and β∈ (0,1);
* an item-level (ϵ,δ)-differentially private algorithm for the problem in the general model with additive error O(min(d,K,(ϵ^-2Kln(1/δ)ln^2(T/β))^1/3, ϵ^-1√(Tln(1/δ)log(T/β)))) with probability at least 1-β at all time steps simultaneously, for any δ∈(0,1), ϵ∈ (0,1), and β∈ (0,1).
As our lower bounds (discussed below) show, our bounds for ϵ-differential privacy are tight if K is known and K≤ T. If K>T, we incur at most an extra ln T factor, and if K is not known, we incur extra ln K factors (see <ref>). For (ϵ,δ)-differential privacy, the upper bounds are tight up to ln T, ln K and ln(1/δ) factors.

*Lower bounds We complement our upper bounds by almost tight lower bounds on the additive error which hold for any item-level differentially private algorithm in the “likes”-model. As this is the “more restricted” of the two models, the lower bounds also carry over to the general model. For ϵ-differential privacy, our lower bound follows from a packing argument. For any L≤ T, there exists an input stream x of d-dimensional vectors from {-1,0,1}^d, which is valid in the “likes”-model, with length T and flippancy K=Θ(L), such that any item-level, ϵ-differentially private algorithm for the problem must with constant probability have an error at least Ω(min(d,K,√(ϵ^-1Kmax(ln(T/K),1)))). The lower bound above also holds for singleton updates. When multiple updates are allowed, then K could potentially be larger than T. In that case, Theorem <ref> in <ref> shows that for any T≤ L≤ dT, there exists a stream with flippancy K=Ω(T), K=O(L), such that any item-level, ϵ-differentially private algorithm for the problem must have error at least Ω(min(d,ϵ^-1T,√(ϵ^-1Lmax(ln(T/L),1)))) with constant probability. Since K = O(L), this gives a lower bound of Ω(min(d,ϵ^-1T,√(ϵ^-1Kmax(ln(T/K),1)))). For (ϵ,δ)-differential privacy, we can use a strategy similar to that of <cit.> to get the following bounds: Let ϵ,δ∈(0,1]. Let K, T be sufficiently large parameters. There exists a dimension d ∈ℕ and an input stream x of d-dimensional vectors from {-1,0,1}^d of length T with flippancy at most K which is valid in the “likes”-model, such that any item-level, (ϵ,δ)-differentially private algorithm for the problem must have error at least Ω(ϵ^-1·min(√(T)/log T,(Kϵ)^1/3/log (Kϵ))) with constant probability. Note that this lower bound holds for the case where multiple insertions are allowed in every time step. In Theorem <ref> we also give a lower bound of Ω(K^1/3/ϵlog K) for singleton updates.
*Time and Space Complexity Our main algorithm (achieving the O(√(ϵ^-1 K ln T)) error bound for ϵ-differential privacy and O((Kln(1/δ)ln^2 T)^1/3/ϵ^2/3) error bound for (ϵ,δ)-differential privacy) can be implemented using constant time per update, assuming that drawing from a Laplace distribution takes constant time. Specifically, the total running time is O(#updates + S_K t_), where t_ is the time to draw one Laplace random variable, S_K=O(√(Kϵ/ln T)+1) for ϵ-dp, and S_K=O((Kϵ/√(ln(1/δ))ln(T/β))^2/3) for (, δ)-dp. The algorithm uses O(d) words of space. The only information our algorithm needs to store are the true counts for each item, plus a constant number of words of extra information. This holds even for the case where K is unknown, since we sequentially run our known K algorithm with increasing guesses for K. *Comparison to the recent work by Jain, Kalemaj, Raskhodnikova, Sivakumar, and Smith <cit.>. In recent work, <cit.> considered the problem with a similar, but with a different parameterization. In <cit.>, they parameterize the additive error in the maximum flippancy, i.e., they parameterize on w_x=max_i∈[d](∑_t=2^T (f^t(x_i)≠ f^t-1(x_i)). Recall that K denotes the total flippancy of a stream x and note that w_x≤ K≤ d· w_x. <cit.> consider only streams with singleton updates and give algorithms for item-level, (ϵ,δ)-differential privacy in the , with an error bound of Õ(min((√(w_x)log T + log^3 T)·√(log(1/δ))/ϵ,.. . . (Tlog(1/δ))^1/3/ϵ^2/3,T))[For simplicity of exposition, we use Õ(X) to denote O(X·polylog(X))]. In comparison, our bounds in this setting are Õ(min((Kln(1/δ)ln^2 T)^1/3/ϵ^2/3),K). Note that K≤ T for singleton updates, and thus, our upper bounds recover their second and third bound up to a ln^2/3T factor. Furthermore, ignoring polynomial factors in log T, log(1/δ) and ϵ^-1, their bound is O(√(w_x)) while ours is O(K^1/3). Thus, if (roughly) K < w_x^3/2, our algorithm outperforms theirs. Specifically, if d≤√(w_x) or if there are only few items with high flippancy, we expect our algorithm to do better. In cases where the flippancy is well-distributed, i.e., many items have a similar flippancy, and d≥√(w_x), we expect the algorithm in <cit.> to perform better. In terms of space and time complexity, their algorithm, like ours, needs to maintain a count for each element. Thus, the space in terms of words is Ω(d). On top of that, they run a variant of the binary tree mechanism, which depending on the implementation, uses Ω(log T) space. In their final solution, they actually run log T copies of the binary tree mechanism in parallel, bringing their space consumption to O(d+log^2 T) words. Thus, the space of our algorithm is an additive log ^2 T term better, which can be crucial for large streams. In terms of time complexity, each of the binary tree mechanism needs to draw Ω(Tlog T) independent Laplace noises, thus their time complexity is at least Ω(Tlog^2 T t_), where t_ is the time it takes to draw a Laplace noise. Also here, our algorithm is more efficient. In terms of lower bounds, for item-level, ϵ-differential privacy in the , <cit.> give a lower bound of Ω(min(ϵ^-1w,√(ϵ^-1T), T)) for streams of maximum flippancy at most w. For (ϵ,δ)-differential privacy, they give a lower bound of Ω̃(min(ϵ^-1√(w),ϵ^-2/3T^1/3,T)) for item-level privacy in the , and a lower bound of Ω̃(min(ϵ^-1√(w),ϵ^-3/4T^1/4,T)) for event-level privacy in the , for streams of maximum flippancy at most w. 
Their upper bounds in the item-level setting match their lower bounds up to factors polynomial in log T and log(1/δ). For event-level in the , there is a gap for √(T)≤ w≤ T^2/3, and closing this gap was posed as an explicit open question in <cit.>.[Note that this gap only exists if at most one update per time step is allowed - if many (e.g. up to d many) updates are allowed in each time step, then the lower bound proof for event-level privacy in the general model from <cit.> can be used to show a lower bound of Ω(√(T)), for constant ϵ and δ.] As our second main result, we make a step towards closing this gap, which we explain below. *Reduction from item-level, to output-dependent event-level, All the upper bounds mentioned so far hold for item-level differential privacy. As our upper bounds hold in the and our lower bounds hold in the , we can conclude that for item-level privacy, the and the are roughly equally hard. <cit.> arrived at this conclusion as well, albeit with a different parameterization. However, for event-level differential privacy, the picture is different: for the “likes”-model, a very simple algorithm gives an error of O(ϵ^-1polylog(T)) with constant probability. To see this, define the difference sequence for the problem as diff^t(x)= (x)^t-(x)^t-1 for t>1. As can be easily seen, (diff^t(x))_t>1 and (diff^t(y))_t>1 differ by at most 1 in at most one time step t for any event-level neighboring streams x and y in the . Thus, applying a standard continual counting algorithm gives the claimed error, as shown for “well-behaved” difference sequences in general in <cit.>. For event-level differential privacy and the however, the best known algorithms are the algorithms for item-level differential privacy in this paper and <cit.>. <cit.> also present lower bounds for event-level differential privacy in the which, however, leave a gap for certain parameter settings. Closing that gap was explicitly posed as an open question in <cit.>. We make a step towards closing that gap, by noting that all existing differentially private algorithms for the problem in any model share the following property: If (x)=(y) for any two input streams x and y, then the output distributions of the algorithms are equal. That is, any two streams which produce the same true output, will have the same output distributions. We call such algorithms output-determined. We show that if we only consider output-determined algorithms for , then achieving event-level differential privacy in the is just as hard as item-level differential privacy for the . Thus our above lower bounds also apply to such algorithms. In particular, this shows that if one were trying to close the gap for event-level differential privacy in the , one needs to find an algorithm which does not only depend on the true answers to . Let ϵ > 0 and δ≥ 0. Let _1 be an event-level, (ϵ,δ)-differentially private, algorithm for that works in the and has error at most α for streams of length T+1 with probability 1-β. Then there exists an item-level, (2ϵ,(1+e^ϵ)δ)-differentially private algorithm _2 for that works in the , and has error at most α for streams of length T with probability 1-β. *Generalizations & Applications While our algorithms are (nearly) tight for the problem, they are not tailored specifically to the problem and work in a more general setting as well. In particular, recall that (x)^t=∑_i=1^d f^t(x_i), where f^t(x_i)=(∑_t'≤ t x_i^t'>0). Now consider any real-valued function Q on input streams x_1,x_2,…, with x_i ∈{-1, 0, 1}. 
We use Q^t(x) to denote Q(x_1,…, x_t). Our algorithm works for any such function Q such that the following two conditions are fulfilled: (1) for any x and y which are neighboring, we have |Q^t(x)-Q^t(y)|≤ 1 for all time steps t, and (2) ∑_t=1^T|Q^t(x)-Q^t-1(x)|≤ K. Let Q be a function satisfying properties (1) and (2). Then there exists
* an item-level ϵ-differentially private algorithm for computing Q with additive error O(min(K, √(ϵ^-1Kln(T/β)), ϵ^-1Tlog(T/β))) at all time steps with probability at least 1-β, for any ϵ>0;
* an item-level (ϵ,δ)-differentially private algorithm for computing Q with additive error O(min(K,(ϵ^-2Kln(1/δ)ln^2(T/β))^1/3,ϵ^-1√(Tln(1/δ)log(T/β)))) at all time steps with probability at least 1-β, for any ϵ>0, δ∈ (0,1).
The extension to unknown K also holds, with extra ln K factors as earlier. Thus, for a continuous function Q which has maximum sensitivity 1 over all time steps, we get a bound parameterized in the sum of all differences, i.e., the L_1-norm of the difference sequence. While our results hold in the turnstile model and the additive error is parametrized by the total flippancy, <cit.> gave an ϵ-differentially private mechanism with additive error O(Γlog^3/2log(T/β)) in the insertions-only or deletions-only setting, where Γ is the continuous global sensitivity, i.e., the L_1-norm of the difference sequence of two neighboring inputs. We can apply our algorithm to the problem considered in Fichtenberger et al. <cit.> of continuously counting high-degree nodes under differential privacy, i.e., counting the number of nodes with degree at least τ, where τ is given and public. For user-level, edge-differential privacy (i.e., neighboring streams may differ in all updates of the same edge), they give a lower bound of Ω(n). Our algorithm gives new parameterized bounds for this problem: In particular, choosing Q^t(x) = (number of high-degree nodes)/2, Theorem <ref> gives an error bound of roughly O(√(K)) under ϵ-differential privacy, and roughly O(K^1/3) under (ϵ,δ)-differential privacy, where we ignore an additive term of O(ϵ^-1ln T ln(1/δ)ln K). Note that K can be as large as T, but for many applications it could be much smaller: for example, in social networks, it has been shown that the degree distribution follows a power law, which implies that the set of high-degree nodes only changes infrequently. K does not need to be given to the algorithm.

§.§ Algorithm Overview
The main idea of our algorithm is to use the sparse vector technique first introduced by Dwork, Naor, Reingold, Rothblum, and Vadhan <cit.> (the form we use it in can be found in Dwork and Roth <cit.>) on carefully chosen queries and with carefully chosen thresholds. The sparse vector technique can be described as follows: It is given a data set x, a sequence of q queries, a threshold Thresh, and a stopping parameter S. It processes these queries sequentially, and for each of them answers “yes” or “no” depending on whether or not q(x) is approximately (up to an additive error α) above the threshold. It stops after it has answered “yes” S times. Dwork and Roth <cit.> show that it is possible to design an ϵ-differentially private algorithm achieving the above with α=O(ϵ^-1Slog (q/β)) with probability 1-β, and an (ϵ,δ)-differentially private algorithm with α=O(ϵ^-1√(Slog(1/δ))log (q/β)) with probability 1-β. In the following discussion, we ignore ϵ^-1, log(1/δ), log q and log(1/β) factors.
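For concreteness, here is a minimal sketch of the sparse vector technique in the form described above for sensitivity-1 queries: it reports whether each query answer is approximately above the threshold and halts after S positive answers. The noise scales follow one standard ϵ-differentially private parameterization of the mechanism (threshold noise Lap(2S/ϵ), per-query noise Lap(4S/ϵ), threshold noise refreshed after every positive answer); they are stated for illustration only and are not the exact constants used later in this paper.

    import numpy as np

    def sparse_vector(queries, data, thresh, S, eps, rng=None):
        # queries: iterable of sensitivity-1 functions q; answers "yes"/"no"
        # to "is q(data) approximately above thresh?", stopping after S "yes".
        if rng is None:
            rng = np.random.default_rng()
        answers, count = [], 0
        tau = rng.laplace(scale=2.0 * S / eps)          # noisy threshold shift
        for q in queries:
            mu = rng.laplace(scale=4.0 * S / eps)       # fresh noise per query
            if q(data) + mu >= thresh + tau:
                answers.append(True)
                count += 1
                if count >= S:
                    break                               # budget for "yes" answers used up
                tau = rng.laplace(scale=2.0 * S / eps)  # refresh threshold noise
            else:
                answers.append(False)
        return answers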
Our main idea is to note that the total flippancy K can be seen as an upper bound on the total change in the output, i.e., the sum of the absolute differences in the output in every time step. Our strategy is as follows: We start by estimating the number of distinct elements at the beginning of the stream. Then, we keep reporting this estimate until a significant change occurs in the true number of distinct elements. We track whether such a change has occured using the sparse vector technique. Once there has been a significant change, i.e., once the sparse vector technique answers “yes”, we update the output. The goal now is to balance the additive error of the sparse vector technique with the error accumulated between updates. The error between updates is roughly Thresh; the error of the sparse vector technique is α; and the total change of the output is bounded by K. To balance the two we set Thresh =Θ(α). Furthermore we have to choose S in a way that makes sure that the sparse vector technique does not abort before we have seen the entire stream. We can show that every time our sparse vector technique answers “yes”, the change in output has been roughly Thresh. Thus it is enough to set S>K/Thresh. As mentioned above, for ϵ-differential privacy α (and, thus, Thresh) must depend linearly on S, which implies that S must be chosen to be Θ(√(K)), giving an additive error of O(√(K)). For (ϵ,δ)-differential privacy, we have Thresh=Θ(α)=O(√(S)). This implies that S^3/2 must be Θ(K), i.e., S=Θ(K^2/3). Thus the additive error is O(K^1/3). Note that this requires that K is known at the beginning of the algorithm. If K is unknown, we run the above algorithm for exponentially increasing guesses of K (K=2,4,8, etc.). In particular, we run the algorithm for a guess of K, and if it terminates preemptively, we double our guess and repeat. Since we do not know beforehand how many instances are needed, in order to make sure the resulting algorithm is still ϵ-differentially private, we run the jth instance with privacy parameter ϵ_j=O(ϵ/j^2), such that ∑_j=1^∞ϵ_j≤ϵ. At the end of the algorithm, j=Θ(ln K), therefore we incur an extra ln^2 K factor in the additive error. § PRELIMINARIES We denote {1,…,n} by [n] and the input stream length by T, which is the number of time steps. *Continual observation algorithm. An algorithm A in the continual observation model gets an update at every time step t ≤ T, and produces an output a^t=A(x^1,…,x^t) which is a function of x^1 to x^t; A^T(x)=(a^1,a^2,…,a^T) denotes the sequence of outputs at all time steps. A randomized algorithm A is (ϵ,δ)-differentially private ((ϵ,δ)-dp) if for all S∈range(A^T) and all x,y neighboring [A^T(x)∈ S]≤ e^ϵ[A^T(y)∈ S]+δ. If δ=0 then A is ϵ-differentially private (ϵ-dp). The Laplace distribution centered at 0 with scale b is the distribution with probability density function b(x)=(2b)^-1·exp(-|x|/b). We use X∼(b) or just (b) to denote a random variable X distributed according to b(x). In our definitions below, we use χ to represent a generic universe of elements. Let f:χ→ℝ^k. The L_p-sensitivity Δ_p is defined as max_x∈χ,y∈χ, x∼ y||f(x)-f(y)||_p, where x∼ y denotes that x and y are neighbouring. [Theorem 3.6 in <cit.>: Laplace Mechanism] Let f be any function f:χ→ℝ^k with L_1-sensitivity Δ_1. Let Y_i∼(Δ_1/ϵ) for i∈[k]. The mechanism defined as A(x)=f(x)+(Y_1,…,Y_k) satisfies ϵ-differential privacy. [Laplace Tailbound] If X∼(b), then [|X|≥ t· b]≤ e^-t. 
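As a small illustration of the Laplace mechanism and its tail bound stated above, the sketch below adds Lap(Δ_1/ϵ) noise to every coordinate of a query answer; the function name is illustrative.

    import numpy as np

    def laplace_mechanism(f_value, l1_sensitivity, eps, rng=None):
        # Releases f(x) + Lap(l1_sensitivity / eps) noise per coordinate.
        # By the tail bound, each noise coordinate exceeds (l1_sensitivity/eps) * t
        # in absolute value with probability at most e^(-t).
        if rng is None:
            rng = np.random.default_rng()
        f_value = np.atleast_1d(np.asarray(f_value, dtype=float))
        return f_value + rng.laplace(scale=l1_sensitivity / eps, size=f_value.shape)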
The following fact follows from Theorem A.1 in <cit.>: [Gaussian Mechanism] Let f be any function f:χ→ℝ^k with L_2-sensitivity Δ_2. Let Y_i∼𝒩(0,σ^2) for i∈[k], where σ≥√(2ln(2/δ))Δ_2/ϵ. The mechanism defines as A(x)=f(x)+(Y_1,…,Y_k) satisfies (ϵ,δ)-differentially privacy. [Gaussian tailbound] If X∼𝒩(0,σ^2), then [|X|≥σ√(ln(2/β))]≤β The following facts are respectively given by Theorem 3.16, 3.20 and Corollary 3.21 in <cit.>. [Composition Theorem] Let A_1 be an (ϵ_1,δ_1)-differentially private algorithm A_1:χ→range(A_1) and A_2 an (ϵ_2,δ_2)-differentially private algorithm A_2:χ×range(A_1)→range(A_2). Then B:χ→range(A_1)×range(A_2) defined as B(x)=(A_1(x), A_2(x,A_1(x)) is (ϵ_1+ϵ_2,δ_1+δ_2)-differentially private. [Advanced Composition] Let ϵ,δ,δ'≥ 0. Let A_1 be an (ϵ,δ)-differentially private algorithm A_1:χ→range(A_1) and A_i be (ϵ,δ)-differentially private algorithms A_i:χ×range(A_i-1)→range(A_i), for 2≤ i≤ k. Then the composition B:χ→range(A_1)×…×range(A_k) defined as B(x)=(A_1(x), A_2(x,A_1(x)),…, A_k(x,A_k-1(x))) is (ϵ',kδ+δ')-differentially private, where ϵ'=√(2kln(1/δ'))ϵ+kϵ(e^ϵ-1). Let ϵ^*,δ,δ'≥ 0 and δ',ϵ^*<1. Let A_1,…,A_k be as in Fact <ref> with ϵ=ϵ^*/(2√(2kln(1/δ'))). Then the composition B (defined as in Fact <ref>) is (ϵ^*,kδ+δ')-differentially private. § ITEM-LEVEL ALGORITHMS IN GENERAL MODEL In this section, we give algorithms which work for any input sequence in the , and thus also for input sequences that fulfill the conditions of the . The upper bounds on the additive error for ϵ-differential privacy match the lower bounds in <ref>, except for the log (T/β) factor in the case where K> T. §.§ Known Total Flippancy We prove Theorem <ref> in this section. We give some intuition first on Algorithm <ref>. The algorithm works by iteratively checking if the true number of distinct elements currently present (called Q) is “far” from the current output of our algorithm (called ) using a sparse vector technique (SVT) instantiation. We start the algorithm by estimating at the beginning of the stream (line <ref>). Then, we keep outputting , while we track the difference between and the true number of distinct elements Q (line <ref>). Once there has been a significant change, we update the output (line <ref>). There are two parameters of interest here. One is the number of times we update the output: we abort after S_K updates happen (line <ref>). The other is the parameter , which determines how large the current error needs to be such that we satisfy the condition in line <ref>. The parameter S_K goes into the error from composition, while the parameter directly goes into the additive error bound. The goal is to balance the error accumulated between updates (which is roughly ), and the error from updating privately (which is roughly S_K for ϵ-differential privacy, and roughly √(S_K) for (ϵ,δ)-differential privacy due to composition). Additionally, we want to make sure our algorithm does not abort before having processed the entire stream. We show that every time SVT returns “yes", the total flippancy in the stream has increased by at least Ω(). Since we know the total flippancy is bounded by K, in order to make sure that we do not abort preemptively, we choose S_K such that S_K·≈ K. Balancing the two error terms yields an additive error of approximately √(K) for ϵ-differential privacy, and K^1/3 for (ϵ,δ)-differential privacy. Let d and T be non-zero integers, let β>0, and let K be an upper bound on the total flippancy which is given. 
Let T be a known upper bound on the number of time steps. Then there exists * an item-level ϵ-differentially private algorithm for the problem in the general model with error at most O(min(d,K,√(ϵ^-1Kln (T/β)),ϵ^-1Tlog(T/β)) at all time steps with probability at least 1-β for ϵ>0; * an item-level (ϵ,δ)-differentially private algorithm for the problem in the general model with error O(min(d,K,(ϵ^-2Kln(1/δ)ln^2(T/β))^1/3,. . ϵ^-1√(Tln(1/δ)log(T/β))) at all time steps with probability at least 1-β, for any 0<δ<1 and 0<ϵ<1. The O(min(d,K)) bound follows from the fact that the algorithm that outputs 0 at every time step is ϵ-differentially private and has error at most min(d,K) for any ϵ. The third error bounds in the minimum for Theorem <ref> are achieved by Algorithm <ref>, as shown below. Since we assume here all parameters are known, one can compute the minimum of the three bounds and choose the algorithm accordingly. The fourth bound in Theorem <ref> follow by a direct application of the Laplace mechanism Fact <ref> with Δ_1=T resp. Gaussian mechanism Fact <ref> with Δ_2 = √(T). The algorithm for our third bound, given in Algorithm <ref>, is based on the sparse vector technique, where is a parameter dependent on K that we choose suitably below. We omit the proof of the following lemma, since it follows from well-known techniques (Sparse Vector Technique <cit.>, Laplace mechanism (Fact <ref>) and composition theorems (Facts <ref> and <ref>)). For δ=0 and any ϵ>0, Algorithm <ref> is ϵ-differentially private. For 0<ϵ<1 and 0<δ<1, Algorithm <ref> is (ϵ,δ)-differentially private. We show the claimed accuracy bound using the following lemma. For δ=0, for any time step t before the algorithm aborts, we have that the maximum error up to time t is at most O(ϵ^-1S_Kln(T/β)). Setting =√(Kϵ/(18ln(T/β)))+1, with probability at least 1-β, Algorithm <ref> does not abort before having seen the entire stream, and has error at most O(√(ϵ^-1Kln (T/β))+ϵ^-1ln(T/β)). For δ>0, for any time step t before the algorithm aborts, we have that the maximum error up to time t is O(ϵ^-1√(S_Kln(1/δ))ln(T/β)). Setting =(Kϵ/36 √(ln(1/δ))ln(T/β))^2/3+1, with probability at least 1-β, Algorithm <ref> does not abort before having seen the entire stream, and has error at most O((ϵ^-2Kln(1/δ)ln^2(T/β))^1/3. .+ϵ^-1√(ln(1/δ))ln(T/β)). Note that at every time step t in Algorithm <ref>, we set Q=∑_i=1^d f^t(_i). Let α=(8/ϵ_1)ln(2T/β)= (1/2) ·Thresh. By Laplace tailbounds (Fact <ref>), at every time step t: (a) |τ_ℓ|≤ (2/ϵ_1)ln(2T/β) = α/4 with probability at least 1-β/(2T), where ℓ is the value of variable count at time step t, and (b) |μ_t|≤ (4/ϵ_1)ln(2T/β)= α/2 with probability at least 1-β/(2T). Thus, with probability ≥ 1-β, we have at all time steps t simultaneously: (i) Whenever the condition in line <ref> is true at time t, then |-∑_i∈[d]f^t(_i)|> Thresh-3α/4=5α/4, and (ii) Whenever the condition in line <ref> is false at time t, then |-∑_i∈[d]f^t(_i)| ≤Thresh+3α/4 <3α. Further, the random variable ν_ℓ for ℓ∈[] is distributed as Lap(1/ϵ_1) and is added to ∑_i∈[d]f^t(_i) at every time step t where is updated. By the Laplace tail bound (Fact <ref>), ν_ℓ is bounded for all ℓ∈[] by ϵ_1^-1ln(/β)≤α/8 with probability at least 1-β. Altogether, all of these bounds hold simultaneously with probability at least 1-2β. We condition on all these bounds being true. Assume the algorithm has not terminated yet at time t and let be the value of variable at the beginning of time t. 
Let p_ℓ be the last time step at which the value of was updated. It holds that |- ∑_i∈[d]f^p_ℓ_i(x)| = |ν_ℓ|≤α/8. If the condition in line <ref> is true at time t, then |∑_i∈[d]f^p_ℓ_i(x)-∑_i∈[d]f^t(_i)| ≥|∑_i∈[d]f^t(_i)-| - |- ∑_i∈[d] f^p_ℓ_i(x) | ≥ 5α/4-α/8 = 9α/8. Thus, between two time steps where the value of is updated, there is a change of at least 9α/8 in the sum value, i.e., the value of f^t(x_i) has changed at least once for ≥ 9α/8 different items i. Since K=∑_i=1^∑_t=2^T (f^t(x_i)≠ f^t-1(x_i)), to guarantee (under the noise conditions), that the algorithm does not terminate before we have seen the entire stream, it suffices to choose where >K/(9α/8). For δ=0, we have α=(8/ϵ_1)ln(2T/β)=(16/ϵ)ln(2T/β), thus we have to choose to be at least Kϵ/(18ln(2T/β)). Choosing = ⌊√(Kϵ/(18ln (2T/β)))⌋ +1 fulfills this condition. A similar calculation show that for δ>0, choosing =(Kϵ/36√(ln(1/δ))ln(T/β))^2/3+1 fulfills this condition. Now consider any time step t and let be the output at time t, i.e., the value after processing time step t. If the condition in line <ref> is false, we showed above that |- ∑_i∈[d]f^t(_i)|<3α. If the condition is true at time t, we have = ∑_i∈[d]f^t(_i)+ν_ℓ for some ℓ∈[], and, thus, |- ∑_i∈[d]f^t(_i)|≤α/8<α. For δ=0, we have α= (8/ϵ_1)ln(2T/β) =O(√(ϵ^-1Kln (T/β))ln(T/β)+ϵ^-1ln(T/β)). Plugging in =(Kϵ/36√(ln(1/δ))ln(T/β))^2/3+1 yields the final bound for δ>0. To finish the proof of Theorem <ref>, note that if ϵ^-1ln(T/β)>√(ϵ^-1Kln(T/β)), then √(ϵ^-1Kln(T/β))>K, which can be seen by multiplying both sides of the inequality with √(K)/√(ϵ^-1ln(T/β)). Thus the upper bound min(d,K,√(ϵ^-1Kln(T/β))) holds for δ=0. Also, if ϵ^-1√(ln(1/δ))ln(T/β)>(ϵ^-2Kln(1/δ)ln^2(T/β))^1/3, then ϵ^-1√(ln(1/δ))ln(T/β)>K, which can be seen by first cubing the inequality and then dividing by ϵ^-2ln(1/δ)ln^2(T/β). Thus, for δ > 0, the upper bound of min(d,K, (ϵ^-2Kln(1/δ)ln^2(T/β))^1/3) holds. §.§ Generalizations We now argue about Theorem <ref>. Let Q be a real-valued function on input streams from {-1,0,1} and let Q^t=Q(x_1,…,x_t). Further, let Q be such that 1.) for any x and y which are neighboring, we have |Q^t(x)-Q^t(y)|≤ 1 for all time steps t, and 2.) ∑_t=1^T|Q^t(x)-Q^t-1(x)|≤ K. The first bound from Theorem <ref> is achieved by an algorithm that never updates the output, and the third bounds for ϵ and (ϵ,δ)-differential privacy are obtained by the Laplace and Gaussian mechanisms, respectively. The second bound for both ϵ and (ϵ,δ)-differential privacy is obtained by Algorithm <ref> by setting Q=Q^t(x) at every time step t. The proofs follow by exchanging ∑_i∈[d] f^t(x_i) by Q^t(x) in the proofs of Lemma <ref> and <ref>. § A CONNECTION BETWEEN THE GENERAL MODEL UNDER EVENT-LEVEL PRIVACY AND THE “LIKES"-MODEL UNDER ITEM-LEVEL PRIVACY Our bounds from Theorems <ref>, <ref>, and <ref> as well as the bounds from <cit.> imply that under item-level privacy, the and the are roughly equally hard: all upper bounds hold for the and all lower bounds hold for the , and the bounds are tight up to a log T factor. However, under event-level privacy, the is significantly easier than the general model: It can be solved via continual counting on the difference sequence of the true output, which gives error polylogarithmic in log T. 
This is possible because for event-level privacy in the , the difference sequence of the output (i.e., the difference between the true output value of the current and the preceding time step) has ℓ_∞-sensitivity 1 for event-level privacy, but for item-level privacy, the sensitivity can be as large as T. In the , there are no better upper bounds known for event-level differential privacy than for item-level differential privacy, and the upper and lower bounds from <cit.> for (ϵ,δ)-differential privacy for the event-level setting in the leave a polynomial (in T) gap, in the case where the maximum flippancy w_x ∈ (T^1/2,T^2/3): In that case, ignoring polynomial factors in ϵ^-1, log(1/δ), and log T, the lower bound of <cit.> is Ω(T^1/4), while their algorithm gives an additive error of O(T^1/3). Specifically, finding the best achievable error for event-level privacy in the is explicitly posed as an open question in <cit.>. We resolve this question for a large class of algorithms, called γ-output-determined algorithms. All known algorithms for this problem in any model are 0-output-determined. Specifically, we show that for γ-output-determined algorithms our lower bounds and the lower bounds from <cit.> for item-level privacy in the basically carry over to event-level privacy in the . It follows that our algorithm and the algorithm from <cit.> for event-level privacy in the are tight up to a factor that is linear in log T within the class of output-determined algorithms. Note that our reduction works both for the ϵ-differential privacy as well as for (ϵ,δ)-differential privacy and we give the corresponding lower bounds in Theorems <ref> and <ref>. In the following, we denote by (x) the stream of true answers to the problem on stream x. Let γ≥ 0. An algorithm for the problem is said to be γ-, if for all inputs x and y such that (x)=(y) and any S∈range() we have: ((x)∈ S) ≤((y)∈ S) + γ Let ϵ > 0, δ≥ 0 and γ≥ 0. Let _1 be an event-level, (ϵ,δ)-differentially private, γ- algorithm for that works in the and has error at most α for streams of length T+1 with probability 1-β. Then there exists an item-level, (2ϵ,(1+e^ϵ)δ + e^ϵγ)-differentially private algorithm _2 for that works in the , and has error at most α for streams of length T with probability 1-β. We describe algorithm _2, that is item-level (2ϵ,(1+e^ϵ)δ+e^ϵγ)-dp in the , derived from a γ- algorithm _1 which is event-level, (ϵ,δ)-dp in the : Let x be an input for in the of length T, i.e., x is such that ∑_t'≤ tx_i^t' can only take the values 0 or 1, for any i∈[d] and t∈[T]. Let x_0=0^d x, i.e., we attach a d-dimensional all-zero vector before x, and define (_2(x))^t=(_1(x_0))^t+1 for all t∈[T] (note that _1 can take inputs from the ). We now show that _2 is item-level (2ϵ,(1+e^ϵ)δ+e^ϵγ)-differentially private. Let x and y be two item-level neighbouring inputs in the . That is, there exists an item i such that the streams x_i and y_i may be completely different, while x_j=y_j for all j≠ i. Additionally, since we are in the , for any time step t, ∑_t'≤ t x_i^t'∈{0,1} and ∑_t'≤ t y_i^t'∈{0,1}. Next, we define input streams z and w in the where (z)=(w), z is event-level neighbouring to x_0, and w is event-level neighbouring to y_0. 
Since _1 is event-level (ϵ,δ)-dp and works for the , we then have for any S∈range(_2) [_2(x)∈ S] =[(_1(x_0))_t=2^T+1∈ S] ≤ e^ϵ[(_1(z))_t=2^T+1∈ S]+δ ≤ e^ϵ[(_1(w))_t=2^T+1∈ S]+δ +γ ≤ e^2ϵ[(_1(y_0))_t=2^T+1∈ S]+(1+e^ϵ)δ + e^ϵγ = e^2ϵ[_2(y)∈ S]+(1+e^ϵ)δ + e^ϵγ, where the second inequality holds as _1 is γ-output-determined. To define such z and w, let -e_i be the vector such that -e_i(j)=0 for all j≠ i and -e_i(i)=-1. Then z=-e_i x and w=-e_i y. Note that z and w are valid input streams for the , while they are not valid for the . Clearly, z is event-level neighbouring to x_0, and w is event level neighbouring to y. Recall that (z)^t=∑_j=1^(∑_t'≤ tz_j^t>0). Since ∑_t'≤ t x_i^t∈{0,1} for all t∈[T] we have ∑_t'≤ t z_i^t≤ 0 for all t∈[T+1]. By the same argument, we have ∑_t'≤ t w_i^t≤ 0 for all t∈[T+1]. Since z and w only differ in the ith coordinate, which never contributes to the value as it is never 1, we have (z)=(w). We are left with analyzing the error of the two algorithms. For this, note that by definition of x_0, we have (x_0)^t+1=(x)^t. Thus, running _2 on x gives the same error as running _1 on x_0. In particular, for any algorithm, Theorem <ref> implies that all lower bounds on the error for the problem under item-level differential privacy which hold for the (and thus, all lower bounds for under item-level differential privacy shown in this paper in Theorem <ref> and in <cit.>), carry over to event-level differential privacy in the . This means that if there is an algorithm achieving a better error than the bounds stated in Theorem <ref> and in <cit.> for event-level differential privacy in the , it cannot be γ- for γ = O(δ), i.e., it must be such that it does not only depend on the number of distinct elements at any given time step. § ITEM-LEVEL LOWER BOUNDS IN THE “LIKES"-MODEL In the following we show lower bounds for solving under item-level differential privacy, and in the . The lower bounds also apply to the . In <ref>, we showed a complementing upper bound which holds in the , even if K is unknown to the algorithm. Let d and T > 4 be non-negative integers and let ϵ > 0. * Let L≥ 8 be a non-negative integer such that L≤ dT. There exists an input stream x of d-dimensional vectors from {-1,0,1}^d, which is valid in the with multiple updates per time step, with length T and flippancy K with min(3L/8, T/4-1) ≤ K ≤min(L, dT/4) such that any ϵ-differentially private algorithm to the problem with item-level privacy with error at most α at all time steps with probability at least 2/3 must satisfy α=Ω(min(d,L,ϵ^-1T, √(ϵ^-1Lmax(ln (T/L) ,1))))=Ω(min(d,K,ϵ^-1T, √(ϵ^-1Kmax(ln (T/K) ,1)))). * Let L ≥ 8 be a non-negative integer such that L≤ T. There exists an input stream x of d-dimensional vectors from {-1,0,1}^d, which is valid in the with multiple updates per time step, with length T, flippancy K with L/16 ≤ K ≤min(L,T/4), and with ||x^t||_1=1 for all t (i.e., each update modifies at most one item) such that any ϵ-differentially private algorithm to the problem with item-level privacy with error at most α at all time steps with probability at least 2/3 must satisfy α=Ω(min(d,K,√(ϵ^-1Kln (T/K))). Let d, T, and L be as given in the theorem statement. Assume there is an ϵ-differentially private algorithm 𝒜 for the problem with error at most α at all time steps with probability at least 2/3. If α>d/2, then the error is Ω(d). Also, if α > L/8, then α = Ω(L). Thus, in the following, we consider the case α≤ d/2 and α≤ L/8. Defining m=⌊ 2α⌋, it follows that m ≤min(d,L/8). 
*Singleton updates We first find T' ≤ T and L' ≤ L such that 4m divides T' and m divides L'. If this is not the case for T and L, then pick parameters T' and L' such that (i) 4m divides T' and m divides L', (ii) Δ = T-T' ≤ 4m < L/2 ≤ T/2 (i.e. T' = Θ(T)) and (iii) 0 ≤ L - Δ - L' ≤ m. This implies that L' ≥ 7L/8 - Δ≥ 3L/8. Furthermore, since L ≤ T, we have 0≤ L - Δ - L' = L- (T-T') - L' = T' - L' - (T-L) ≤ T'-L', i.e., L'≤ T'. We use T' and L' in the proof below to construct a sequence of length T' fulfilling the statements of the theorem. To complete the proof of the theorem, we append to the sequence T-T' many all-zero vectors to guarantee that the stream has length T. Note that appending these “blank” operations to the sequence does not invalidate the statements of the theorem.

We now construct a set of input sequences of length T' with flippancy K:=min(L', T'/4) and use them to prove a lower bound for α of Ω(min(K ln(T'/K), √(ϵ^-1K ln(T'/K)))). Combined with the above case distinctions giving lower bounds on α of Ω(d) and Ω(L), the fact that K =Θ(L) and that T' = Θ(T), this implies that α = Ω(min(d,K, √(ϵ^-1K(ln(T/K)+1)))).

Let k := min(L',T'/4)/m be a positive integer. Partition the timeline into T'/m blocks of length m, namely B_1=[1,m], B_2=[m+1,2m], …. Now, for any subset of blocks J=(j_1,…,j_k) with 1≤ j_1<j_2<… < j_k≤ T'/m, define an input sequence x(J) such that for any item i∈[m] we insert element i in the ith time step of every odd block of J (i.e. the first, third, ... block in J), and delete it again at the ith position of every even block of J (i.e. the second, fourth, ... block in J). More formally, for any item i∈[m], set x(J)_i^t=1 for all t=B_{j_{2p-1}}[i]=(j_{2p-1}-1)m+i, p=1,…,⌈ k/2 ⌉, and set x(J)_i^t=-1 for all t=B_{j_{2p}}[i]=(j_{2p}-1)m+i, p=1,…,⌊ k/2 ⌋. In all other time steps t, no updates are performed, i.e., x(J)^t is an all-zero vector. Thus, for every i∈[m], we have f^t(x_i)=1 for all time steps t∈[j_{2p-1}m,(j_{2p}-1)m], for all p≤⌈ k/2⌉, and f^t(x_i)=0 for all time steps t∈[j_{2p}m,(j_{2p+1}-1)m]. For any item m < i ≤ d, we have f^t(x_i)=0 for all t∈[T']. Furthermore, items i with i >m (if they exist) are never inserted or deleted. In total, there are k=min(L',T'/4)/m updates per item i∈[m], thus exactly K updates in total, and, hence, the total flippancy is K = min(L', T'/4). If K = L', then L ≥ K ≥ 3L/8. If K = T'/4, then L'≤ T' implies that L≥ L' ≥ K = T'/4 ≥ L'/4≥ 3L/32 ≥ L/16. Thus in either case K = Θ(L'). Furthermore K≤ T'/4 ≤ T/4.

Now let E_J, for J=(j_1,…,j_k) with 1≤ j_1<j_2<… < j_k≤ T'/m, be the set of output sequences where 𝒜 outputs (i) a value of m/2 or larger for all time steps t∈[j_{2p-1}m,(j_{2p}-1)m] with 1≤ p≤⌈ k/2⌉, and (ii) smaller than m/2 for all time steps t such that (a) t < j_1 m or (b) t ∈[j_{2p}m,(j_{2p+1}-1)m] for some 0≤ p< ⌈ k/2⌉ or (c) t ≥ j_k m. Note that for an input sequence x(J) every output sequence where 𝒜 has additive error smaller than m/2 must belong to E_J. As the algorithm is correct with probability at least 2/3, [𝒜(x(J))∈ E_J]≥ 2/3. For item-level differential privacy, two input sequences are neighboring if they differ in the data of at most one item. As two input sequences x(I) and x(J) with I ≠ J differ in the data of at most m items, it follows by group privacy that [𝒜(x(J))∈ E_I]≥ e^{-mϵ}· 2/3 for any J=(j_1,…,j_k) with 1≤ j_1<j_2<… < j_k≤ T'/m and I=(i_1,…,i_k) with 1≤ i_1<i_2<… < i_k≤ T'/m.
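For concreteness, the hard instance x(J) described above can be generated mechanically from the chosen set of blocks J. The following is a minimal illustrative sketch (the function and variable names are ours, not the paper's; the stream is represented as a plain list of d-dimensional update vectors):

```python
from typing import Sequence

def hard_instance(J: Sequence[int], m: int, T_prime: int, d: int):
    """Builds the stream x(J) from the construction above: the timeline is
    split into T'/m blocks of length m; within the p-th chosen block the
    items 1..m are inserted (p odd) or deleted (p even), item i at the
    i-th time step of that block. Blocks in J are 1-indexed."""
    assert T_prime % m == 0 and all(1 <= j <= T_prime // m for j in J)
    x = [[0] * d for _ in range(T_prime)]           # all-zero updates by default
    for p, j in enumerate(sorted(J), start=1):      # p-th chosen block
        sign = 1 if p % 2 == 1 else -1              # insert in odd, delete in even blocks
        for i in range(1, m + 1):                   # item i is updated at block position i
            t = (j - 1) * m + i                     # 1-indexed time step B_j[i]
            x[t - 1][i - 1] = sign
    return x

# Example: T'=12, m=2, J=(1,3,5): items 1 and 2 are inserted in block 1,
# deleted in block 3 and re-inserted in block 5, giving flippancy m*|J| = 6.
stream = hard_instance([1, 3, 5], m=2, T_prime=12, d=4)
```

Each call produces a stream with exactly m·|J| = mk = K single-item updates, matching the flippancy accounted for above.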
Also note that the sets of output sequences E_J for distinct J=(j_1,…,j_k) are disjoint, since for each multiple of m (i.e., the end of a block), it is clearly defined whether the output is at least m/2 or smaller than m/2, and as such the values j_1,…,j_k can be uniquely recovered. Thus, there are \binom{T'/m}{k} disjoint events E_J and the sum over all J of the probabilities that the algorithm with input x(I) outputs an element of E_J is at most 1. More formally, we have: 1≥\binom{T'/m}{k}· e^{-mϵ}· 2/3 ≥ ((T'/m)/k)^k· e^{-mϵ}· 2/3 = T'^{K/m}/K^{K/m}· e^{-mϵ}· 2/3, where the last equality holds since k=K/m. This gives m^2 +ϵ^-1 m ln(3/2) ≥ϵ^-1Kln(T'/K), which implies m = Ω(min(K ln(T'/K), √(ϵ^-1Kln(T'/K)))). Note that since T' ≥ 4K, ln(T'/K)≥ln(4)>1. This completes the proof of the singleton-update case.

*Multiple updates We first find T' ≤ T and L' ≤ L such that 4 divides T' and m divides L'. If this is not the case for T and L, then pick parameters T' and L' such that (i) 4 divides T' and m divides L', (ii) Δ = T-T' ≤ 4 (i.e. T' = Θ(T)) and (iii) Δ m ≤ L - L' ≤ (Δ + 1) m. This implies that L' ≥ L - (Δ+1)m ≥ L - 5m ≥ 3L/8. We use T' and L' in the proof below to construct a sequence of length T' fulfilling the statements of the theorem. To complete the proof of the theorem, we append to the sequence T-T' many all-zero vectors to guarantee that the stream has length T. Note that appending these “blank” operations to the sequence does not invalidate the statements of the theorem.

The idea is similar to above, only we do not define blocks, but directly choose k:=min(L'/m,T'/4) time steps in which all items in [m] are updated. Thus the flippancy K will equal mk. More precisely, we construct the following set of input sequences. For any I=(t_1,…,t_k) with 1≤ t_1<t_2<…<t_k≤ T', we define an input sequence x(I) as follows: For any item i∈[m], set x(I)_i^{t_j}=1 for all odd j, and x(I)_i^{t_j}=-1 for all even j. All other coordinates are set to 0. In total, there are k updates per item in [m], thus, exactly K updates in total, i.e., the total flippancy equals K = min(L', mT'/4). This implies that min(3L/8, T/4-1) ≤ K ≤min(L, dT/4).

Now, let E_I, for I=(t_1,…,t_k) with 1≤ t_1<t_2<…<t_k≤ T', be the set of output sequences with a value of m/2 or larger at all time steps t∈[t_{2p-1},t_{2p}) for some 1≤ p≤⌈ k/2⌉, and a value smaller than m/2 at all time steps t where (a) t< t_1 or (b) t∈[t_{2p},t_{2p+1}) for some 0≤ p<⌈ k/2⌉. Note that for input sequence x(I) every output sequence where 𝒜 has an additive error smaller than m/2 must be in E_I. As the algorithm is correct with probability at least 2/3, [𝒜(x(I))∈ E_I]≥ 2/3. As two input sequences x(I) and x(J) with I ≠ J, where J = (j_1, … , j_k) with 1≤ j_1<j_2<… < j_k≤ T', differ in the data of at most m items, it follows by group privacy that [𝒜(x(I))∈ E_J]≥ e^{-mϵ}· 2/3 for any such J. Let J=(j_1,…,j_k) with 1≤ j_1<j_2<…<j_k≤ T'. Note that the events E_I and E_J for any I ≠ J are disjoint, since in the event E_I it is clearly defined for every time step whether the output is at least m/2 or smaller than m/2, and from that the set I can be uniquely recovered. Thus, there are \binom{T'}{k} disjoint events E_J and the probability that with input x(I) the algorithm outputs any one of them is at most 1. Thus we have 1 ≥\binom{T'}{k}· e^{-mϵ}· 2/3 ≥ (T'/k)^k· e^{-mϵ}· 2/3 = T'^{K/m}/(K/m)^{K/m}· e^{-mϵ}· 2/3, where the last equality holds since k=K/m. Next we consider two cases, the first one resulting in two different lower bounds on m and the second one giving a third lower bound on m.
The combination of these three lower bounds then gives the claimed bound above of α ≥ m/2 = Ω(min(ϵ^-1T', √(ϵ^-1K max(ln(T'/K), 1)), K max(ln(T'/K),1))).

Case 1: L'< mT'/4. In this case K = L' and we have m^2ϵ + m ln(3/2)≥ Kln(T'm/K)≥ Kmax(ln(T'/K),1), where the last inequality holds since K ≤ mT'/4, i.e., ln(T'm/K) ≥ln(4) > 1. Hence m=Ω(min(√(ϵ^-1K max(ln(T'/K),1)), K max(ln(T'/K),1))). As K = L' = Θ(L) it follows that m=Ω(min(√(ϵ^-1Lmax(ln(T'/L),1)), L max(ln(T'/L),1))).

Case 2: L'≥ mT'/4. This implies that K=mT'/4 and, thus, that there are updates in at least T'/4 many time steps. In this case Inequality <ref> can be reformulated as follows: 1≥ T'^{K/m}/(K/m)^{K/m}· e^{-mϵ}· 2/3 = 4^{T'/4}· e^{-mϵ}· 2/3 = e^{ln(4) T'/4 - mϵ}· 2/3, which implies that Inequality <ref> can only be satisfied if m = Ω(ϵ^-1 T').

These two cases show that α = Ω(min(ϵ^-1T', √(ϵ^-1L max(ln(T'/L), 1)), L max(ln(T'/L),1))) for the above input sequence. Combined with the above lower bounds on α of Ω(min(d, L)) and the fact that T'= Θ(T), it follows that α = Ω(min(d, L, ϵ^-1T, √(ϵ^-1L max(ln(T/L), 1)))).

§ LOWER BOUNDS FOR APPROXIMATE DIFFERENTIAL PRIVACY

In this section, we adapt the lower bounds from <cit.> for item-level differential privacy to our parameter scheme.

Let ϵ,δ∈(0,1].
* Let K, T be sufficiently large parameters. There exists a dimension d and an input stream x of d-dimensional vectors from {-1,0,1}^d of length T and with flippancy at most K which is valid in the “likes”-model, such that any item-level, (ϵ,δ)-differentially private algorithm to the problem with error at most α at all time steps with probability at least 0.99 must satisfy α=Ω(min(√(T)/(ϵlog T),(Kϵ)^{1/3}/(ϵlog (Kϵ)))).
* Let K and T be sufficiently large parameters satisfying K≤ T. There exists a dimension d and an input stream x of d-dimensional vectors from {-1,0,1}^d of length T and with flippancy at most K which is valid in the “likes”-model and satisfies ||x^t||_1=1 for all t, such that any item-level, (ϵ,δ)-differentially private algorithm to the problem with error at most α at all time steps with probability at least 0.99 must satisfy α=Ω(K^{1/3}/(ϵlog K)).

The reduction in <cit.> is based on a lower bound for the 1-way marginals problem. In that problem, the data set y is a table consisting of n rows and m columns, where every entry is in {0,1}. Two data sets y and y' are neighbouring if they differ in at most one row. The goal is to estimate the average column sums, i.e., the vector (∑_i=1^n y[i,j])_j∈[m]. The following lower bound holds for estimating 1-way marginals under (ϵ,δ)-differential privacy:

Let ϵ∈ (0,1], γ∈(0,1), and m,n∈ℕ, and δ=o(1/n). Any algorithm which is (ϵ,δ)-differentially private and has error at most γ with probability at least 0.99 satisfies n=Ω(√(m)/(γϵlog m)).

We start by arguing about <ref>. For this case, our example stream is exactly the same as in <cit.>, given in Algorithm 5 in <cit.> (for a formulation using our slightly different notation see Algorithm <ref>). They give a reduction from the 1-way marginals problem: For any instance ℐ of the 1-way marginals problem with n rows and m columns, there is an instance C(ℐ) of with T=2mn, such that if ℐ and ℐ' are neighbouring, then C(ℐ) and C(ℐ') are item-neighbouring. Further, if we can solve C(ℐ) within error α, we can solve ℐ within error α/n. It follows by Lemma <ref> that α=Ω(min(√(m)/(ϵlog m),n)). In the instance they constructed, d=n, i.e. each row in the 1-way marginals problem gives an item in the problem. Further, the total flippancy K can be as large as 2mn for worst case inputs.
Thus, in order to apply the reduction, we need 2mn≤ K≤ T. Given parameters K≤ T, we choose m=K/(2n). The lower bound Ω(min(√(m)/(ϵlog m),n)) translates to Ω(min(√(K/(2n))/(ϵlog(K/(2n))),n)). For n= K^{1/3}/(2(ϵlog K)^{2/3}), we have √(K/(2n))/(ϵlog(K/(2n)))≥ K^{1/3}ϵ^{1/3}log^{1/3} K/(ϵlog (K^{1/2}))=Ω(K^{1/3}log^{1/3} K/(ϵ^{2/3}log K))=Ω(n). Thus, we get α=Ω(K^{1/3}log^{1/3} K/(ϵ^{2/3}log K)).

For <ref>, where we allow general updates, we have to slightly modify the example in <cit.>: namely, in their Algorithm 5, we collapse every one of their vectors z^(j), j=1,…,m, into two time steps, one time step for all insertions corresponding to column j, and one time step for all deletions corresponding to column j. See Algorithm <ref>. We then again get a reduction with the same properties as before, except that T=2n and K can be as large as 2mn. Now, the analysis from <cit.> can be repeated with our T taking the role of w_x in <cit.>, and our K taking the role of T in <cit.>.

*Funding M. Henzinger: This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (MoDynStruct, No. 101019564) and the Austrian Science Fund (FWF) grant DOI 10.55776/Z422, grant DOI 10.55776/I5982, and grant DOI 10.55776/P33775 with additional funding from the netidee SCIENCE Stiftung, 2020–2024. T. A. Steiner: This work was supported by a research grant (VIL51463) from VILLUM FONDEN.

§ UNKNOWN TOTAL FLIPPANCY

The algorithms from Section <ref> can be easily extended to the case where the total flippancy K is not known beforehand, at the cost of polylog(K) factors in the error bound, as shown by Algorithm <ref> and the lemmata below. The fact that K is not known causes no serious problem, as the algorithm repeatedly “guesses” K and then runs the algorithm from earlier with the current guess.

For any 0 < ϵ < 1 and 0 ≤δ < 1, Algorithm <ref> is (ϵ,δ)-differentially private.

By Lemma <ref>, the jth instance of Algorithm <ref> is (ϵ_j,δ_j)-differentially private. Since ∑_j=1^∞ϵ_j=ϵ and ∑_j=1^∞δ_j=δ, by Fact <ref>, Algorithm <ref> is (ϵ,δ)-differentially private.

For δ=0, the error of Algorithm <ref> is at most O(ln K√(ϵ^-1K ln(Tln K /β))+ϵ^-1ln^2 K ln(T ln K /β)). For δ>0, the error of Algorithm <ref> is at most O((ϵ^-1Kln^2 K ln(ln K /δ)ln^2(Tln K /β))^{1/3}+ϵ^-1ln^2 K√(ln (ln K /δ))ln(Tln K /β)).

Let j_l be the value of variable j after the last element in the stream is processed. For any j<j_l, note that by Lemma <ref> and the choice of S_{K_j}, with probability at least 1-β_j the algorithm does not abort before having seen the entire stream if the total flippancy is at most K_j. Thus, when the algorithm aborts for some j<j_l, we know that the flippancy is at least K_j, and the bound from Lemma <ref> holds for the jth instance of Algorithm <ref> with its respective parameters. Since the algorithm aborts for all j<j_l, we can conclude that the total flippancy of the stream processed by the jth run of Algorithm <ref> is at least K_j. Since ∑_j β_j=β, with probability at least 1-β, (1) the total flippancy K is at least ∑_{j<j_l}K_j=2^j_l-1, and (2) the bound from Lemma <ref> holds for all instances of Algorithm <ref> (with their respective parameters). It follows (a) that K≥ K_{j_l-1}≥ K_j for all j< j_l and (b) j_l=O(ln K). The maximum error over the stream is the maximum error of any instance of Algorithm <ref>.
Since K_j=O(K), ϵ_{j_l}≤ϵ_j and δ_{j_l}≤δ_j for all j ≤ j_l, the final bound is now obtained by plugging K, ϵ_{j_l} = Θ(ϵ/j_l^2) for ϵ, δ_{j_l} = Θ(δ/j_l^2) for δ, and β_{j_l} = Θ(β/j_l^2) for β into the bound from Lemma <ref>, and upper bounding j_l^2 by log^2 K.

One can also obtain the minimum of the bound from <ref> and min(K, T, d) at the cost of an additive ϵ^-1ln^2 K ln(ln K/β) term with a slightly more involved algorithm, which involves choosing to either not update the output or abort if there is a trivial algorithm which performs better for the current estimate of K. If we knew the value of K beforehand, we could choose the best algorithm upfront. Not knowing the value of K makes it slightly more complicated. In the following, we show how to obtain this bound. The full algorithm is given in Algorithm <ref>.

Let d and T be non-zero integers, let β>0. Let T be a known upper bound on the number of time steps. Then there exists
* an item-level ϵ-differentially private algorithm for the problem in the general model with error at most O(min(d,K,ln K√(ϵ^-1Kln (T/β)),ϵ^-1Tlog(T/β))+ϵ^-1ln^2 K ln(ln K/β)) at all time steps with probability at least 1-β, for any ϵ>0, where K is the total flippancy of the input, which is unknown to the algorithm.
* an item-level (ϵ,δ)-differentially private algorithm for the problem in the general model with error at most O(min(d,K,(ϵ^-2Kln^2 Kln(1/δ)ln^2(T/β))^{1/3},ϵ^-1√(Tln(1/δ)log(T/β)))+ϵ^-1ln^2 K ln(ln K/β)) at all time steps with probability at least 1-β, for any 0<δ<1 and 0<ϵ<1, where K is the total flippancy of the input, which is unknown to the algorithm.

Privacy follows since Algorithm <ref> is a composition of A) a post-processing of Algorithm <ref> with parameters ϵ/2 and δ and B) a sequence of Laplace mechanisms such that the jth Laplace mechanism is ϵ_j-differentially private, and ∑_j=1^∞ϵ_j=ϵ/2.

For accuracy, first assume that we do not abort before we have seen the entire stream. The update rules for the parameters t, j and K_j are exactly the same as in Algorithm <ref>. Thus, we have j≤log^2 K and K_j=O(K) for all j used in the algorithm. Next, we condition on 1) the accuracy bound from Lemma <ref> holding for all runs of Algorithm <ref> resp. Algorithm <ref>^* for their respective parameters, and 2) the Laplace noise μ_j∼Lap(1/ϵ_j) in line ... satisfying |μ_j|≤ϵ_j^-1log(1/β_j) for any j. By Lemma <ref>, 1) is true with probability at least 1-∑_j β_j≥ 1-β/2. By the Laplace tail bound Fact <ref>, 2) is true with probability at least 1-∑_j β_j≥ 1-β/2. Thus, 1) and 2) hold together with probability at least 1-β.

Now, consider first the case where δ=0. For any j such that K_j>B_j, we have that the error of the run of Algorithm <ref> in the jth round is bounded by O(√(ϵ_j^-1K_jln (T/β_j))+ϵ_j^-1ln(T/β_j)) by our conditioning. Note that since K_j>B_j=√(ϵ_j^-1K_jln (T/β_j)), we have that ϵ_j^-1ln(T/β_j)<√(ϵ_j^-1K_jln (T/β_j)), which can be seen by multiplying both sides of the inequality with √(ϵ_j^-1K_j^-1ln(T/β_j)). Thus, for any j such that K_j>B_j, the error is O(√(ϵ_j^-1K_jln (T/β_j)))=O(min(K_j,√(ϵ_j^-1K_jln (T/β_j))))=O(min(K,ln K√(ϵ^-1Kln (Tln K/β)))).

For any j such that K_j≤ B_j, we do not update the output until the end of round j. Let t_{j-1} resp. t_j be the time step where the (j-1)st resp. jth copy of Algorithm <ref> or Algorithm <ref>^* aborted. Note that by Lemma <ref>, the flippancy in the interval [t_{j-1}, t_j) is at most K_j. Thus, |∑_i=1^d f^{t_{j-1}}(x_i)-∑_i=1^d f^t(x_i)|≤ K_j for all t∈[t_{j-1}, t_j).
The output at all those time steps is given by ∑_i=1^d f^{t_{j-1}}(x_i)+μ_j, where |μ_j|≤ϵ_j^-1log(1/β_j). By the triangle inequality, the error at any such time step t is therefore at most |∑_i=1^d f^t(x_i)-∑_i=1^d f^{t_{j-1}}(x_i)|+|μ_j| ≤ K_j+ϵ_j^-1log(1/β_j) =O(min(K_j,√(ϵ_j^-1K_jln (T/β_j))))+ϵ_j^-1log(1/β_j) =O(min(K,ln K√(ϵ^-1Kln (Tln K/β)))+ϵ^-1ln^2 K ln(ln K/β)).

Next, consider the case where δ>0. For any j such that K_j>B_j, we have that the error of the run of Algorithm <ref> in the jth round is bounded by O((ϵ_j^-2K_jln(1/δ_j)ln^2(T/β_j))^{1/3}+ϵ_j^-1√(ln(1/δ_j))ln(T/β_j)) by our conditioning. Note that if ϵ_j^-1√(ln(1/δ_j))ln(T/β_j)>ϵ_j^-2K_jln(1/δ_j)ln^2(T/β_j), then this gives us that ϵ_j^-1√(ln(1/δ_j))ln(T/β_j)>K_j, in contradiction to K_j>B_j. Thus, for any j such that K_j>B_j, the error is O((ϵ_j^-2K_jln(1/δ_j)ln^2(T/β_j))^{1/3}) =O(min(K_j,(ϵ_j^-2K_jln(1/δ_j)ln^2(T/β_j))^{1/3})) =O(min(K,(ln^2 K·ϵ^-2Kln(ln K/δ)ln^2(Tln K/β))^{1/3})).

For any j such that K_j≤ B_j, we do not update the output until the end of round j. Let t_{j-1} resp. t_j be the time step where the (j-1)st resp. jth copy of Algorithm <ref> or Algorithm <ref>^* aborted. Note that by Lemma <ref>, the flippancy in the interval [t_{j-1}, t_j) is at most K_j. Thus, |∑_i=1^d f^{t_{j-1}}(x_i)-∑_i=1^d f^t(x_i)|≤ K_j for all t∈[t_{j-1}, t_j). The output at all those time steps is given by ∑_i=1^d f^{t_{j-1}}(x_i)+μ_j, where |μ_j|≤ϵ_j^-1log(1/β_j). By the triangle inequality, the error at any such time step is at most K_j+ϵ_j^-1log(1/β_j).

Lastly, we argue about what happens if we abort and switch to one of the trivial algorithms. Note that this happens exactly when we reach a j such that min(K_j,B_j)>min(d,_T). Note that up to round j-1, by the analysis before, we have that the error is bounded by min(K_{j-1},B_{j-1})+ϵ_{j-1}^-1log(1/β_{j-1})≤min(d,_T)+ϵ_{j-1}^-1log(1/β_{j-1}), since the algorithm did not abort. After we abort, the error of the algorithm is O(min(d,_T)). Thus, the error of the algorithm at any time step is bounded by O(min(d,_T))+ln^2 K·ϵ^-1log(1/β). Since we have that K_j≤ K, we have for δ = 0 that min(d,_T)≤min(K,ln K√(ϵ^-1Kln (Tln K/β))), and for δ > 0 that min(d,_T)≤min(K,(ϵ^-2Kln^2 Kln(1/δ)ln^2(T/β))^{1/3}).

§ THE SPARSE VECTOR TECHNIQUE

The sparse vector technique is based on an algorithm in <cit.> and was described more fully in <cit.>. The version described in Algorithm <ref> is from <cit.> for c=1 (the main difference is that it allows different thresholds for every query).

Algorithm <ref> is ϵ-differentially private.

Algorithm <ref> fulfills the following accuracy guarantees for α=8(ln k + ln(2/β))/ϵ: For any sequence q_1,…,q_k of queries it holds with probability at least 1-β that
* for any t such that a_t= we have q_t(D)≥-α,
* for all t such that a_t= we have q_t(D)≤+α.
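For completeness, here is a minimal sketch of the sparse vector technique for c=1 in its standard AboveThreshold form, allowing a separate threshold per query as in the version discussed above. This is an illustrative implementation under our own naming, not the pseudocode of the cited works: for sensitivity-1 queries the threshold receives Laplace noise of scale 2/ϵ, each query receives fresh Laplace noise of scale 4/ϵ, and the run halts after the first “above” answer.

```python
import numpy as np

def sparse_vector(queries, thresholds, data, epsilon, sensitivity=1.0):
    """AboveThreshold-style sparse vector technique for c = 1: compare noisy
    query values against a noisy threshold and stop after the first "above"
    answer. Returns a list of booleans (True = judged above its threshold)."""
    rng = np.random.default_rng()
    rho = rng.laplace(scale=2.0 * sensitivity / epsilon)      # threshold noise
    answers = []
    for q, thr in zip(queries, thresholds):
        nu = rng.laplace(scale=4.0 * sensitivity / epsilon)   # fresh noise per query
        if q(data) + nu >= thr + rho:
            answers.append(True)     # report "above" and halt: the budget is spent
            break
        answers.append(False)        # "below" answers do not end the run
    return answers

# Example: report the first time step at which a prefix of the stream
# contains at least 3 ones (one threshold query per time step).
data = [1, 0, 1, 0, 0, 1, 1]
queries = [lambda D, t=t: sum(D[: t + 1]) for t in range(len(data))]
print(sparse_vector(queries, thresholds=[3] * len(data), data=data, epsilon=1.0))
```

In the standard analysis this mechanism is ϵ-differentially private, and with probability at least 1-β every “above” answer a_t satisfies q_t(D) ≥ threshold_t - α and every “below” answer satisfies q_t(D) ≤ threshold_t + α for α = O((ln k + ln(1/β))/ϵ), which is the shape of the guarantee stated above.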